Posted to hdfs-user@hadoop.apache.org by yogesh dhari <yo...@live.com> on 2012/11/22 07:53:41 UTC

HADOOP UPGRADE ISSUE

Hi All,

I am trying to upgrade Apache hadoop-0.20.2 to hadoop-1.0.4.
I have set the same dfs.name.dir, etc., in hadoop-1.0.4's conf files as were in hadoop-0.20.2.
Now I am starting DFS and MapReduce using

start-all.sh -upgrade

but the namenode and datanode fail to start.

1) Namenode's logs show:

ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
java.io.IOException: 
File system image contains an old layout version -18.
An upgrade to version -32 is required.
Please restart NameNode with -upgrade option.
.
.
ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException: 
File system image contains an old layout version -18.
An upgrade to version -32 is required.
Please restart NameNode with -upgrade option.


2) Datanode's logs show:

WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Invalid directory in dfs.data.dir: Incorrect permission for /opt/hadoop_newdata_dirr, expected: rwxr-xr-x, while actual: rwxrwxrwx
****( Why are these file-permission warnings showing up? )****

2012-11-22 12:05:21,157 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: All directories in dfs.data.dir are invalid.

Please suggest

Thanks & Regards
Yogesh Kumar

Re: HADOOP UPGRADE ISSUE

Posted by Harsh J <ha...@cloudera.com>.
Make sure /opt/hadoop_newdata_dirr is 755 and not 777, and that it is owned
by the user that runs the DataNode (seems to be 'yogesh'). Once your DNs
are up, MR will fix itself.
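
For example, something along these lines should fix it (path and user taken from your logs; adjust the group if yours differs):

chmod 755 /opt/hadoop_newdata_dirr
chown -R yogesh:yogesh /opt/hadoop_newdata_dirr
ls -ld /opt/hadoop_newdata_dirr    # should now show drwxr-xr-x with owner yogesh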


On Thu, Nov 22, 2012 at 4:05 PM, yogesh dhari <yo...@live.com> wrote:

>  Hey Harsh,
>
> I have took down the instance on NN,
>
> and I started start-all.sh, but it doesn't run all demons.
>
> 1) Log file of DN...
>
> 2012-11-22 15:48:41,817 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode: Invalid directory in
> dfs.data.dir: Incorrect permission for /opt/hadoop_newdata_dirr, expected:
> rwxr-xr-x, while actual: rwxrwxrwx
> 2012-11-22 15:48:41,817 ERROR
> org.apache.hadoop.hdfs.server.datanode.DataNode: All directories in
> dfs.data.dir are invalid.
> 2012-11-22 15:48:41,817 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
> 2012-11-22 15:48:41,830 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
>
> 2) Log file of TT...
>
> ERROR org.apache.hadoop.mapred.TaskTracker: Can not start task tracker
> because java.io.IOException: Call to localhost/127.0.0.1:9001 failed on
> local exception: java.io.IOException: Connection reset by peer
>     at org.apache.hadoop.ipc.Client.wrapException(Client.java:1107)
>     at org.apache.hadoop.ipc.Client.call(Client.java:1075)
>     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
>     at org.apache.hadoop.mapred.$Proxy5.getProtocolVersion(Unknown Source)
>
> 3) Log file of NN...
> .
> .
> .
> .
> 2012-11-22 15:55:52,101 ERROR
> org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
> as:yogesh cause:java.io.IOException: File
> /opt/hadoop-0.20.2/hadoop_temporary_dirr/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
> 2012-11-22 15:55:52,102 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 5 on 9000, call
> addBlock(/opt/hadoop-0.20.2/hadoop_temporary_dirr/mapred/system/
> jobtracker.info, DFSClient_-971904437, null) from 127.0.0.1:54047: error:
> java.io.IOException: File
> /opt/hadoop-0.20.2/hadoop_temporary_dirr/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
> java.io.IOException: File
> /opt/hadoop-0.20.2/hadoop_temporary_dirr/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
>     at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
>     at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:601)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
> 2012-11-22 15:55:56,291 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
>
>
>
> ------------------------------------------------------------------------------------------------------------------------------------------
>
> Terminal's Output
>
> yogesh@yogesh-Aspire-5738:/opt/hadoop-1.0.4$ start-all.sh
> starting namenode, logging to
> /opt/hadoop-1.0.4/libexec/../logs/hadoop-yogesh-namenode-yogesh-Aspire-5738.out
> yogesh@localhost's password:
> localhost: starting datanode, logging to
> /opt/hadoop-1.0.4/libexec/../logs/hadoop-yogesh-datanode-yogesh-Aspire-5738.out
> yogesh@localhost's password:
> localhost: starting secondarynamenode, logging to
> /opt/hadoop-1.0.4/libexec/../logs/hadoop-yogesh-secondarynamenode-yogesh-Aspire-5738.out
> starting jobtracker, logging to
> /opt/hadoop-1.0.4/libexec/../logs/hadoop-yogesh-jobtracker-yogesh-Aspire-5738.out
> yogesh@localhost's password:
> localhost: starting tasktracker, logging to
> /opt/hadoop-1.0.4/libexec/../logs/hadoop-yogesh-tasktracker-yogesh-Aspire-5738.out
>
>
> yogesh@yogesh-Aspire-5738:/opt/hadoop-1.0.4$ jps
> 23297 JobTracker
> 23613 Jps
> 23562 TaskTracker
> 22658 NameNode
> 23205 SecondaryNameNode
>
>
>
> yogesh@yogesh-Aspire-5738:/opt/hadoop-1.0.4$ stop-all.sh
> stopping jobtracker
> yogesh@localhost's password:
> localhost: no tasktracker to stop
> stopping namenode
> yogesh@localhost's password:
> localhost: no datanode to stop
> yogesh@localhost's password:
> localhost: stopping secondarynamenode
>
>
>
>
> Please suggest
>
> Thanks & Regards
> Yogesh Kumar
>
>
>
>
> ------------------------------
> From: harsh@cloudera.com
> Date: Thu, 22 Nov 2012 15:18:14 +0530
>
> Subject: Re: HADOOP UPGRADE ISSUE
> To: user@hadoop.apache.org
>
> Ah alright, yes you can take down the shell instance of NN you've started
> by issuing a simple Ctrl+C, and then start it with the other DNs, SNN in
> the background with the regular start-dfs or start-all command. That is
> safe to do.
>
> P.s. Be sure to finalize your upgrade after adequate testing has been
> done. You can do this at a later time.
>
>
> On Thu, Nov 22, 2012 at 3:00 PM, yogesh dhari <yo...@live.com> wrote:
>
>  Hello Harsh..
>
> Thanks for your suggestion :-), I need more guidance over it.
>
> Terminal through I have run command
> hadoop namenode -upgrade (Say Terminal-1)
>
> terminal-1 got stuck at this point.
>
>
> 12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 9 on 9000: starting
> ""and still cursor is blinking...""
>
> Should I do stop it using ctrl+c command.  If I do so it will kill the
> process.
>
> If I do open new terminal(say Terminal-2) and run JPS it shows
> 6374 NameNode
> 7615 Jps
>
> As you mentioned to run hadoop fs -ls /
> it shows all stored directories..( on Terminal-2 )
>
> Now. Should I kill the process over Termial-1. What should I do to make it
> complete without any loss of data..
>
> I am attaching screen shot..
>
> Please have a look
>
>
> Thanks & Regards
> Yogesh Kumar
>
>
>
>
>
>
>
>
> ------------------------------
> From: harsh@cloudera.com
> Date: Thu, 22 Nov 2012 14:14:00 +0530
> Subject: Re: HADOOP UPGRADE ISSUE
> To: user@hadoop.apache.org
>
>
> If your UI is already up, then NN is already functional. The UI merely
> shows that your upgrade is done but has not been manually finalized by you
> (leaving it open for a rollback if needed).
>
> You could try a simple "hadoop fs -ls /" to see if NN is functional, run
> some other regular job based tests of yours, and then finalize the new
> format by issuing "hadoop dfsadmin -finalizeUpgrade" to make the upgrade
> permanent (no rollback possible after this).
>
>
> On Thu, Nov 22, 2012 at 1:49 PM, yogesh dhari <yo...@live.com> wrote:
>
>  Thanks Uma,
>
> I used command
> hadoop namenode -upgrade and its started well but got stuck at one point.
>
> 12/11/22 13:06:19 INFO mortbay.log: Started
> SelectChannelConnector@localhost:50070
> 12/11/22 13:06:19 INFO namenode.NameNode: Web-server up at: localhost:50070
> 12/11/22 13:06:19 INFO ipc.Server: IPC Server Responder: starting
> 12/11/22 13:06:19 INFO ipc.Server: IPC Server listener on 9000: starting
> 12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 0 on 9000: starting
> 12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 1 on 9000: starting
> 12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 2 on 9000: starting
> 12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 3 on 9000: starting
> 12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 4 on 9000: starting
> 12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 5 on 9000: starting
> 12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 6 on 9000: starting
> 12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 7 on 9000: starting
> 12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 8 on 9000: starting
> 12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 9 on 9000: starting
>
> from this point its not showing any progress for past 30 + mins...
>
>
> and Web ui shows
>
> NameNode 'localhost:9000'  Started: Thu Nov 22 13:06:17 IST 2012 Version: 1.0.4,
> r1393290  Compiled: Wed Oct 3 05:13:58 UTC 2012 by hortonfo  Upgrades: Upgrade
> for version -32 has been completed. Upgrade is not finalized.
>
> Please suggest
>
> Regards
> Yogesh Kumar
>
>
>
>
> ------------------------------
> From: maheswara@huawei.com
> To: user@hadoop.apache.org
> Subject: RE: HADOOP UPGRADE ISSUE
> Date: Thu, 22 Nov 2012 07:05:51 +0000
>
>
>  start-all.sh will not carry any arguments to pass to nodes.
> Start with start-dfs.sh
> or start directly namenode with upgrade option. ./hadoop namenode -upgrade
>
> Regards,
> Uma
>  ------------------------------
> *From:* yogesh dhari [yogeshdhari@live.com]
> *Sent:* Thursday, November 22, 2012 12:23 PM
> *To:* hadoop helpforoum
> *Subject:* HADOOP UPGRADE ISSUE
>
>   Hi All,
>
> I am trying upgrading apache *hadoop-0.20.2 to hadoop-1.0.4*.
> I have give same dfs.name.dir, etc as same in hadoop-1.0.4' conf files as
> were in hadoop-0.20.2.
> Now I am starting dfs n mapred using
>
> *start-all.sh -upgrade*
>
> but *namenode *and *datanode* fail to run.
>
> 1) *Namenode's *logs shows::
>
> ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
> initialization failed.
> java.io.IOException:
> File system image contains an old layout version -18.
> An upgrade to version -32 is required.
> Please restart NameNode with -upgrade option.
> .
> .
> ERROR org.apache.hadoop.hdfs.server.namenode.NameNode:
> java.io.IOException:
> File system image contains an old layout version -18.
> An upgrade to version -32 is required.
> Please restart NameNode with -upgrade option.
>
>
> 2)* Datanode's* logs shows::
>
> WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Invalid directory in
> dfs.data.dir: Incorrect permission for /opt/hadoop_newdata_dirr, expected:
> rwxr-xr-x, while actual: rwxrwxrwx
> ****(  how these file permission showing warnings)*****
>
> 2012-11-22 12:05:21,157 ERROR
> org.apache.hadoop.hdfs.server.datanode.DataNode: All directories in
> dfs.data.dir are invalid.
>
> Please suggest
>
> Thanks & Regards
> Yogesh Kumar
>
>
>
>
>
> --
> Harsh J
>
>
>
>
> --
> Harsh J
>



-- 
Harsh J

RE: HADOOP UPGRADE ISSUE

Posted by yogesh dhari <yo...@live.com>.
Hey Harsh,

I have taken down the instance of the NN,

and I started start-all.sh, but it doesn't run all the daemons.

1) Log file of DN...

2012-11-22 15:48:41,817 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Invalid directory in dfs.data.dir: Incorrect permission for /opt/hadoop_newdata_dirr, expected: rwxr-xr-x, while actual: rwxrwxrwx
2012-11-22 15:48:41,817 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: All directories in dfs.data.dir are invalid.
2012-11-22 15:48:41,817 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2012-11-22 15:48:41,830 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG: 

2) Log file of TT...

ERROR org.apache.hadoop.mapred.TaskTracker: Can not start task tracker because java.io.IOException: Call to localhost/127.0.0.1:9001 failed on local exception: java.io.IOException: Connection reset by peer
    at org.apache.hadoop.ipc.Client.wrapException(Client.java:1107)
    at org.apache.hadoop.ipc.Client.call(Client.java:1075)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
    at org.apache.hadoop.mapred.$Proxy5.getProtocolVersion(Unknown Source)

3) Log file of NN...
.
.
.
.
2012-11-22 15:55:52,101 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:yogesh cause:java.io.IOException: File /opt/hadoop-0.20.2/hadoop_temporary_dirr/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
2012-11-22 15:55:52,102 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 9000, call addBlock(/opt/hadoop-0.20.2/hadoop_temporary_dirr/mapred/system/jobtracker.info, DFSClient_-971904437, null) from 127.0.0.1:54047: error: java.io.IOException: File /opt/hadoop-0.20.2/hadoop_temporary_dirr/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
java.io.IOException: File /opt/hadoop-0.20.2/hadoop_temporary_dirr/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:601)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
2012-11-22 15:55:56,291 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG: 


------------------------------------------------------------------------------------------------------------------------------------------

Terminal's Output

yogesh@yogesh-Aspire-5738:/opt/hadoop-1.0.4$ start-all.sh
starting namenode, logging to /opt/hadoop-1.0.4/libexec/../logs/hadoop-yogesh-namenode-yogesh-Aspire-5738.out
yogesh@localhost's password: 
localhost: starting datanode, logging to /opt/hadoop-1.0.4/libexec/../logs/hadoop-yogesh-datanode-yogesh-Aspire-5738.out
yogesh@localhost's password: 
localhost: starting secondarynamenode, logging to /opt/hadoop-1.0.4/libexec/../logs/hadoop-yogesh-secondarynamenode-yogesh-Aspire-5738.out
starting jobtracker, logging to /opt/hadoop-1.0.4/libexec/../logs/hadoop-yogesh-jobtracker-yogesh-Aspire-5738.out
yogesh@localhost's password: 
localhost: starting tasktracker, logging to /opt/hadoop-1.0.4/libexec/../logs/hadoop-yogesh-tasktracker-yogesh-Aspire-5738.out


yogesh@yogesh-Aspire-5738:/opt/hadoop-1.0.4$ jps
23297 JobTracker
23613 Jps
23562 TaskTracker
22658 NameNode
23205 SecondaryNameNode



yogesh@yogesh-Aspire-5738:/opt/hadoop-1.0.4$ stop-all.sh
stopping jobtracker
yogesh@localhost's password: 
localhost: no tasktracker to stop
stopping namenode
yogesh@localhost's password: 
localhost: no datanode to stop
yogesh@localhost's password: 
localhost: stopping secondarynamenode



Please suggest 

Thanks & Regards
Yogesh Kumar




From: harsh@cloudera.com
Date: Thu, 22 Nov 2012 15:18:14 +0530
Subject: Re: HADOOP UPGRADE ISSUE
To: user@hadoop.apache.org

Ah alright, yes you can take down the shell instance of NN you've started by issuing a simple Ctrl+C, and then start it with the other DNs, SNN in the background with the regular start-dfs or start-all command. That is safe to do.



P.s. Be sure to finalize your upgrade after adequate testing has been done. You can do this at a later time.
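
In other words, roughly this sequence (just a sketch of the steps named above):

# in the terminal where 'hadoop namenode -upgrade' is running in the foreground,
#   press Ctrl+C to stop that NameNode instance
# then bring the whole cluster up in the background:
start-all.sh        # or start-dfs.sh followed by start-mapred.sh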

On Thu, Nov 22, 2012 at 3:00 PM, yogesh dhari <yo...@live.com> wrote:







Hello Harsh..

Thanks for your suggestion :-), I need more guidance over it.

Through a terminal I have run the command
hadoop namenode -upgrade (say Terminal-1)

Terminal-1 got stuck at this point:

12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 9 on 9000: starting
""and still the cursor is blinking...""

Should I stop it using the Ctrl+C command? If I do so it will kill the process.

If I open a new terminal (say Terminal-2) and run jps, it shows
6374 NameNode
7615 Jps

As you mentioned, I ran hadoop fs -ls /
and it shows all the stored directories (on Terminal-2).

Now, should I kill the process on Terminal-1? What should I do to complete the upgrade without any loss of data?




I am attaching screen shot..  

Please have a look

Thanks & Regards
Yogesh Kumar








From: harsh@cloudera.com



Date: Thu, 22 Nov 2012 14:14:00 +0530
Subject: Re: HADOOP UPGRADE ISSUE
To: user@hadoop.apache.org

If your UI is already up, then NN is already functional. The UI merely shows that your upgrade is done but has not been manually finalized by you (leaving it open for a rollback if needed).



You could try a simple "hadoop fs -ls /" to see if NN is functional, run some other regular job based tests of yours, and then finalize the new format by issuing "hadoop dfsadmin -finalizeUpgrade" to make the upgrade permanent (no rollback possible after this).
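
For example, once the cluster looks healthy:

hadoop fs -ls /                    # quick check that the NameNode responds
hadoop dfsadmin -finalizeUpgrade   # permanent: no rollback is possible after this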






On Thu, Nov 22, 2012 at 1:49 PM, yogesh dhari <yo...@live.com> wrote:






Thanks Uma,

I used the command
hadoop namenode -upgrade and it started well but got stuck at one point.

12/11/22 13:06:19 INFO mortbay.log: Started SelectChannelConnector@localhost:50070
12/11/22 13:06:19 INFO namenode.NameNode: Web-server up at: localhost:50070
12/11/22 13:06:19 INFO ipc.Server: IPC Server Responder: starting
12/11/22 13:06:19 INFO ipc.Server: IPC Server listener on 9000: starting
12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 0 on 9000: starting
12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 1 on 9000: starting
12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 2 on 9000: starting
12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 3 on 9000: starting
12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 4 on 9000: starting
12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 5 on 9000: starting
12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 6 on 9000: starting
12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 7 on 9000: starting
12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 8 on 9000: starting
12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 9 on 9000: starting

From this point it's not showing any progress for the past 30+ mins...

And the Web UI shows:

NameNode 'localhost:9000'
  Started:  Thu Nov 22 13:06:17 IST 2012
  Version:  1.0.4, r1393290
  Compiled: Wed Oct  3 05:13:58 UTC 2012 by hortonfo
  Upgrades: Upgrade for version -32 has been completed. Upgrade is not finalized.

Please suggest

Regards
Yogesh Kumar




From: maheswara@huawei.com
To: user@hadoop.apache.org
Subject: RE: HADOOP UPGRADE ISSUE
Date: Thu, 22 Nov 2012 07:05:51 +0000

start-all.sh will not carry any arguments to pass to nodes.
Start with start-dfs.sh
or start directly namenode with upgrade option. ./hadoop namenode -upgrade
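
For example, from the Hadoop bin directory, either of these (a sketch of the two options above) starts HDFS in upgrade mode:

./start-dfs.sh -upgrade       # passes -upgrade to the NameNode and starts the DataNodes
./hadoop namenode -upgrade    # upgrades just the NameNode, in the foreground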

 

Regards,

Uma



From: yogesh dhari [yogeshdhari@live.com]

Sent: Thursday, November 22, 2012 12:23 PM

To: hadoop helpforoum

Subject: HADOOP UPGRADE ISSUE






Hi All,

I am trying upgrading apache hadoop-0.20.2 to hadoop-1.0.4.
I have give same dfs.name.dir, etc as same in hadoop-1.0.4' conf files as were in hadoop-0.20.2.
Now I am starting dfs n mapred using

start-all.sh -upgrade

but namenode and datanode fail to run.

1) Namenode's logs shows::

ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
java.io.IOException: 
File system image contains an old layout version -18.
An upgrade to version -32 is required.
Please restart NameNode with -upgrade option.
.
.
ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException: 
File system image contains an old layout version -18.
An upgrade to version -32 is required.
Please restart NameNode with -upgrade option.


2) Datanode's logs shows:: 

WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Invalid directory in dfs.data.dir: Incorrect permission for /opt/hadoop_newdata_dirr, expected: rwxr-xr-x, while actual: rwxrwxrwx
****(  how these file permission showing warnings)*****

2012-11-22 12:05:21,157 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: All directories in dfs.data.dir are invalid.

Please suggest

Thanks & Regards
Yogesh Kumar


-- 
Harsh J

-- 
Harsh J

RE: HADOOP UPGRADE ISSUE

Posted by yogesh dhari <yo...@live.com>.
Hey Harsh,

I have took down the instance on NN,

and I started start-all.sh, but it doesn't run all demons.

1) Log file of DN...

2012-11-22 15:48:41,817 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Invalid directory in dfs.data.dir: Incorrect permission for /opt/hadoop_newdata_dirr, expected: rwxr-xr-x, while actual: rwxrwxrwx
2012-11-22 15:48:41,817 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: All directories in dfs.data.dir are invalid.
2012-11-22 15:48:41,817 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2012-11-22 15:48:41,830 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG: 

2) Log file of TT...

ERROR org.apache.hadoop.mapred.TaskTracker: Can not start task tracker because java.io.IOException: Call to localhost/127.0.0.1:9001 failed on local exception: java.io.IOException: Connection reset by peer
    at org.apache.hadoop.ipc.Client.wrapException(Client.java:1107)
    at org.apache.hadoop.ipc.Client.call(Client.java:1075)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
    at org.apache.hadoop.mapred.$Proxy5.getProtocolVersion(Unknown Source)

3) Log file of NN...
.
.
.
.
2012-11-22 15:55:52,101 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:yogesh cause:java.io.IOException: File /opt/hadoop-0.20.2/hadoop_temporary_dirr/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
2012-11-22 15:55:52,102 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 9000, call addBlock(/opt/hadoop-0.20.2/hadoop_temporary_dirr/mapred/system/jobtracker.info, DFSClient_-971904437, null) from 127.0.0.1:54047: error: java.io.IOException: File /opt/hadoop-0.20.2/hadoop_temporary_dirr/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
java.io.IOException: File /opt/hadoop-0.20.2/hadoop_temporary_dirr/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:601)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
2012-11-22 15:55:56,291 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG: 


------------------------------------------------------------------------------------------------------------------------------------------

Terminal's Output

yogesh@yogesh-Aspire-5738:/opt/hadoop-1.0.4$ start-all.sh
starting namenode, logging to /opt/hadoop-1.0.4/libexec/../logs/hadoop-yogesh-namenode-yogesh-Aspire-5738.out
yogesh@localhost's password: 
localhost: starting datanode, logging to /opt/hadoop-1.0.4/libexec/../logs/hadoop-yogesh-datanode-yogesh-Aspire-5738.out
yogesh@localhost's password: 
localhost: starting secondarynamenode, logging to /opt/hadoop-1.0.4/libexec/../logs/hadoop-yogesh-secondarynamenode-yogesh-Aspire-5738.out
starting jobtracker, logging to /opt/hadoop-1.0.4/libexec/../logs/hadoop-yogesh-jobtracker-yogesh-Aspire-5738.out
yogesh@localhost's password: 
localhost: starting tasktracker, logging to /opt/hadoop-1.0.4/libexec/../logs/hadoop-yogesh-tasktracker-yogesh-Aspire-5738.out


yogesh@yogesh-Aspire-5738:/opt/hadoop-1.0.4$ jps
23297 JobTracker
23613 Jps
23562 TaskTracker
22658 NameNode
23205 SecondaryNameNode



yogesh@yogesh-Aspire-5738:/opt/hadoop-1.0.4$ stop-all.sh
stopping jobtracker
yogesh@localhost's password: 
localhost: no tasktracker to stop
stopping namenode
yogesh@localhost's password: 
localhost: no datanode to stop
yogesh@localhost's password: 
localhost: stopping secondarynamenode
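
The repeated password prompts also suggest that passwordless SSH to localhost is not set up for the 'yogesh' user. It is not required for the upgrade itself, but a typical single-node setup looks roughly like this:

ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa          # skip if a key already exists
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
ssh localhost                                     # should now log in without asking for a password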



Please suggest 

Thanks & Regards
Yogesh Kumar




From: harsh@cloudera.com
Date: Thu, 22 Nov 2012 15:18:14 +0530
Subject: Re: HADOOP UPGRADE ISSUE
To: user@hadoop.apache.org

Ah alright, yes you can take down the shell instance of NN you've started by issuing a simple Ctrl+C, and then start it with the other DNs, SNN in the background with the regular start-dfs or start-all command. That is safe to do.



P.s. Be sure to finalize your upgrade after adequate testing has been done. You can do this at a later time.

On Thu, Nov 22, 2012 at 3:00 PM, yogesh dhari <yo...@live.com> wrote:







Hello Harsh..

Thanks for your suggestion :-), I need more guidance over it.

Terminal through I have run command 
hadoop namenode -upgrade (Say Terminal-1)

terminal-1 got stuck at this point.


12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 9 on 9000: starting
""and still cursor is blinking...""

Should I do stop it using ctrl+c command.  If I do so it will kill the process.




If I do open new terminal(say Terminal-2) and run JPS it shows
6374 NameNode
7615 Jps

As you mentioned to run hadoop fs -ls /
it shows all stored directories..( on Terminal-2 )
 
Now. Should I kill the process over Termial-1. What should I do to make it complete without any loss of data..




I am attaching screen shot..  

Please have a look

Thanks & Regards
Yogesh Kumar








From: harsh@cloudera.com



Date: Thu, 22 Nov 2012 14:14:00 +0530
Subject: Re: HADOOP UPGRADE ISSUE
To: user@hadoop.apache.org

If your UI is already up, then NN is already functional. The UI merely shows that your upgrade is done but has not been manually finalized by you (leaving it open for a rollback if needed).



You could try a simple "hadoop fs -ls /" to see if NN is functional, run some other regular job based tests of yours, and then finalize the new format by issuing "hadoop dfsadmin -finalizeUpgrade" to make the upgrade permanent (no rollback possible after this).






On Thu, Nov 22, 2012 at 1:49 PM, yogesh dhari <yo...@live.com> wrote:






Thanks Uma,

I used command 
hadoop namenode -upgrade and its started well but got stuck at one point.

12/11/22 13:06:19 INFO mortbay.log: Started SelectChannelConnector@localhost:50070
12/11/22 13:06:19 INFO namenode.NameNode: Web-server up at: localhost:50070





12/11/22 13:06:19 INFO ipc.Server: IPC Server Responder: starting
12/11/22 13:06:19 INFO ipc.Server: IPC Server listener on 9000: starting
12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 0 on 9000: starting





12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 1 on 9000: starting
12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 2 on 9000: starting
12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 3 on 9000: starting





12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 4 on 9000: starting
12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 5 on 9000: starting
12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 6 on 9000: starting





12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 7 on 9000: starting
12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 8 on 9000: starting
12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 9 on 9000: starting






from this point its not showing any progress for past 30 + mins...


and Web ui shows 

NameNode 'localhost:9000'

  Started:   Thu Nov 22 13:06:17 IST 2012
  Version:   1.0.4, r1393290
  Compiled:  Wed Oct  3 05:13:58 UTC 2012 by hortonfo
  Upgrades:  Upgrade for version -32 has been completed.
             Upgrade is not finalized.

Please suggest

Regards
Yogesh Kumar




From: maheswara@huawei.com





To: user@hadoop.apache.org
Subject: RE: HADOOP UPGRADE ISSUE
Date: Thu, 22 Nov 2012 07:05:51 +0000








start-all.sh will not carry any arguments to pass to nodes.

Start with start-dfs.sh 

or start directly namenode with upgrade option. ./hadoop namenode -upgrade

 

Regards,

Uma



From: yogesh dhari [yogeshdhari@live.com]

Sent: Thursday, November 22, 2012 12:23 PM

To: hadoop helpforoum

Subject: HADOOP UPGRADE ISSUE






Hi All,



I am trying upgrading apache hadoop-0.20.2 to hadoop-1.0.4.

I have give same dfs.name.dir, etc as same in hadoop-1.0.4' conf files as were in hadoop-0.20.2.

Now I am starting dfs n mapred using



start-all.sh -upgrade



but namenode and datanode fail to run.



1) Namenode's logs shows::



ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.

java.io.IOException: 

File system image contains an old layout version -18.

An upgrade to version -32 is required.

Please restart NameNode with -upgrade option.

.

.

ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException: 

File system image contains an old layout version -18.

An upgrade to version -32 is required.

Please restart NameNode with -upgrade option.





2) Datanode's logs shows:: 



WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Invalid directory in dfs.data.dir: Incorrect permission for /opt/hadoop_newdata_dirr, expected: rwxr-xr-x, while actual: rwxrwxrwx

****(  how these file permission showing warnings)*****



2012-11-22 12:05:21,157 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: All directories in dfs.data.dir are invalid.



Please suggest



Thanks & Regards

Yogesh Kumar


-- 
Harsh J


-- 
Harsh J

Re: HADOOP UPGRADE ISSUE

Posted by Harsh J <ha...@cloudera.com>.
Ah alright, yes, you can take down the shell instance of the NN you've started
by issuing a simple Ctrl+C, and then start it again, together with the DNs and
SNN, in the background with the regular start-dfs or start-all command. That is
safe to do.

P.S. Be sure to finalize your upgrade after adequate testing has been done.
You can do this at a later time.
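
Put together, the sequence described above would look roughly like this (the finalize step is irreversible, so run it only after testing):

# 1. In Terminal-1, press Ctrl+C to stop the foreground NameNode.
# 2. Bring everything up in the background:
start-all.sh
# 3. Run the usual checks and jobs, e.g.:
hadoop fs -ls /
# 4. Only when satisfied, make the upgrade permanent (no rollback after this):
hadoop dfsadmin -finalizeUpgrade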


On Thu, Nov 22, 2012 at 3:00 PM, yogesh dhari <yo...@live.com> wrote:

>  Hello Harsh..
>
> Thanks for your suggestion :-), I need more guidance over it.
>
> Terminal through I have run command
> hadoop namenode -upgrade (Say Terminal-1)
>
> terminal-1 got stuck at this point.
>
>
> 12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 9 on 9000: starting
> ""and still cursor is blinking...""
>
> Should I do stop it using ctrl+c command.  If I do so it will kill the
> process.
>
> If I do open new terminal(say Terminal-2) and run JPS it shows
> 6374 NameNode
> 7615 Jps
>
> As you mentioned to run hadoop fs -ls /
> it shows all stored directories..( on Terminal-2 )
>
> Now. Should I kill the process over Termial-1. What should I do to make it
> complete without any loss of data..
>
> I am attaching screen shot..
>
> Please have a look
>
>
> Thanks & Regards
> Yogesh Kumar
>
>
>
>
>
>
>
>
> ------------------------------
> From: harsh@cloudera.com
> Date: Thu, 22 Nov 2012 14:14:00 +0530
> Subject: Re: HADOOP UPGRADE ISSUE
> To: user@hadoop.apache.org
>
>
> If your UI is already up, then NN is already functional. The UI merely
> shows that your upgrade is done but has not been manually finalized by you
> (leaving it open for a rollback if needed).
>
> You could try a simple "hadoop fs -ls /" to see if NN is functional, run
> some other regular job based tests of yours, and then finalize the new
> format by issuing "hadoop dfsadmin -finalizeUpgrade" to make the upgrade
> permanent (no rollback possible after this).
>
>
> On Thu, Nov 22, 2012 at 1:49 PM, yogesh dhari <yo...@live.com>wrote:
>
>  Thanks Uma,
>
> I used command
> hadoop namenode -upgrade and its started well but got stuck at one point.
>
> 12/11/22 13:06:19 INFO mortbay.log: Started
> SelectChannelConnector@localhost:50070
> 12/11/22 13:06:19 INFO namenode.NameNode: Web-server up at: localhost:50070
> 12/11/22 13:06:19 INFO ipc.Server: IPC Server Responder: starting
> 12/11/22 13:06:19 INFO ipc.Server: IPC Server listener on 9000: starting
> 12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 0 on 9000: starting
> 12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 1 on 9000: starting
> 12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 2 on 9000: starting
> 12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 3 on 9000: starting
> 12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 4 on 9000: starting
> 12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 5 on 9000: starting
> 12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 6 on 9000: starting
> 12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 7 on 9000: starting
> 12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 8 on 9000: starting
> 12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 9 on 9000: starting
>
> from this point its not showing any progress for past 30 + mins...
>
>
> and Web ui shows
>
> NameNode 'localhost:9000'
>   Started:   Thu Nov 22 13:06:17 IST 2012
>   Version:   1.0.4, r1393290
>   Compiled:  Wed Oct 3 05:13:58 UTC 2012 by hortonfo
>   Upgrades:  Upgrade for version -32 has been completed.
>              Upgrade is not finalized.
>
> Please suggest
>
> Regards
> Yogesh Kumar
>
>
>
>
> ------------------------------
> From: maheswara@huawei.com
> To: user@hadoop.apache.org
> Subject: RE: HADOOP UPGRADE ISSUE
> Date: Thu, 22 Nov 2012 07:05:51 +0000
>
>
>  start-all.sh will not carry any arguments to pass to nodes.
> Start with start-dfs.sh
> or start directly namenode with upgrade option. ./hadoop namenode -upgrade
>
> Regards,
> Uma
>  ------------------------------
> *From:* yogesh dhari [yogeshdhari@live.com]
> *Sent:* Thursday, November 22, 2012 12:23 PM
> *To:* hadoop helpforoum
> *Subject:* HADOOP UPGRADE ISSUE
>
>   Hi All,
>
> I am trying upgrading apache *hadoop-0.20.2 to hadoop-1.0.4*.
> I have give same dfs.name.dir, etc as same in hadoop-1.0.4' conf files as
> were in hadoop-0.20.2.
> Now I am starting dfs n mapred using
>
> *start-all.sh -upgrade*
>
> but *namenode *and *datanode* fail to run.
>
> 1) *Namenode's *logs shows::
>
> ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
> initialization failed.
> java.io.IOException:
> File system image contains an old layout version -18.
> An upgrade to version -32 is required.
> Please restart NameNode with -upgrade option.
> .
> .
> ERROR org.apache.hadoop.hdfs.server.namenode.NameNode:
> java.io.IOException:
> File system image contains an old layout version -18.
> An upgrade to version -32 is required.
> Please restart NameNode with -upgrade option.
>
>
> 2)* Datanode's* logs shows::
>
> WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Invalid directory in
> dfs.data.dir: Incorrect permission for /opt/hadoop_newdata_dirr, expected:
> rwxr-xr-x, while actual: rwxrwxrwx
> ****(  how these file permission showing warnings)*****
>
> 2012-11-22 12:05:21,157 ERROR
> org.apache.hadoop.hdfs.server.datanode.DataNode: All directories in
> dfs.data.dir are invalid.
>
> Please suggest
>
> Thanks & Regards
> Yogesh Kumar
>
>
>
>
>
> --
> Harsh J
>



-- 
Harsh J

RE: HADOOP UPGRADE ISSUE

Posted by yogesh dhari <yo...@live.com>.
Hello Harsh..

Thanks for your suggestion :-), but I need some more guidance on it.

In the terminal where I ran the command
hadoop namenode -upgrade (say Terminal-1),

the output got stuck at this point:

12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 9 on 9000: starting
""and still cursor is blinking...""

Should I do stop it using ctrl+c command.  If I do so it will kill the process.

If I open a new terminal (say Terminal-2) and run jps, it shows
6374 NameNode
7615 Jps

As you mentioned, I ran hadoop fs -ls /
and it shows all the stored directories (on Terminal-2).

Now, should I kill the process on Terminal-1? What should I do to complete the upgrade without any loss of data?
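
(For what it's worth, before touching Terminal-1 the upgrade state can also be checked from Terminal-2; this is only a sketch, assuming the standard dfsadmin options of this release:

hadoop dfsadmin -upgradeProgress status    # reports whether a distributed upgrade is still in progress
hadoop fs -ls /                            # confirms the namespace is readable

If both look fine, stopping the foreground NameNode with Ctrl+C should be safe.)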

I am attaching a screenshot.

Please have a look

Thanks & Regards
Yogesh Kumar








From: harsh@cloudera.com
Date: Thu, 22 Nov 2012 14:14:00 +0530
Subject: Re: HADOOP UPGRADE ISSUE
To: user@hadoop.apache.org

If your UI is already up, then NN is already functional. The UI merely shows that your upgrade is done but has not been manually finalized by you (leaving it open for a rollback if needed).
You could try a simple "hadoop fs -ls /" to see if NN is functional, run some other regular job based tests of yours, and then finalize the new format by issuing "hadoop dfsadmin -finalizeUpgrade" to make the upgrade permanent (no rollback possible after this).



On Thu, Nov 22, 2012 at 1:49 PM, yogesh dhari <yo...@live.com> wrote:






Thanks Uma,

I used command 
hadoop namenode -upgrade and its started well but got stuck at one point.

12/11/22 13:06:19 INFO mortbay.log: Started SelectChannelConnector@localhost:50070
12/11/22 13:06:19 INFO namenode.NameNode: Web-server up at: localhost:50070


12/11/22 13:06:19 INFO ipc.Server: IPC Server Responder: starting
12/11/22 13:06:19 INFO ipc.Server: IPC Server listener on 9000: starting
12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 0 on 9000: starting


12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 1 on 9000: starting
12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 2 on 9000: starting
12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 3 on 9000: starting


12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 4 on 9000: starting
12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 5 on 9000: starting
12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 6 on 9000: starting


12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 7 on 9000: starting
12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 8 on 9000: starting
12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 9 on 9000: starting



from this point its not showing any progress for past 30 + mins...


and Web ui shows 

NameNode 'localhost:9000'

  Started:   Thu Nov 22 13:06:17 IST 2012
  Version:   1.0.4, r1393290
  Compiled:  Wed Oct  3 05:13:58 UTC 2012 by hortonfo
  Upgrades:  Upgrade for version -32 has been completed.
             Upgrade is not finalized.

Please suggest

Regards
Yogesh Kumar




From: maheswara@huawei.com


To: user@hadoop.apache.org
Subject: RE: HADOOP UPGRADE ISSUE
Date: Thu, 22 Nov 2012 07:05:51 +0000








start-all.sh will not carry any arguments to pass to nodes.

Start with start-dfs.sh 

or start directly namenode with upgrade option. ./hadoop namenode -upgrade

 

Regards,

Uma



From: yogesh dhari [yogeshdhari@live.com]

Sent: Thursday, November 22, 2012 12:23 PM

To: hadoop helpforoum

Subject: HADOOP UPGRADE ISSUE






Hi All,



I am trying upgrading apache hadoop-0.20.2 to hadoop-1.0.4.

I have give same dfs.name.dir, etc as same in hadoop-1.0.4' conf files as were in hadoop-0.20.2.

Now I am starting dfs n mapred using



start-all.sh -upgrade



but namenode and datanode fail to run.



1) Namenode's logs shows::



ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.

java.io.IOException: 

File system image contains an old layout version -18.

An upgrade to version -32 is required.

Please restart NameNode with -upgrade option.

.

.

ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException: 

File system image contains an old layout version -18.

An upgrade to version -32 is required.

Please restart NameNode with -upgrade option.





2) Datanode's logs shows:: 



WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Invalid directory in dfs.data.dir: Incorrect permission for /opt/hadoop_newdata_dirr, expected: rwxr-xr-x, while actual: rwxrwxrwx

****(  how these file permission showing warnings)*****



2012-11-22 12:05:21,157 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: All directories in dfs.data.dir are invalid.



Please suggest



Thanks & Regards

Yogesh Kumar








 		 	   		  


-- 
Harsh J

 		 	   		  

RE: HADOOP UPGRADE ISSUE

Posted by yogesh dhari <yo...@live.com>.
Hello Harsh..

Thanks for your suggestion :-), I need more guidance over it.

Terminal through I have run command 
hadoop namenode -upgrade (Say Terminal-1)

terminal-1 got stuck at this point.

12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 9 on 9000: starting
""and still cursor is blinking...""

Should I do stop it using ctrl+c command.  If I do so it will kill the process.

If I do open new terminal(say Terminal-2) and run JPS it shows
6374 NameNode
7615 Jps

As you mentioned to run hadoop fs -ls /
it shows all stored directories..( on Terminal-2 )
 
Now. Should I kill the process over Termial-1. What should I do to make it complete without any loss of data..

I am attaching screen shot..  

Please have a look

Thanks & Regards
Yogesh Kumar








From: harsh@cloudera.com
Date: Thu, 22 Nov 2012 14:14:00 +0530
Subject: Re: HADOOP UPGRADE ISSUE
To: user@hadoop.apache.org

If your UI is already up, then NN is already functional. The UI merely shows that your upgrade is done but has not been manually finalized by you (leaving it open for a rollback if needed).
You could try a simple "hadoop fs -ls /" to see if NN is functional, run some other regular job based tests of yours, and then finalize the new format by issuing "hadoop dfsadmin -finalizeUpgrade" to make the upgrade permanent (no rollback possible after this).



On Thu, Nov 22, 2012 at 1:49 PM, yogesh dhari <yo...@live.com> wrote:






Thanks Uma,

I used command 
hadoop namenode -upgrade and its started well but got stuck at one point.

12/11/22 13:06:19 INFO mortbay.log: Started SelectChannelConnector@localhost:50070
12/11/22 13:06:19 INFO namenode.NameNode: Web-server up at: localhost:50070


12/11/22 13:06:19 INFO ipc.Server: IPC Server Responder: starting
12/11/22 13:06:19 INFO ipc.Server: IPC Server listener on 9000: starting
12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 0 on 9000: starting


12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 1 on 9000: starting
12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 2 on 9000: starting
12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 3 on 9000: starting


12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 4 on 9000: starting
12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 5 on 9000: starting
12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 6 on 9000: starting


12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 7 on 9000: starting
12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 8 on 9000: starting
12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 9 on 9000: starting



from this point its not showing any progress for past 30 + mins...


and Web ui shows 

NameNode 'localhost:9000'


 	  
  Started:  Thu Nov 22 13:06:17 IST 2012
  Version:  1.0.4, r1393290
  Compiled:  Wed Oct  3 05:13:58 UTC 2012 by hortonfo
  Upgrades:  Upgrade for version -32 has been completed.
Upgrade is not finalized.

Please suggest

Regards
Yogesh Kumar




From: maheswara@huawei.com


To: user@hadoop.apache.org
Subject: RE: HADOOP UPGRADE ISSUE
Date: Thu, 22 Nov 2012 07:05:51 +0000








start-all.sh will not carry any arguments to pass to nodes.

Start with start-dfs.sh 

or start directly namenode with upgrade option. ./hadoop namenode -upgrade

 

Regards,

Uma



From: yogesh dhari [yogeshdhari@live.com]

Sent: Thursday, November 22, 2012 12:23 PM

To: hadoop helpforoum

Subject: HADOOP UPGRADE ISSUE






Hi All,



I am trying upgrading apache hadoop-0.20.2 to hadoop-1.0.4.

I have give same dfs.name.dir, etc as same in hadoop-1.0.4' conf files as were in hadoop-0.20.2.

Now I am starting dfs n mapred using



start-all.sh -upgrade



but namenode and datanode fail to run.



1) Namenode's logs shows::



ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.

java.io.IOException: 

File system image contains an old layout version -18.

An upgrade to version -32 is required.

Please restart NameNode with -upgrade option.

.

.

ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException: 

File system image contains an old layout version -18.

An upgrade to version -32 is required.

Please restart NameNode with -upgrade option.





2) Datanode's logs shows:: 



WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Invalid directory in dfs.data.dir: Incorrect permission for /opt/hadoop_newdata_dirr, expected: rwxr-xr-x, while actual: rwxrwxrwx

****(  how these file permission showing warnings)*****



2012-11-22 12:05:21,157 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: All directories in dfs.data.dir are invalid.



Please suggest



Thanks & Regards

Yogesh Kumar








 		 	   		  


-- 
Harsh J

 		 	   		  

RE: HADOOP UPGRADE ISSUE

Posted by yogesh dhari <yo...@live.com>.
Hello Harsh..

Thanks for your suggestion :-), I need more guidance over it.

The terminal in which I ran the command
hadoop namenode -upgrade (say Terminal-1)

Terminal-1 got stuck at this point:

12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 9 on 9000: starting
""and still cursor is blinking...""

Should I stop it using Ctrl+C? If I do so, it will kill the process.

If I open a new terminal (say Terminal-2) and run jps, it shows
6374 NameNode
7615 Jps

As you mentioned, running hadoop fs -ls /
shows all the stored directories (on Terminal-2).

Now, should I kill the process on Terminal-1? What should I do to complete the upgrade without any loss of data?

I am attaching a screenshot.

Please have a look

Thanks & Regards
Yogesh Kumar








From: harsh@cloudera.com
Date: Thu, 22 Nov 2012 14:14:00 +0530
Subject: Re: HADOOP UPGRADE ISSUE
To: user@hadoop.apache.org

If your UI is already up, then NN is already functional. The UI merely shows that your upgrade is done but has not been manually finalized by you (leaving it open for a rollback if needed).
You could try a simple "hadoop fs -ls /" to see if NN is functional, run some other regular job based tests of yours, and then finalize the new format by issuing "hadoop dfsadmin -finalizeUpgrade" to make the upgrade permanent (no rollback possible after this).





Re: HADOOP UPGRADE ISSUE

Posted by Harsh J <ha...@cloudera.com>.
If your UI is already up, then NN is already functional. The UI merely
shows that your upgrade is done but has not been manually finalized by you
(leaving it open for a rollback if needed).

You could try a simple "hadoop fs -ls /" to see if NN is functional, run
some other regular job based tests of yours, and then finalize the new
format by issuing "hadoop dfsadmin -finalizeUpgrade" to make the upgrade
permanent (no rollback possible after this).
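
For example, a rough sequence would be (assuming the 1.0.4 binaries are the ones on your PATH and you run them as the user that started the NameNode):

  hadoop fs -ls /                           # NN answers and the old directories are visible
  hadoop fsck /                             # optional sanity check of the upgraded namespace
  hadoop dfsadmin -upgradeProgress status   # should report the upgrade as complete
  hadoop dfsadmin -finalizeUpgrade          # permanent; no rollback possible after this

If anything looks wrong before you finalize, you can still stop the daemons and restart them with the -rollback option instead.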


-- 
Harsh J

RE: HADOOP UPGRADE ISSUE

Posted by yogesh dhari <yo...@live.com>.
Thanks Uma,

I used the command
hadoop namenode -upgrade and it started well, but then got stuck at one point.

12/11/22 13:06:19 INFO mortbay.log: Started SelectChannelConnector@localhost:50070
12/11/22 13:06:19 INFO namenode.NameNode: Web-server up at: localhost:50070
12/11/22 13:06:19 INFO ipc.Server: IPC Server Responder: starting
12/11/22 13:06:19 INFO ipc.Server: IPC Server listener on 9000: starting
12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 0 on 9000: starting
12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 1 on 9000: starting
12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 2 on 9000: starting
12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 3 on 9000: starting
12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 4 on 9000: starting
12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 5 on 9000: starting
12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 6 on 9000: starting
12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 7 on 9000: starting
12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 8 on 9000: starting
12/11/22 13:06:19 INFO ipc.Server: IPC Server handler 9 on 9000: starting

From this point it has shown no progress for the past 30+ minutes...


and the Web UI shows

NameNode 'localhost:9000'


 	  
  Started:  Thu Nov 22 13:06:17 IST 2012
  Version:  1.0.4, r1393290
  Compiled:  Wed Oct  3 05:13:58 UTC 2012 by hortonfo
  Upgrades:  Upgrade for version -32 has been completed. Upgrade is not finalized.

Please suggest

Regards
Yogesh Kumar




From: maheswara@huawei.com
To: user@hadoop.apache.org
Subject: RE: HADOOP UPGRADE ISSUE
Date: Thu, 22 Nov 2012 07:05:51 +0000








start-all.sh will not carry any arguments to pass on to the nodes.
Start with start-dfs.sh,
or start the namenode directly with the upgrade option: ./hadoop namenode -upgrade
 
Regards,
Uma



RE: HADOOP UPGRADE ISSUE

Posted by Uma Maheswara Rao G <ma...@huawei.com>.
start-all.sh will not carry any arguments to pass on to the nodes.

Start with start-dfs.sh,

or start the namenode directly with the upgrade option: ./hadoop namenode -upgrade
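
For example, a rough sequence would be (assuming the old 0.20.2 daemons are already stopped and you run this from the hadoop-1.0.4 bin directory):

  ./start-dfs.sh -upgrade       # start-dfs.sh passes -upgrade on to the namenode
  # or, to run only the namenode upgrade first:
  ./hadoop namenode -upgrade

  ./start-mapred.sh             # start MapReduce only once HDFS is back up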



Regards,

Uma

________________________________
From: yogesh dhari [yogeshdhari@live.com]
Sent: Thursday, November 22, 2012 12:23 PM
To: hadoop helpforoum
Subject: HADOOP UPGRADE ISSUE

Hi All,

I am trying to upgrade Apache hadoop-0.20.2 to hadoop-1.0.4.
I have given the same dfs.name.dir, etc. in hadoop-1.0.4's conf files as in hadoop-0.20.2.
Now I am starting dfs and mapred using

start-all.sh -upgrade

but namenode and datanode fail to run.

1) Namenode's log shows::

ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
java.io.IOException:
File system image contains an old layout version -18.
An upgrade to version -32 is required.
Please restart NameNode with -upgrade option.
.
.
ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
File system image contains an old layout version -18.
An upgrade to version -32 is required.
Please restart NameNode with -upgrade option.


2) Datanode's log shows::

WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Invalid directory in dfs.data.dir: Incorrect permission for /opt/hadoop_newdata_dirr, expected: rwxr-xr-x, while actual: rwxrwxrwx
****( why are these file-permission warnings showing? )****

2012-11-22 12:05:21,157 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: All directories in dfs.data.dir are invalid.

Please suggest

Thanks & Regards
Yogesh Kumar


