Posted to common-user@hadoop.apache.org by Munnavar Sk <ms...@aol.com> on 2013/03/22 13:58:22 UTC

Need Help on Hadoop cluster Setup


Hi Techies,
 
I am new to Hadoop and I have been fighting with this for the last 20 days; somehow I have gathered some very good material on Hadoop.

But a few questions keep coming back to me... I hope I can get the answers from your end...!

I set up a cluster in distributed mode with 5 nodes. I have configured the NameNode and the DataNodes, and all datanodes can be logged into from the namenode without a password.
Hadoop and Java are installed in the same location on all the nodes. After starting the cluster, I checked every node with the "jps" command (a sketch of this check appears after this message).
The NameNode shows that all daemons are running (NameNode, JobTracker, SecondaryNameNode).
I applied the same check to the DataNodes. But some nodes show only the TaskTracker running; only one node shows both DataNode and TaskTracker running perfectly.
My question is: do the configuration files located in the $HADOOP_HOME/conf directory need to be copied to all the nodes?
And why is the DataNode not running on the remaining nodes?

Please clarify these doubts, so that I can move ahead... :)
 
Thank you,
M Shaik
--------------
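
For reference, a minimal Python sketch of that per-node "jps" check, run over ssh from the namenode. It assumes passwordless ssh (as described above); the hostnames are placeholders, since only n1.hc.com and n4.hc.com actually appear in the logs later in this thread.

import subprocess

# Placeholder hostnames: only n1.hc.com and n4.hc.com appear in this thread.
NODES = ["n1.hc.com", "n2.hc.com", "n3.hc.com", "n4.hc.com", "n5.hc.com"]

for node in NODES:
    # Run `jps` on the remote node and collect the JVM process names.
    result = subprocess.run(["ssh", node, "jps"],
                            capture_output=True, text=True, timeout=30)
    daemons = [line.split()[-1] for line in result.stdout.splitlines()
               if line.strip()]
    print(node, "->", ", ".join(daemons) if daemons else "no JVMs reported")

A slave node that is healthy for both HDFS and MapReduce should list both DataNode and TaskTracker here.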
 

Re: Need Help on Hadoop cluster Setup

Posted by Mohammad Tariq <do...@gmail.com>.
You are welcome.

Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com


On Fri, Mar 22, 2013 at 8:48 PM, MShaik <ms...@aol.com> wrote:

>
> Thank you, Tariq.
> After changing the namespaceID on the datanodes, all the datanodes started.
>
>  Thank you once again...!
>
> -----Original Message-----
> From: Mohammad Tariq <do...@gmail.com>
> To: user <us...@hadoop.apache.org>
> Sent: Fri, Mar 22, 2013 8:29 pm
> Subject: Re: Need Help on Hadoop cluster Setup
>
>  Sorry for the typo in the second line of the 2nd point. The path should be
> "dfs.data.dir/current/VERSION".
>
>  Warm Regards,
> Tariq
> https://mtariq.jux.com/
>  cloudfront.blogspot.com
>
>
> On Fri, Mar 22, 2013 at 8:27 PM, Mohammad Tariq <do...@gmail.com> wrote:
>
>> Have you reformatted the HDFS? If that is the case, it was, I think, not
>> done properly.
>> Were the nodes which you attached serving some other cluster earlier? Your
>> logs show that you are facing problems because of a mismatch between the
>> namespaceID of the NN and the namespaceIDs which the DNs hold. To overcome
>> this problem you can follow these steps:
>>
>>  1 - Stop all the DNs.
>> 2 - Go to the directory which is serving as your dfs.data.dir. Inside
>> this directory you'll find a subdirectory "current"; there will be a file
>> named "VERSION" in this directory. In this file you can see the
>> namespaceID (probably the second line).
>>  Change it to match the namespaceID in the "dfs.name.dir/current/VERSION"
>> file.
>> 3 - Restart the processes.
>>
>>  HTH
>>
>>
>>  Warm Regards,
>> Tariq
>> https://mtariq.jux.com/
>>  cloudfront.blogspot.com
>>
>>
>>   On Fri, Mar 22, 2013 at 8:04 PM, MShaik <ms...@aol.com> wrote:
>>
>>>  Hi,
>>>
>>>  DataNode is not started on all the nodes, though the tasktracker is started
>>> on all the nodes.
>>>
>>>  Please find the datanode log below; please let me know the solution.
>>>
>>>  2013-03-22 19:52:27,380 INFO org.apache.hadoop.ipc.RPC: Server at
>>> n1.hc.com/192.168.1.110:54310 not available yet, Zzzzz...
>>> 2013-03-22 19:52:29,386 INFO org.apache.hadoop.ipc.Client: Retrying
>>> connect to server: n1.hc.com/192.168.1.110:54310. Already tried 0
>>> time(s).
>>> 2013-03-22 19:52:30,411 INFO org.apache.hadoop.ipc.Client: Retrying
>>> connect to server: n1.hc.com/192.168.1.110:54310. Already tried 1
>>> time(s).
>>> 2013-03-22 19:52:31,416 INFO org.apache.hadoop.ipc.Client: Retrying
>>> connect to server: n1.hc.com/192.168.1.110:54310. Already tried 2
>>> time(s).
>>> 2013-03-22 19:52:32,420 INFO org.apache.hadoop.ipc.Client: Retrying
>>> connect to server: n1.hc.com/192.168.1.110:54310. Already tried 3
>>> time(s).
>>> 2013-03-22 19:52:33,426 INFO org.apache.hadoop.ipc.Client: Retrying
>>> connect to server: n1.hc.com/192.168.1.110:54310. Already tried 4
>>> time(s).
>>> 2013-03-22 19:52:49,162 ERROR
>>> org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException:
>>> Incompatible namespaceIDs in /home/hduser/hadoopdata: namenode namespaceID
>>> = 2050588793; datanode namespaceID = 503772406
>>>  at
>>> org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:232)
>>>  at
>>> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:147)
>>>  at
>>> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:385)
>>>  at
>>> org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:299)
>>>  at
>>> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1582)
>>>  at
>>> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1521)
>>>  at
>>> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1539)
>>>  at
>>> org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1665)
>>>  at
>>> org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1682)
>>>
>>>  2013-03-22 19:52:49,168 INFO
>>> org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
>>> /************************************************************
>>> SHUTDOWN_MSG: Shutting down DataNode at n4.hc.com/192.168.1.113
>>> ************************************************************/
>>>
>>>
>>> Thanks
>>>
>>> -----Original Message-----
>>> From: Mohammad Tariq <do...@gmail.com>
>>> To: user <us...@hadoop.apache.org>
>>> Sent: Fri, Mar 22, 2013 7:07 pm
>>> Subject: Re: Need Help on Hadoop cluster Setup
>>>
>>>  Hello Munavvar,
>>>
>>>        It depends on your configuration where your DNs and TTs will
>>> run. If you have configured all your slaves to run both the processes then
>>> they should. If they are not running then there is definitely some problem.
>>> Could you please check your DN logs once and see if you find anything
>>> unusual there. And you have to copy the files across all the machines.
>>>
>>>  You can do one more thing just to cross-check: point your web browser
>>> to the HDFS web UI (master_machine:9000) to do that.
>>>
>>>  Warm Regards,
>>> Tariq
>>> https://mtariq.jux.com/
>>>  cloudfront.blogspot.com
>>>
>>>
>>> On Fri, Mar 22, 2013 at 6:44 PM, Munnavar Sk <ms...@aol.com> wrote:
>>>
>>>>
>>>> Hi ,
>>>>
>>>> I am new to Hadoop and I have been fighting with this for the last 20
>>>> days; somehow I have gathered some very good material on Hadoop.
>>>>
>>>> But a few questions keep coming back to me... I hope I can get the
>>>> answers from your end...!
>>>>
>>>> I set up a cluster in distributed mode with 5 nodes. I have configured
>>>> the NameNode and the DataNodes, and all datanodes can be logged into
>>>> from the namenode without a password.
>>>> Hadoop and Java are installed in the same location on all the nodes.
>>>> After starting the cluster, I checked every node with the "jps" command.
>>>> The NameNode shows that all daemons are running
>>>> (NameNode, JobTracker, SecondaryNameNode).
>>>> I applied the same check to the DataNodes. But some nodes show only the
>>>> TaskTracker running; only one node shows both DataNode and TaskTracker
>>>> running perfectly.
>>>> My question is: do the configuration files located in the
>>>> $HADOOP_HOME/conf directory need to be copied to all the nodes?
>>>> And why is the DataNode not running on the remaining nodes?
>>>>
>>>> Please clarify these doubts, so that I can move ahead... :)
>>>>
>>>> Thank you,
>>>> M Shaik
>>>> --------------
>>>>
>>>
>>>
>>
>

Re: Need Help on Hadoop cluster Setup

Posted by MShaik <ms...@aol.com>.
Thank you, Tariq.
After changing the namespaceID on the datanodes, all the datanodes started.


Thank you once again...!


-----Original Message-----
From: Mohammad Tariq <do...@gmail.com>
To: user <us...@hadoop.apache.org>
Sent: Fri, Mar 22, 2013 8:29 pm
Subject: Re: Need Help on Hadoop cluster Setup


Sorry for the typo in the second line of the 2nd point. The path should be "dfs.data.dir/current/VERSION".


Warm Regards,
Tariq
https://mtariq.jux.com/

cloudfront.blogspot.com





On Fri, Mar 22, 2013 at 8:27 PM, Mohammad Tariq <do...@gmail.com> wrote:

Have you reformatted the HDFS? If that is the case, it was, I think, not done properly.
Were the nodes which you attached serving some other cluster earlier? Your
logs show that you are facing problems because of a mismatch between the
namespaceID of the NN and the namespaceIDs which the DNs hold. To overcome
this problem you can follow these steps:


1 - Stop all the DNs.
2 - Go to the directory which is serving as your dfs.data.dir. Inside this directory
you'll find a subdirectory "current"; there will be a file named "VERSION" in this
directory. In this file you can see the namespaceID (probably the second line).
Change it to match the namespaceID in the "dfs.name.dir/current/VERSION"
file (a sketch of this edit follows below).
3 - Restart the processes.


HTH
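
As a rough illustration of step 2, a minimal Python sketch that copies the namenode's namespaceID into a datanode's VERSION file. The data directory /home/hduser/hadoopdata is taken from the "Incompatible namespaceIDs" error quoted later in this message; the name directory path is an assumption and must be replaced with your actual dfs.name.dir value.

import re

# The name dir path is an assumption -- use your actual dfs.name.dir value.
NAME_VERSION = "/home/hduser/hadoopname/current/VERSION"
# The data dir comes from the "Incompatible namespaceIDs" log message.
DATA_VERSION = "/home/hduser/hadoopdata/current/VERSION"

# Read the authoritative namespaceID from the namenode's VERSION file.
with open(NAME_VERSION) as f:
    nn_id = re.search(r"^namespaceID=(\d+)$", f.read(), re.M).group(1)

# Rewrite the datanode's VERSION file so its namespaceID matches.
with open(DATA_VERSION) as f:
    text = f.read()
text = re.sub(r"^namespaceID=\d+$", "namespaceID=" + nn_id, text, flags=re.M)
with open(DATA_VERSION, "w") as f:
    f.write(text)
print("datanode namespaceID set to", nn_id)

Run it on each affected datanode while the DN process is stopped, then restart the daemons.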





Warm Regards,
Tariq
https://mtariq.jux.com/

cloudfront.blogspot.com






On Fri, Mar 22, 2013 at 8:04 PM, MShaik <ms...@aol.com> wrote:

Hi,


DataNode is not started on all the nodes, though the tasktracker is started on all the nodes.


Please find the datanode log below; please let me know the solution.


2013-03-22 19:52:27,380 INFO org.apache.hadoop.ipc.RPC: Server at n1.hc.com/192.168.1.110:54310 not available yet, Zzzzz...
2013-03-22 19:52:29,386 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: n1.hc.com/192.168.1.110:54310. Already tried 0 time(s).
2013-03-22 19:52:30,411 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: n1.hc.com/192.168.1.110:54310. Already tried 1 time(s).
2013-03-22 19:52:31,416 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: n1.hc.com/192.168.1.110:54310. Already tried 2 time(s).
2013-03-22 19:52:32,420 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: n1.hc.com/192.168.1.110:54310. Already tried 3 time(s).
2013-03-22 19:52:33,426 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: n1.hc.com/192.168.1.110:54310. Already tried 4 time(s).
2013-03-22 19:52:49,162 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /home/hduser/hadoopdata: namenode namespaceID = 2050588793; datanode namespaceID = 503772406
	at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:232)
	at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:147)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:385)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:299)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1582)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1521)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1539)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1665)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1682)


2013-03-22 19:52:49,168 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at n4.hc.com/192.168.1.113
************************************************************/


Thanks



-----Original Message-----
From: Mohammad Tariq <do...@gmail.com>
To: user <us...@hadoop.apache.org>
Sent: Fri, Mar 22, 2013 7:07 pm
Subject: Re: Need Help on Hadoop cluster Setup


Hello Munavvar,


      It depends on your configuration where your DNs and TTs will run. If you have configured all your slaves to run both the processes then they should. If they are not running then there is definitely some problem. Could you please check your DN logs once and see if you find anything unusual there. And you have to copy the files across all the machines.


You can do one more thing just to cross-check: point your web browser to the HDFS web UI (master_machine:9000) to do that.
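
One way to script that cross-check is sketched below. Note that in Hadoop 1.x the NameNode web UI usually listens on port 50070 (9000 and 54310 are typically the filesystem RPC ports), so both the port and the dfshealth.jsp page here are assumptions to adjust for your setup.

import urllib.request

# Host from the logs in this thread; port 50070 is the usual 1.x web UI
# default -- adjust if your cluster is configured differently.
URL = "http://n1.hc.com:50070/dfshealth.jsp"

with urllib.request.urlopen(URL, timeout=10) as resp:
    page = resp.read().decode("utf-8", errors="replace")
    # The 1.x NameNode page lists "Live Nodes", i.e. registered DataNodes.
    print("HTTP", resp.status, "- mentions 'Live Nodes':", "Live Nodes" in page)

If fewer datanodes show up as live than you started, the missing ones are the places to read the DN logs.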


Warm Regards,
Tariq
https://mtariq.jux.com/

cloudfront.blogspot.com





On Fri, Mar 22, 2013 at 6:44 PM, Munnavar Sk <ms...@aol.com> wrote:


Hi ,



 
I am new to Hadoop and I have been fighting with this for the last 20 days; somehow I have gathered some very good material on Hadoop.

But a few questions keep coming back to me... I hope I can get the answers from your end...!

I set up a cluster in distributed mode with 5 nodes. I have configured the NameNode and the DataNodes, and all datanodes can be logged into from the namenode without a password.
Hadoop and Java are installed in the same location on all the nodes. After starting the cluster, I checked every node with the "jps" command.
The NameNode shows that all daemons are running (NameNode, JobTracker, SecondaryNameNode).
I applied the same check to the DataNodes. But some nodes show only the TaskTracker running; only one node shows both DataNode and TaskTracker running perfectly.
My question is: do the configuration files located in the $HADOOP_HOME/conf directory need to be copied to all the nodes?
And why is the DataNode not running on the remaining nodes?

Please clarify these doubts, so that I can move ahead... :)
 
Thank you,
M Shaik
--------------

Re: Need Help on Hadoop cluster Setup

Posted by Mohammad Tariq <do...@gmail.com>.
Sorry for the typo in the second line of the 2nd point. The path should be
"dfs.data.dir/current/VERSION".

Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com


On Fri, Mar 22, 2013 at 8:27 PM, Mohammad Tariq <do...@gmail.com> wrote:

> Have you reformatted the HDFS? If that is the case, it was, I think, not
> done properly.
> Were the nodes which you attached serving some other cluster earlier? Your
> logs show that you are facing problems because of a mismatch between the
> namespaceID of the NN and the namespaceIDs which the DNs hold. To overcome
> this problem you can follow these steps:
>
> 1 - Stop all the DNs.
> 2 - Go to the directory which is serving as your dfs.data.dir. Inside this
> directory you'll find a subdirectory "current"; there will be a file named
> "VERSION" in this directory. In this file you can see the namespaceID
> (probably the second line).
> Change it to match the namespaceID in the "dfs.name.dir/current/VERSION"
> file.
> 3 - Restart the processes.
>
> HTH
>
>
> Warm Regards,
> Tariq
> https://mtariq.jux.com/
> cloudfront.blogspot.com
>
>
> On Fri, Mar 22, 2013 at 8:04 PM, MShaik <ms...@aol.com> wrote:
>
>>  Hi,
>>
>>  DataNode is not started on all the nodes, though the tasktracker is started
>> on all the nodes.
>>
>>  Please find the datanode log below; please let me know the solution.
>>
>>  2013-03-22 19:52:27,380 INFO org.apache.hadoop.ipc.RPC: Server at
>> n1.hc.com/192.168.1.110:54310 not available yet, Zzzzz...
>> 2013-03-22 19:52:29,386 INFO org.apache.hadoop.ipc.Client: Retrying
>> connect to server: n1.hc.com/192.168.1.110:54310. Already tried 0
>> time(s).
>> 2013-03-22 19:52:30,411 INFO org.apache.hadoop.ipc.Client: Retrying
>> connect to server: n1.hc.com/192.168.1.110:54310. Already tried 1
>> time(s).
>> 2013-03-22 19:52:31,416 INFO org.apache.hadoop.ipc.Client: Retrying
>> connect to server: n1.hc.com/192.168.1.110:54310. Already tried 2
>> time(s).
>> 2013-03-22 19:52:32,420 INFO org.apache.hadoop.ipc.Client: Retrying
>> connect to server: n1.hc.com/192.168.1.110:54310. Already tried 3
>> time(s).
>> 2013-03-22 19:52:33,426 INFO org.apache.hadoop.ipc.Client: Retrying
>> connect to server: n1.hc.com/192.168.1.110:54310. Already tried 4
>> time(s).
>> 2013-03-22 19:52:49,162 ERROR
>> org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException:
>> Incompatible namespaceIDs in /home/hduser/hadoopdata: namenode namespaceID
>> = 2050588793; datanode namespaceID = 503772406
>>  at
>> org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:232)
>>  at
>> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:147)
>>  at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:385)
>>  at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:299)
>>  at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1582)
>>  at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1521)
>>  at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1539)
>>  at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1665)
>>  at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1682)
>>
>>  2013-03-22 19:52:49,168 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
>> /************************************************************
>> SHUTDOWN_MSG: Shutting down DataNode at n4.hc.com/192.168.1.113
>> ************************************************************/
>>
>>
>> Thanks
>>
>> -----Original Message-----
>> From: Mohammad Tariq <do...@gmail.com>
>> To: user <us...@hadoop.apache.org>
>> Sent: Fri, Mar 22, 2013 7:07 pm
>> Subject: Re: Need Help on Hadoop cluster Setup
>>
>>  Hello Munavvar,
>>
>>        It depends on your configuration where your DNs and TTs will run.
>> If you have configured all your slaves to run both the processes then they
>> should. If they are not running then there is definitely some problem.
>> Could you please check your DN logs once and see if you find anything
>> unusual there. And you have to copy the files across all the machines.
>>
>>  You can do one more thing just to cross-check: point your web browser
>> to the HDFS web UI (master_machine:9000) to do that.
>>
>>  Warm Regards,
>> Tariq
>> https://mtariq.jux.com/
>>  cloudfront.blogspot.com
>>
>>
>> On Fri, Mar 22, 2013 at 6:44 PM, Munnavar Sk <ms...@aol.com> wrote:
>>
>>>
>>> Hi ,
>>>
>>> I am new to Hadoop and I have been fighting with this for the last 20
>>> days; somehow I have gathered some very good material on Hadoop.
>>>
>>> But a few questions keep coming back to me... I hope I can get the
>>> answers from your end...!
>>>
>>> I set up a cluster in distributed mode with 5 nodes. I have configured
>>> the NameNode and the DataNodes, and all datanodes can be logged into
>>> from the namenode without a password.
>>> Hadoop and Java are installed in the same location on all the nodes.
>>> After starting the cluster, I checked every node with the "jps" command.
>>> The NameNode shows that all daemons are running
>>> (NameNode, JobTracker, SecondaryNameNode).
>>> I applied the same check to the DataNodes. But some nodes show only the
>>> TaskTracker running; only one node shows both DataNode and TaskTracker
>>> running perfectly.
>>> My question is: do the configuration files located in the
>>> $HADOOP_HOME/conf directory need to be copied to all the nodes?
>>> And why is the DataNode not running on the remaining nodes?
>>>
>>> Please clarify these doubts, so that I can move ahead... :)
>>>
>>> Thank you,
>>> M Shaik
>>> --------------
>>>
>>
>>
>

Re: Need Help on Hadoop cluster Setup

Posted by Mohammad Tariq <do...@gmail.com>.
Have you reformatted the HDFS? If that is the case, I think it was not
done properly.
Were the nodes which you attached serving some other cluster earlier? Your
logs show that you are facing problems because of a mismatch between the
namespaceID of the NN and the namespaceIDs which the DNs have. To overcome
this problem you can follow these steps:

1 - Stop all the DNs.
2 - Go to the directory which is serving as your dfs.data.dir. Inside this
directory you'll find a subdirectory "current"; there will be a file named
"VERSION" in this directory. In this file you can see the namespaceID
(probably the second line). Change it to match the namespaceID in the
"dfs.name.dir/current/VERSION" file, as sketched below.
3 - Restart the processes.
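
For example, a minimal sketch of step 2 as shell commands, assuming
dfs.data.dir is /home/hduser/hadoopdata (as in the log above); the
dfs.name.dir path used here is hypothetical and must be adjusted to your
configuration:

    # On the NameNode: read the namespaceID out of dfs.name.dir/current/VERSION
    # (the path below is a hypothetical dfs.name.dir).
    NS_ID=$(grep '^namespaceID=' /home/hduser/hadoopname/current/VERSION | cut -d= -f2)

    # On each affected DataNode (carry the value over, e.g. via ssh): rewrite
    # the namespaceID line in dfs.data.dir/current/VERSION with the NN's value.
    sed -i "s/^namespaceID=.*/namespaceID=${NS_ID}/" /home/hduser/hadoopdata/current/VERSION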

HTH


Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com


On Fri, Mar 22, 2013 at 8:04 PM, MShaik <ms...@aol.com> wrote:

>  Hi,
>
>  The DataNode is not started on all the nodes, while the TaskTracker is
> started on all the nodes.
>
>  Please find the DataNode log below; please let me know the solution.
>
>  2013-03-22 19:52:27,380 INFO org.apache.hadoop.ipc.RPC: Server at
> n1.hc.com/192.168.1.110:54310 not available yet, Zzzzz...
> 2013-03-22 19:52:29,386 INFO org.apache.hadoop.ipc.Client: Retrying
> connect to server: n1.hc.com/192.168.1.110:54310. Already tried 0 time(s).
> 2013-03-22 19:52:30,411 INFO org.apache.hadoop.ipc.Client: Retrying
> connect to server: n1.hc.com/192.168.1.110:54310. Already tried 1 time(s).
> 2013-03-22 19:52:31,416 INFO org.apache.hadoop.ipc.Client: Retrying
> connect to server: n1.hc.com/192.168.1.110:54310. Already tried 2 time(s).
> 2013-03-22 19:52:32,420 INFO org.apache.hadoop.ipc.Client: Retrying
> connect to server: n1.hc.com/192.168.1.110:54310. Already tried 3 time(s).
> 2013-03-22 19:52:33,426 INFO org.apache.hadoop.ipc.Client: Retrying
> connect to server: n1.hc.com/192.168.1.110:54310. Already tried 4 time(s).
> 2013-03-22 19:52:49,162 ERROR
> org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException:
> Incompatible namespaceIDs in /home/hduser/hadoopdata: namenode namespaceID
> = 2050588793; datanode namespaceID = 503772406
>  at
> org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:232)
>  at
> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:147)
>  at
> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:385)
>  at
> org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:299)
>  at
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1582)
>  at
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1521)
>  at
> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1539)
>  at
> org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1665)
>  at
> org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1682)
>
>  2013-03-22 19:52:49,168 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down DataNode at n4.hc.com/192.168.1.113
> ************************************************************/
>
>
> Thanks
>
> -----Original Message-----
> From: Mohammad Tariq <do...@gmail.com>
> To: user <us...@hadoop.apache.org>
> Sent: Fri, Mar 22, 2013 7:07 pm
> Subject: Re: Need Help on Hadoop cluster Setup
>
>  Hello Munavvar,
>
>        It depends on your configuration where your DNs and TTs will run.
> If you have configured all your slaves to run both processes, then they
> should. If they are not running, then there is definitely some problem.
> Could you please check your DN logs and see if you find anything
> unusual there? And you do have to copy the conf files across all the machines.
>
>  You can do one more thing just to cross-check: point your web browser to
> the HDFS web UI (master_machine:50070 by default) and see whether your
> DataNodes show up there.
>
>  Warm Regards,
> Tariq
> https://mtariq.jux.com/
>  cloudfront.blogspot.com
>
>
> On Fri, Mar 22, 2013 at 6:44 PM, Munnavar Sk <ms...@aol.com> wrote:
>
>>
>> Hi,
>>
>> I am new to Hadoop and have been fighting with this for the last 20 days;
>> somehow I have picked up some very good material on Hadoop.
>>
>> But some questions are still nagging me... I hope I can get the answers
>> from your end!
>>
>> I have set up a cluster in distributed mode with 5 nodes. I have configured
>> the NameNode and the DataNodes, and the namenode can log in to all the
>> datanodes without a password.
>> Hadoop and Java are installed in the same location on all the nodes. After
>> starting the cluster, I checked every node with the "jps" command.
>> The NameNode shows all daemons running (NameNode, JobTracker,
>> SecondaryNameNode).
>> I applied the same check to the datanodes. But some nodes only show the
>> TaskTracker running; only one node shows both DataNode and TaskTracker
>> running perfectly.
>> My question: do the configuration files in the $HADOOP_HOME/conf directory
>> need to be copied to all the nodes?
>> And why is the DataNode not running on the remaining nodes?
>>
>> Please clarify these doubts so that I can move ahead... :)
>>
>> Thank you,
>> M Shaik
>> --------------
>>
>
>

Re: Need Help on Hadoop cluster Setup

Posted by MShaik <ms...@aol.com>.
Hi,


The DataNode is not started on all the nodes, while the TaskTracker is started on all the nodes.


Please find the DataNode log below; please let me know the solution.


2013-03-22 19:52:27,380 INFO org.apache.hadoop.ipc.RPC: Server at n1.hc.com/192.168.1.110:54310 not available yet, Zzzzz...
2013-03-22 19:52:29,386 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: n1.hc.com/192.168.1.110:54310. Already tried 0 time(s).
2013-03-22 19:52:30,411 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: n1.hc.com/192.168.1.110:54310. Already tried 1 time(s).
2013-03-22 19:52:31,416 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: n1.hc.com/192.168.1.110:54310. Already tried 2 time(s).
2013-03-22 19:52:32,420 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: n1.hc.com/192.168.1.110:54310. Already tried 3 time(s).
2013-03-22 19:52:33,426 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: n1.hc.com/192.168.1.110:54310. Already tried 4 time(s).
2013-03-22 19:52:49,162 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /home/hduser/hadoopdata: namenode namespaceID = 2050588793; datanode namespaceID = 503772406
	at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:232)
	at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:147)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:385)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:299)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1582)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1521)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1539)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1665)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1682)


2013-03-22 19:52:49,168 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at n4.hc.com/192.168.1.113
************************************************************/
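
To see the two IDs the exception compares, the VERSION files on both sides
can be inspected directly. A minimal sketch: the dfs.data.dir path is taken
from the log above, while the dfs.name.dir path is hypothetical and must be
adjusted:

    # On this DataNode (dfs.data.dir from the exception message):
    grep namespaceID /home/hduser/hadoopdata/current/VERSION

    # On the NameNode (hypothetical dfs.name.dir -- adjust to your setup):
    grep namespaceID /home/hduser/hadoopname/current/VERSION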


Thanks


-----Original Message-----
From: Mohammad Tariq <do...@gmail.com>
To: user <us...@hadoop.apache.org>
Sent: Fri, Mar 22, 2013 7:07 pm
Subject: Re: Need Help on Hadoop cluster Setup


Hello Munavvar,


      It depends on your configuration where your DNs and TTs will run. If you have configured all your slaves to run both processes, then they should. If they are not running, then there is definitely some problem. Could you please check your DN logs and see if you find anything unusual there? And you do have to copy the conf files across all the machines.


You can do one more thing just to cross-check: point your web browser to the HDFS web UI (master_machine:50070 by default) and see whether your DataNodes show up there.


Warm Regards,
Tariq
https://mtariq.jux.com/

cloudfront.blogspot.com





On Fri, Mar 22, 2013 at 6:44 PM, Munnavar Sk <ms...@aol.com> wrote:


Hi,


I am new to Hadoop and have been fighting with this for the last 20 days; somehow I have picked up some very good material on Hadoop.

But some questions are still nagging me... I hope I can get the answers from your end!

I have set up a cluster in distributed mode with 5 nodes. I have configured the NameNode and the DataNodes, and the namenode can log in to all the datanodes without a password.
Hadoop and Java are installed in the same location on all the nodes. After starting the cluster, I checked every node with the "jps" command.
The NameNode shows all daemons running (NameNode, JobTracker, SecondaryNameNode).
I applied the same check to the datanodes. But some nodes only show the TaskTracker running; only one node shows both DataNode and TaskTracker running perfectly.
My question: do the configuration files in the $HADOOP_HOME/conf directory need to be copied to all the nodes?
And why is the DataNode not running on the remaining nodes?

Please clarify these doubts so that I can move ahead... :)
 
Thank you,
M Shaik
--------------
 
 




 

Re: Need Help on Hadoop cluster Setup

Posted by Mohammad Tariq <do...@gmail.com>.
Hello Munavvar,

      It depends on your configuration where your DNs and TTs will run. If
you have configured all your slaves to run both processes, then they
should. If they are not running, then there is definitely some problem.
Could you please check your DN logs and see if you find anything
unusual there? And you do have to copy the conf files across all the machines.
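
For example, a minimal sketch of pushing the conf directory from the master
to every slave, assuming passwordless SSH is already set up (as in this
thread), rsync is installed, Hadoop lives at the same path on every node,
and $HADOOP_HOME/conf/slaves lists one hostname per line:

    # Push the master's Hadoop config to the same path on every slave.
    for host in $(grep -v '^#' "$HADOOP_HOME/conf/slaves"); do
        rsync -av "$HADOOP_HOME/conf/" "${host}:$HADOOP_HOME/conf/"
    done

Note that the daemons only pick up the new configuration after a restart.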

You can do one more thing just to cross-check: point your web browser to
the HDFS web UI (master_machine:50070 by default) and see whether your
DataNodes show up there.

Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com


On Fri, Mar 22, 2013 at 6:44 PM, Munnavar Sk <ms...@aol.com> wrote:

>
> Hi,
>
> I am new to Hadoop and have been fighting with this for the last 20 days;
> somehow I have picked up some very good material on Hadoop.
>
> But some questions are still nagging me... I hope I can get the answers
> from your end!
>
> I have set up a cluster in distributed mode with 5 nodes. I have configured
> the NameNode and the DataNodes, and the namenode can log in to all the
> datanodes without a password.
> Hadoop and Java are installed in the same location on all the nodes. After
> starting the cluster, I checked every node with the "jps" command.
> The NameNode shows all daemons running (NameNode, JobTracker,
> SecondaryNameNode).
> I applied the same check to the datanodes. But some nodes only show the
> TaskTracker running; only one node shows both DataNode and TaskTracker
> running perfectly.
> My question: do the configuration files in the $HADOOP_HOME/conf directory
> need to be copied to all the nodes?
> And why is the DataNode not running on the remaining nodes?
>
> Please clarify these doubts so that I can move ahead... :)
>
> Thank you,
> M Shaik
> --------------
>

> TaskTracer running, only one node shows that DataNode and TaskTracker runs
> perfectly.
> My Question is that the configuration files are required to copy all the
> nodes which is located in $HADOOP_HOME/conf directory?
> And why that DataNode is not running on remaining nodes?
>
> Please clarify this doubts, so that I can able to move ahead... :)
>
> Thank you,
> M Shaik
> --------------
>

Fwd: Need Help on Hadoop cluster Setup

Posted by Munnavar Sk <ms...@aol.com>.
Hi ,



 
I am new to Hadoop and I am fighting with this last 20days, somehowI got very good stuff on Hadoop.
 
But, some question are roaming around me...I hope, I can getthe answers from your end...!
 
I was setup a cluster in distributed mode with 5 nodes. Ihave configured Namenode and DataNodes and all datannodes are able to logingfrom namenode without password.
Hadoop and Java installed on same location in all the Nodes.After starting the cluster, I was check every node using with "jps"command.
NameNode it was shows that all demonsrunning(NameNode,JobTracker,SecondryNameNode). 
Same process is I applied for Datanodes. But, Some nodesonly showing that TaskTracer running, only one node shows that DataNode andTaskTracker runs perfectly.
My Question is that the configuration files are required tocopy all the nodes which is located in $HADOOP_HOME/conf directory?
And why that DataNode is not running on remaining nodes?
 
Please clarify this doubts, so that I can able to moveahead... :)
 
Thank you,
M Shaik
--------------
 
 

Fwd: Need Help on Hadoop cluster Setup

Posted by Munnavar Sk <ms...@aol.com>.
Hi ,



 
I am new to Hadoop and I am fighting with this last 20days, somehowI got very good stuff on Hadoop.
 
But, some question are roaming around me...I hope, I can getthe answers from your end...!
 
I was setup a cluster in distributed mode with 5 nodes. Ihave configured Namenode and DataNodes and all datannodes are able to logingfrom namenode without password.
Hadoop and Java installed on same location in all the Nodes.After starting the cluster, I was check every node using with "jps"command.
NameNode it was shows that all demonsrunning(NameNode,JobTracker,SecondryNameNode). 
Same process is I applied for Datanodes. But, Some nodesonly showing that TaskTracer running, only one node shows that DataNode andTaskTracker runs perfectly.
My Question is that the configuration files are required tocopy all the nodes which is located in $HADOOP_HOME/conf directory?
And why that DataNode is not running on remaining nodes?
 
Please clarify this doubts, so that I can able to moveahead... :)
 
Thank you,
M Shaik
--------------
 
 

Fwd: Need Help on Hadoop cluster Setup

Posted by Munnavar Sk <ms...@aol.com>.
Hi ,



 
I am new to Hadoop and I am fighting with this last 20days, somehowI got very good stuff on Hadoop.
 
But, some question are roaming around me...I hope, I can getthe answers from your end...!
 
I was setup a cluster in distributed mode with 5 nodes. Ihave configured Namenode and DataNodes and all datannodes are able to logingfrom namenode without password.
Hadoop and Java installed on same location in all the Nodes.After starting the cluster, I was check every node using with "jps"command.
NameNode it was shows that all demonsrunning(NameNode,JobTracker,SecondryNameNode). 
Same process is I applied for Datanodes. But, Some nodesonly showing that TaskTracer running, only one node shows that DataNode andTaskTracker runs perfectly.
My Question is that the configuration files are required tocopy all the nodes which is located in $HADOOP_HOME/conf directory?
And why that DataNode is not running on remaining nodes?
 
Please clarify this doubts, so that I can able to moveahead... :)
 
Thank you,
M Shaik
--------------
 
 

Fwd: Need Help on Hadoop cluster Setup

Posted by Munnavar Sk <ms...@aol.com>.
Hi ,



 
I am new to Hadoop and I am fighting with this last 20days, somehowI got very good stuff on Hadoop.
 
But, some question are roaming around me...I hope, I can getthe answers from your end...!
 
I was setup a cluster in distributed mode with 5 nodes. Ihave configured Namenode and DataNodes and all datannodes are able to logingfrom namenode without password.
Hadoop and Java installed on same location in all the Nodes.After starting the cluster, I was check every node using with "jps"command.
NameNode it was shows that all demonsrunning(NameNode,JobTracker,SecondryNameNode). 
Same process is I applied for Datanodes. But, Some nodesonly showing that TaskTracer running, only one node shows that DataNode andTaskTracker runs perfectly.
My Question is that the configuration files are required tocopy all the nodes which is located in $HADOOP_HOME/conf directory?
And why that DataNode is not running on remaining nodes?
 
Please clarify this doubts, so that I can able to moveahead... :)
 
Thank you,
M Shaik
--------------