Posted to common-user@hadoop.apache.org by us latha <us...@gmail.com> on 2008/07/06 12:16:06 UTC
unable to run wordcount example on two node cluster
Hi All,
I have the following setup
( Node 1, 2 are redhat linux 4; Node 3,4 are redhat linux 3)
Node1 -> namenode
Node2 -> job tracker
Node3 -> slave (data node)
Node4 -> slave (data node)
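With roles split like this, each node's conf/hadoop-site.xml needs to point at both masters. A minimal sketch using the 0.17-era property names (the hostnames and ports below are illustrative, not taken from this post):

```xml
<!-- conf/hadoop-site.xml: hostnames/ports are examples only -->
<configuration>
  <property>
    <name>fs.default.name</name>       <!-- HDFS namenode (Node1) -->
    <value>hdfs://node1:9000</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>    <!-- JobTracker (Node2) -->
    <value>node2:9001</value>
  </property>
</configuration>
```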
I was able to load some data into HDFS and confirmed that it is stored correctly on the datanodes.
[NODE1]$ bin/hadoop dfs -ls
Found 3 items
drwxr-xr-x - user1 supergroup 0 2008-07-04 07:10
/user/user1/input
drwxr-xr-x - user1 supergroup 0 2008-07-04 09:17
/user/user1/test3
-rw-r--r-- 3 user1 supergroup 3951 2008-07-04 07:10
/user/user1/wordcount.jar
Now I am trying to run the WordCount example described in the tutorial:
http://hadoop.apache.org/core/docs/r0.17.0/mapred_tutorial.html
Followed steps:
1) [NODE1]$ javac -classpath ${HADOOP_HOME}/hadoop-0.17.1-core.jar -d wordcount_classes WordCount.java
2) [NODE1]$ jar -cvf wordcount.jar -C wordcount_classes/ .
3) [NODE1]$ bin/hadoop dfs -copyFromLocal wordcount.jar wordcount.jar
4) [NODE1]$ bin/hadoop jar wordcount.jar org.myorg.WordCount input output
The output is as follows
[NODE1]$ bin/hadoop jar wordcount.jar org.myorg.WordCount input output2
08/07/06 03:10:23 INFO mapred.FileInputFormat: Total input paths to process : 3
08/07/06 03:10:23 INFO mapred.FileInputFormat: Total input paths to process : 3
08/07/06 03:10:24 INFO mapred.JobClient: Running job: job_200806290715_0027
08/07/06 03:10:25 INFO mapred.JobClient: map 0% reduce 0%
*** It hangs here forever****
****The log file on the node1 (namenode) says ...****
java.io.IOException: Inconsistent checkpoint fields. LV = -16 namespaceID = 315235321 cTime = 0. Expecting respectively: -16; 902613609; 0
        at org.apache.hadoop.dfs.CheckpointSignature.validateStorageInfo(CheckpointSignature.java:65)
        at org.apache.hadoop.dfs.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:568)
        at org.apache.hadoop.dfs.SecondaryNameNode$CheckpointStorage.access$000(SecondaryNameNode.java:464)
        at org.apache.hadoop.dfs.SecondaryNameNode.doMerge(SecondaryNameNode.java:341)
        at org.apache.hadoop.dfs.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:305)
        at org.apache.hadoop.dfs.SecondaryNameNode.run(SecondaryNameNode.java:216)
"hadoop-suravako-secondarynamenode-stapj13.out" 15744L, 1409088C
-------------------------------------------------------------
Please let me know if I am missing something, and please help me resolve the above issue.
I shall provide any specific log info if required.
Thank you,
Srilatha
Re: unable to run wordcount example on two node cluster
Posted by Marc Hofer <ma...@web.de>.
See:
http://www.michael-noll.com/wiki/Running_Hadoop_On_Ubuntu_Linux_(Multi-Node_Cluster)
java.io.IOException: Incompatible namespaceIDs
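The linked page describes the classic fix for "Incompatible namespaceIDs": either wipe each datanode's data directory, or edit the datanode's stored namespaceID to match the (reformatted) namenode's. A self-contained demonstration of the second option, using a fabricated VERSION file and the two IDs from the log above; on a real datanode the file lives under ${dfs.data.dir}/current/VERSION, and the path used here is just a scratch directory:

```shell
# Create a mock datanode VERSION file with the stale namespaceID
# (315235321) seen in the error; the fields mirror the real file's layout.
mkdir -p /tmp/dfs-demo/current
cat > /tmp/dfs-demo/current/VERSION <<'EOF'
#Fri Jul 04 07:10:00 PDT 2008
namespaceID=315235321
storageID=DS-example
cTime=0
storageType=DATA_NODE
layoutVersion=-16
EOF
# Rewrite the stale ID to the namenode's current one (902613609 in the
# log above), then show the result. Stop the datanode before doing this
# for real.
sed -i 's/^namespaceID=.*/namespaceID=902613609/' /tmp/dfs-demo/current/VERSION
grep '^namespaceID=' /tmp/dfs-demo/current/VERSION
```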
us latha wrote:
> [quoted text snipped]
Re: unable to run wordcount example on two node cluster
Posted by CTS-RAAJ <ra...@cognizant.com>.
Hi Latha,
Please check the hadoop-site.xml files on the slaves; they should have the entries pointing at the master nodes. I would suggest cleaning the data store and reformatting it, or creating a new one by specifying the dir entry in the master's hadoop-site.xml file.
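The clean-and-reformat approach sketched as commands, assuming the default hadoop.tmp.dir of /tmp/hadoop-${USER}; adjust the path to whatever dfs.data.dir your hadoop-site.xml actually sets, and note this deletes all HDFS data:

```shell
bin/stop-all.sh                       # on the master
rm -rf /tmp/hadoop-${USER}/dfs/data   # on every datanode (slave)
bin/hadoop namenode -format           # on the namenode only
bin/start-all.sh                      # on the master
```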
us latha wrote:
> [original message quoted in full; snipped]
--
View this message in context: http://www.nabble.com/unable-to-run-wordcount-example-on-two-node-cluster-tp18300606p18332085.html
Sent from the Hadoop core-user mailing list archive at Nabble.com.