Posted to common-user@hadoop.apache.org by Ved Prakash <me...@gmail.com> on 2008/03/05 07:50:50 UTC

clustering problem

Hi Guys,

I am having problems setting up a cluster across 2 machines.

Machine configuration:
Master: OS: Fedora Core 7, hadoop-0.15.2

hadoop-site.xml listing

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>anaconda:50001</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>anaconda:50002</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.secondary.info.port</name>
    <value>50003</value>
  </property>
  <property>
    <name>dfs.info.port</name>
    <value>50004</value>
  </property>
  <property>
    <name>mapred.job.tracker.info.port</name>
    <value>50005</value>
  </property>
  <property>
    <name>tasktracker.http.port</name>
    <value>50006</value>
  </property>
</configuration>

conf/masters
localhost

conf/slaves
anaconda
v-desktop

The datanode, namenode, and secondarynamenode seem to be working fine on the
master, but on the slave this is not the case.

Slave: OS: Ubuntu

hadoop-site.xml listing

same as master

In the logs on the slave machine I see this:

2008-03-05 12:15:25,705 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
Initializing JVM Metrics with processName=DataNode, sessionId=null
2008-03-05 12:15:25,920 FATAL org.apache.hadoop.dfs.DataNode: Incompatible
build versions: namenode BV = Unknown; datanode BV = 607333
2008-03-05 12:15:25,926 ERROR org.apache.hadoop.dfs.DataNode:
java.io.IOException: Incompatible build versions: namenode BV = Unknown;
datanode BV = 607333
        at org.apache.hadoop.dfs.DataNode.handshake(DataNode.java:316)
        at org.apache.hadoop.dfs.DataNode.startDataNode(DataNode.java:238)
        at org.apache.hadoop.dfs.DataNode.<init>(DataNode.java:206)
        at org.apache.hadoop.dfs.DataNode.makeInstance(DataNode.java:1575)
        at org.apache.hadoop.dfs.DataNode.run(DataNode.java:1519)
        at org.apache.hadoop.dfs.DataNode.createDataNode(DataNode.java:1540)
        at org.apache.hadoop.dfs.DataNode.main(DataNode.java:1711)

Can someone help me with this, please?

Thanks

Ved



Re: clustering problem

Posted by Ved Prakash <me...@gmail.com>.
Hi,

I found the solution to the problem I posted, and I am posting the
resolution here so that others may benefit from it.

The "Incompatible build versions" error on my slave turned out to be caused
by an incompatible Java installation on the slave. I removed the current
Java installation from the slave, installed the same version as on my
master, and that solved the problem.
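
For anyone hitting the same thing, comparing the two machines up front would
have saved me time. Something like this on both master and slave (run from
the Hadoop install directory; paths may differ on your setup):

    # compare the JVMs on the two machines
    java -version
    # and check that both installs report the same Hadoop release and build revision
    bin/hadoop version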

Thanks, all, for your responses.

Ved


Re: clustering problem

Posted by Ved Prakash <me...@gmail.com>.
Hi Miles,

Yes, I have hadoop-0.15.2 installed on both my systems.

Ved

2008/3/5 Miles Osborne <mi...@inf.ed.ac.uk>:

> Did you use exactly the same version of Hadoop on each and every node?
>
> Miles

Re: clustering problem

Posted by Miles Osborne <mi...@inf.ed.ac.uk>.
Did you use exactly the same version of Hadoop on each and every node?
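
If you are not sure, it may be worth comparing what each install actually
reports. Roughly, on every node (run from the Hadoop install directory):

    # the svn revision this prints should correspond to the "BV" in the error
    bin/hadoop version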

Miles

-- 
The University of Edinburgh is a charitable body, registered in Scotland,
with registration number SC005336.

Re: clustering problem

Posted by Ved Prakash <me...@gmail.com>.
Hi Zhang,

Thanks for your reply. I tried this, but no luck: it still throws the
"Incompatible build versions" error.

I removed the DFS local directory on the slave and ran start-dfs.sh on the
master, and when I checked the logs the same problem showed up.

Do you need any more information from my side to get a better understanding
of the problem?

Please let me know,

Thanks

Ved

2008/3/5 Zhang, Guibin <gz...@freewheel.tv>:

> You can delete the DFS local dir on the slave (the local directory should
> be ${hadoop.tmp.dir}/dfs/) and try again.

Re: clustering problem

Posted by "Zhang, Guibin" <gz...@freewheel.tv>.
You can delete the DFS local dir on the slave (the local directory should be ${hadoop.tmp.dir}/dfs/) and try again.
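
A rough sketch of the steps on the slave, assuming hadoop.tmp.dir is left at
its default of /tmp/hadoop-${USER} (adjust the path if you have overridden it):

    # stop the datanode, wipe its local DFS state, then start it again
    bin/hadoop-daemon.sh stop datanode
    rm -rf /tmp/hadoop-${USER}/dfs
    bin/hadoop-daemon.sh start datanode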

