Posted to common-user@hadoop.apache.org by "Ratner, Alan S (IS)" <Al...@ngc.com> on 2011/08/18 14:14:28 UTC

Version Mismatch

We have a version mismatch problem which may be Hadoop-related or may be due to a third-party product we are using that requires us to run ZooKeeper and Hadoop.  This product is rumored to soon become an Apache Incubator project.  As I am not sure what I can reveal about this third-party program prior to its release to Apache, I will refer to it as XXX.

We are running Hadoop 0.20.203.0.  We have no problems running Hadoop at all.  It runs our Hadoop programs and our Hadoop fs commands without any version mismatch complaints.  Localhost:50030 and 50070 both report we are running 0.20.203.0, r1099333.

But when we try to initialize XXX we get "org.apache.hadoop.ipc.RPC$VersionMismatch: Protocol org.apache.hadoop.hdfs.protocol.ClientProtocol version mismatch. (client = 60, server = 61)
org.apache.hadoop.ipc.RPC$VersionMismatch: Protocol org.apache.hadoop.hdfs.protocol.ClientProtocol version mismatch. (client = 60, server = 61)".  The developers of XXX tell me that this error is coming from HDFS and is unrelated to their program.  (XXX does not include any Hadoop or ZooKeeper jar files - as HBase does - but simply grabs these from HADOOP_HOME, which points to our 0.20.203.0 installation, and ZOOKEEPER_HOME.)


1.    What exactly does "client = 60" mean?  Which Hadoop version is this referring to?

2.    What exactly does "server = 61" mean?  Which Hadoop version is this referring to?

3.    Any ideas on whether this is a problem with my Hadoop configuration or whether this is a problem with XXX?


17 15:20:56,564 [security.Groups] INFO : Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
17 15:20:56,704 [conf.Configuration] WARN : mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
17 15:20:56,771 [util.Initialize] FATAL: org.apache.hadoop.ipc.RPC$VersionMismatch: Protocol org.apache.hadoop.hdfs.protocol.ClientProtocol version mismatch. (client = 60, server = 61)
org.apache.hadoop.ipc.RPC$VersionMismatch: Protocol org.apache.hadoop.hdfs.protocol.ClientProtocol version mismatch. (client = 60, server = 61)
     at org.apache.hadoop.ipc.WritableRpcEngine.getProxy(WritableRpcEngine.java:231)
     at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:224)
     at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:156)
     at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:255)
     at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:222)
     at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:94)
     at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1734)
     at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:74)
     at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:1768)
     at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1750)
     at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:234)
     at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:131)

Alan


Re: Version Mismatch

Posted by Matt Foley <mf...@hortonworks.com>.
Hi Alan,
It seems your XXX application incorporates a DFSClient, which implies it is
compiled against certain Hadoop jar files.  If it grabs those jar files and
bundles them into the XXX installable package (tarball, rpm, whatever), then
it's easy to get this kind of mismatch.  Evidently, the Hadoop client jars
XXX is loading implement an older version (60) of the HDFS ClientProtocol,
while your Hadoop service is running a newer version (61).
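If you want to see those numbers for yourself, a quick sanity check (this is
just a hypothetical one-off class, not part of Hadoop or XXX; compile and run
it with exactly the same classpath the XXX process uses) would be something
like:

    // VersionCheck.java: prints the ClientProtocol version baked into
    // whichever Hadoop jar wins on the classpath, and which jar that is.
    import org.apache.hadoop.hdfs.protocol.ClientProtocol;

    public class VersionCheck {
      public static void main(String[] args) {
        // This constant is the "client = 60" / "server = 61" number.
        System.out.println("ClientProtocol.versionID = " + ClientProtocol.versionID);
        // The jar the class was actually loaded from.
        System.out.println("loaded from: "
            + ClientProtocol.class.getProtectionDomain().getCodeSource().getLocation());
      }
    }

Run against your 0.20.203.0 install it should print 61; launched the way XXX
launches things, it evidently prints 60.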

Possible solutions:

1. You could get your XXX supplier to provide a new build compiled against
the newer Hadoop client jars.  The benefit of this approach is that they'll
test with the new jars and assure compatibility.  But when some new version
of Hadoop comes along that uses RPC version 62, you'll have the same problem
again.

2. The better solution is to configure your CLASSPATH so that jars from your
Hadoop installation are loaded before jars from the XXX installation.  The
order of entries in the CLASSPATH is very important to achieve this.
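A simple way to verify the ordering the JVM actually ends up with (again a
hypothetical one-off helper, run with the same classpath as XXX) is to dump
java.class.path and check that the entries from your Hadoop installation come
before anything XXX ships:

    // ClasspathOrder.java: prints classpath entries in order; when two jars
    // contain the same class, the earlier entry wins.
    import java.io.File;

    public class ClasspathOrder {
      public static void main(String[] args) {
        for (String entry : System.getProperty("java.class.path").split(File.pathSeparator)) {
          System.out.println(entry);
        }
      }
    }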

Is XXX installed on the same servers as Hadoop? Or on a separate server
where Hadoop is not installed?  If the latter, you will need to install
Hadoop on the XXX server, even though you won't run any Hadoop services
there, so that the up-to-date jars will be available for solution #2.

Hope this helps,
--Matt



Re: Version Mismatch

Posted by Joey Echeverria <jo...@cloudera.com>.
It means your HDFS client jars are using a different RPC version than
your namenode and datanodes. Are you sure that XXX has $HADOOP_HOME in
its classpath? It really looks like it's pointing to the wrong jars.
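
One quick way to confirm which build the XXX process actually picks up
(assuming you can run a small throwaway class with the same classpath XXX
uses) is to print the version info that ships inside the Hadoop jars:

    // BuildCheck.java: hypothetical one-off class, not part of Hadoop or XXX.
    import org.apache.hadoop.util.VersionInfo;

    public class BuildCheck {
      public static void main(String[] args) {
        // Should match what the web UIs report (0.20.203.0, r1099333) if the
        // right jars are on the classpath.
        System.out.println(VersionInfo.getVersion() + ", r" + VersionInfo.getRevision());
      }
    }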

-Joey




-- 
Joseph Echeverria
Cloudera, Inc.
443.305.9434