Posted to common-user@hadoop.apache.org by Thibaut_ <tb...@blue.lu> on 2008/07/10 23:33:44 UTC

Version Mismatch when accessing HDFS through a non-Hadoop Java application?

Hi, I'm trying to access the HDFS of my Hadoop cluster from a non-Hadoop
application. Hadoop 0.17.1 is running on the standard ports.

This is the code I use:

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.dfs.DistributedFileSystem;
import org.apache.hadoop.fs.FileSystem;

// ...
FileSystem fileSystem = new DistributedFileSystem();
String hdfsurl = "hdfs://localhost:50010";

try {
    fileSystem.initialize(new URI(hdfsurl), new Configuration());
} catch (Exception e) {
    e.printStackTrace();
    System.out.println("init error:");
    System.exit(1);
}


which fails with the exception:


java.net.SocketTimeoutException: timed out waiting for rpc response
	at org.apache.hadoop.ipc.Client.call(Client.java:559)
	at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:212)
	at org.apache.hadoop.dfs.$Proxy0.getProtocolVersion(Unknown Source)
	at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:313)
	at org.apache.hadoop.dfs.DFSClient.createRPCNamenode(DFSClient.java:102)
	at org.apache.hadoop.dfs.DFSClient.<init>(DFSClient.java:178)
	at org.apache.hadoop.dfs.DistributedFileSystem.initialize(DistributedFileSystem.java:68)
	at com.iterend.spider.conf.Config.getRemoteFileSystem(Config.java:72)
	at tests.RemoteFileSystemTest.main(RemoteFileSystemTest.java:22)
init error:


The Hadoop logfile contains the following error:

2008-07-10 23:05:47,840 INFO org.apache.hadoop.dfs.Storage: Storage
directory \hadoop\tmp\hadoop-sshd_server\dfs\data is not formatted.
2008-07-10 23:05:47,840 INFO org.apache.hadoop.dfs.Storage: Formatting ...
2008-07-10 23:05:47,928 INFO org.apache.hadoop.dfs.DataNode: Registered
FSDatasetStatusMBean
2008-07-10 23:05:47,929 INFO org.apache.hadoop.dfs.DataNode: Opened server
at 50010
2008-07-10 23:05:47,933 INFO org.apache.hadoop.dfs.DataNode: Balancing
bandwith is 1048576 bytes/s
2008-07-10 23:05:48,128 INFO org.mortbay.util.Credential: Checking Resource
aliases
2008-07-10 23:05:48,344 INFO org.mortbay.http.HttpServer: Version
Jetty/5.1.4
2008-07-10 23:05:48,346 INFO org.mortbay.util.Container: Started
HttpContext[/static,/static]
2008-07-10 23:05:48,346 INFO org.mortbay.util.Container: Started
HttpContext[/logs,/logs]
2008-07-10 23:05:49,047 INFO org.mortbay.util.Container: Started
org.mortbay.jetty.servlet.WebApplicationHandler@15bc6c8
2008-07-10 23:05:49,244 INFO org.mortbay.util.Container: Started
WebApplicationContext[/,/]
2008-07-10 23:05:49,247 INFO org.mortbay.http.SocketListener: Started
SocketListener on 0.0.0.0:50075
2008-07-10 23:05:49,247 INFO org.mortbay.util.Container: Started
org.mortbay.jetty.Server@47a0d4
2008-07-10 23:05:49,257 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
Initializing JVM Metrics with processName=DataNode, sessionId=null
2008-07-10 23:05:49,535 INFO org.apache.hadoop.dfs.DataNode: New storage id
DS-2117780943-192.168.1.130-50010-1215723949510 is assigned to data-node
127.0.0.1:50010
2008-07-10 23:05:49,586 INFO org.apache.hadoop.dfs.DataNode:
127.0.0.1:50010In DataNode.run, data =
FSDataset{dirpath='c:\hadoop\tmp\hadoop-sshd_server\dfs\data\current'}
2008-07-10 23:05:49,586 INFO org.apache.hadoop.dfs.DataNode: using
BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 60000msec
2008-07-10 23:06:04,636 INFO org.apache.hadoop.dfs.DataNode: BlockReport of
0 blocks got processed in 11 msecs
2008-07-10 23:19:54,512 ERROR org.apache.hadoop.dfs.DataNode:
127.0.0.1:50010:DataXceiver: java.io.IOException: Version Mismatch
	at org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:961)
	at java.lang.Thread.run(Thread.java:619)


Any ideas how I can fix this? The Hadoop cluster and my application are both
using the same Hadoop jar!

Thanks for your help,
Thibaut
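(Aside: 50010 is the port the DataNode's data transfer server listens on, per
the "Opened server at 50010" log line above, while DistributedFileSystem's
initialize expects the NameNode's RPC address from fs.default.name. Below is a
minimal sketch of that initialization; hdfs://localhost:9000 is an assumed
NameNode address, not anything taken from this thread:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsConnectSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Point at the NameNode's RPC address, not the DataNode port 50010.
        conf.set("fs.default.name", "hdfs://localhost:9000"); // assumed address
        FileSystem fs = FileSystem.get(conf); // resolves to DistributedFileSystem
        System.out.println("root exists: " + fs.exists(new Path("/"))); // smoke test
    }
}

)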


Re: Version Mismatch when accessing HDFS through a non-Hadoop Java application?

Posted by Thibaut_ <tb...@blue.lu>.


Jason Venner-2 wrote:
> 
> When you compile from svn, the svn revision number becomes part of the
> required version for HDFS; the last time I looked at this was 0.15.3, but
> it may still be happening.
> 
Hi Jason,

Client and server are using the same library file (I checked it again:
hadoop-0.17.1-core.jar), so this shouldn't be the problem; both should be
using it. I also had the same problem with earlier versions.


This is the startup message of the DataNode:

/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = bluelu-PC/192.168.1.130
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.17.1
STARTUP_MSG:   build =
http://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.17 -r 669344;
compiled by 'hadoopqa' on Thu Jun 19 01:18:25 UTC 2008


Thibaut


Re: Version Mismatch when accessing HDFS through a non-Hadoop Java application?

Posted by Jason Venner <ja...@attributor.com>.
When you compile from svn, the svn revision number becomes part of the
required version for HDFS; the last time I looked at this was 0.15.3, but
it may still be happening.
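(As a quick cross-check, each side can print the version and svn revision it
was actually built from using org.apache.hadoop.util.VersionInfo, the class
that produces the STARTUP_MSG banner; the snippet below is only a sketch and
assumes the 0.17-era VersionInfo API:

import org.apache.hadoop.util.VersionInfo;

public class PrintHadoopVersion {
    public static void main(String[] args) {
        // These values come from the hadoop jar this JVM actually loaded,
        // so running it on the client and on the cluster exposes a mismatch.
        System.out.println("version  = " + VersionInfo.getVersion());
        System.out.println("revision = " + VersionInfo.getRevision());
    }
}

)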



Raghu Angadi wrote:
> Check the logs from the NameNode and the DataNode. The most common reason
> is that you are accidentally running an older version of one of them.
>
> The version is mentioned at the top of the log.
>
> Raghu.
> 
> [earlier quoted messages trimmed]

Re: Version Mismatch when accessing HDFS through a non-Hadoop Java application?

Posted by Raghu Angadi <ra...@yahoo-inc.com>.
Check the logs from the NameNode and the DataNode. The most common reason is
that you are accidentally running an older version of one of them.

The version is mentioned at the top of the log.

Raghu.

Thibaut_ wrote:
> Hi,
> 
> It's pretty clear that the two versions differ. I just can't make out any
> reason, except that maybe the data transfer version of the build is higher
> than the one I use (I triple-checked that I always use the same Hadoop
> version!).
> 
> Unfortunately, compiling Hadoop fails with an error on my machine (it must
> be Windows-related), so I have difficulty building a custom hadoop-core to
> see which version each side has.
> 
> Also, I'm unable to post a bug report; I always get redirected to the list
> page. It would be very helpful if someone else could look into this, or at
> least confirm the bug. The code is all in my first email.
> 
> Thanks,
> Thibaut
> 
> [earlier quoted messages trimmed]

Re: Version Mismatch when accessing HDFS through a non-Hadoop Java application?

Posted by Thibaut_ <tb...@blue.lu>.
Hi,

It's pretty clear that the two versions differ. I just can't make out any
reason, except that maybe the data transfer version of the build is higher
than the one I use (I triple-checked that I always use the same Hadoop
version!).

Unfortunately, compiling Hadoop fails with an error on my machine (it must be
Windows-related), so I have difficulty building a custom hadoop-core to see
which version each side has.

Also, I'm unable to post a bug report; I always get redirected to the list
page. It would be very helpful if someone else could look into this, or at
least confirm the bug. The code is all in my first email.

Thanks,
Thibaut



Shengkai Zhu wrote:
> 
> I've checked the code in DataNode.java, exactly where you get the error:
> 
> ...
> DataInputStream in = null;
> in = new DataInputStream(
>         new BufferedInputStream(s.getInputStream(), BUFFER_SIZE));
> short version = in.readShort();
> if (version != DATA_TRANFER_VERSION) {
>     throw new IOException("Version Mismatch");
> }
> ...
> 
> Maybe this is useful for you.
> 
> [original message quoted in full at the top of this thread; trimmed]



Re: Version Mismatch when accessing HDFS through a non-Hadoop Java application?

Posted by Shengkai Zhu <ge...@gmail.com>.
I've checked the code in DataNode.java, exactly where you get the error:

...
DataInputStream in = null;
in = new DataInputStream(
        new BufferedInputStream(s.getInputStream(), BUFFER_SIZE));
short version = in.readShort();
if (version != DATA_TRANFER_VERSION) {
    throw new IOException("Version Mismatch");
}
...

Maybe this is useful for you.
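(The check above reads the first two bytes arriving on the DataNode's data
transfer socket, port 50010 in the log, and rejects anything that doesn't
match DATA_TRANFER_VERSION. Below is a hypothetical client-side counterpart,
a sketch rather than Hadoop's actual DFSClient code, showing what a
data-transfer client writes first and why anything else arriving on that
port, such as a Hadoop RPC handshake, trips this exact exception:

import java.io.BufferedOutputStream;
import java.io.DataOutputStream;
import java.net.Socket;

public class DataTransferHandshakeSketch {
    // Placeholder value: the real constant is defined in DataNode.java for
    // your build, and it changes between releases.
    static final short DATA_TRANFER_VERSION = 11;

    public static void main(String[] args) throws Exception {
        Socket s = new Socket("localhost", 50010); // DataNode data transfer port
        DataOutputStream out = new DataOutputStream(
                new BufferedOutputStream(s.getOutputStream()));
        // The first two bytes on the wire are the protocol version; the
        // DataNode's in.readShort() above must see exactly this value.
        out.writeShort(DATA_TRANFER_VERSION);
        // ... the opcode and the rest of the request would follow here.
        out.flush();
        s.close();
    }
}

)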

On 7/11/08, Thibaut_ <tb...@blue.lu> wrote:
>
> [original message quoted in full at the top of this thread; trimmed]