Posted to common-dev@hadoop.apache.org by "Doug Cutting (JIRA)" <ji...@apache.org> on 2006/06/28 04:29:31 UTC
[jira] Commented: (HADOOP-329) ClassCastException in DFSClient
[ http://issues.apache.org/jira/browse/HADOOP-329?page=comments#action_12418154 ]
Doug Cutting commented on HADOOP-329:
-------------------------------------
This is something that unit tests should have caught. Perhaps we should add a test that starts a two-datanode DFS system?
> ClassCastException in DFSClient
> -------------------------------
>
> Key: HADOOP-329
> URL: http://issues.apache.org/jira/browse/HADOOP-329
> Project: Hadoop
> Type: Bug
> Components: dfs
> Versions: 0.4.0
> Environment: 400 node linux x86
> Reporter: Owen O'Malley
>
> I'm getting the following message back to my launching application:
> Exception in thread "main" org.apache.hadoop.ipc.RemoteException: java.io.IOException: java.lang.ClassCastException: org.apache.hadoop.dfs.DatanodeInfo cannot be cast to java.lang.Comparable
> at java.util.TreeMap.getEntry(TreeMap.java:325)
> at java.util.TreeMap.containsKey(TreeMap.java:209)
> at java.util.TreeSet.contains(TreeSet.java:217)
> at org.apache.hadoop.dfs.DFSClient.bestNode(DFSClient.java:373)
> at org.apache.hadoop.dfs.DFSClient.access$100(DFSClient.java:42)
> at org.apache.hadoop.dfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:520)
> at org.apache.hadoop.dfs.DFSClient$DFSInputStream.read(DFSClient.java:638)
> at org.apache.hadoop.fs.FSDataInputStream$PositionCache.read(FSDataInputStream.java:167)
> at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
> at java.io.BufferedInputStream.read1(BufferedInputStream.java:256)
> at java.io.BufferedInputStream.read(BufferedInputStream.java:313)
> at java.io.DataInputStream.readFully(DataInputStream.java:174)
> at java.io.DataInputStream.readFully(DataInputStream.java:150)
> at org.apache.hadoop.fs.FSDataInputStream$Checker.<init>(FSDataInputStream.java:55)
> at org.apache.hadoop.fs.FSDataInputStream.<init>(FSDataInputStream.java:237)
> at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:157)
> at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:72)
> at org.apache.hadoop.dfs.DistributedFileSystem.copyToLocalFile(DistributedFileSystem.java:182)
> at org.apache.hadoop.mapred.JobInProgress.<init>(JobInProgress.java:83)
> at org.apache.hadoop.mapred.JobTracker.submitJob(JobTracker.java:935)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:589)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:243)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:469)
> at org.apache.hadoop.ipc.Client$Connection.run(Client.java:159)
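The failure mode in the trace can be reproduced outside Hadoop. `TreeSet` is backed by `TreeMap`, and when no `Comparator` is supplied, `TreeMap.getEntry()` casts the lookup key to `Comparable` before walking the tree, so `contains()` throws even on an empty set if the element class does not implement `Comparable` (as `DatanodeInfo` apparently does not in 0.4.0). A minimal standalone sketch, using a hypothetical `Node` class as a stand-in for `DatanodeInfo`:

```java
import java.util.TreeSet;

public class TreeSetCastDemo {
    // Stand-in for DatanodeInfo: does not implement Comparable.
    static class Node {
        final String name;
        Node(String name) { this.name = name; }
    }

    static boolean triggersCce() {
        // No Comparator supplied, so TreeSet expects natural ordering.
        TreeSet<Node> deadNodes = new TreeSet<Node>();
        try {
            // TreeMap.getEntry() casts the key to Comparable before
            // checking the (empty) tree, so this throws immediately --
            // the same path as DFSClient.bestNode() in the trace above.
            deadNodes.contains(new Node("datanode-1"));
            return false;
        } catch (ClassCastException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println("throws ClassCastException: " + triggersCce());
    }
}
```

The fix would be either to make the class implement `Comparable` again or to construct the set with an explicit `Comparator`; which of those the patch chose is not stated in this comment.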