Posted to user@hive.apache.org by Tali K <nc...@hotmail.com> on 2010/12/24 13:04:35 UTC
Help: Could not obtain block: blk_ Exception
Hi All,
I am getting the following exception: Could not obtain block: blk_2706642997966533027_4482 file=/user/outputwc425729652_0/part-r-00000
I checked, and the file is actually there.
What should I do?
Please help.
Could not obtain block: blk_2706642997966533027_4482 file=/user/outputwc425729652_0/part-r-00000
at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1812)
at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:1638)
at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1767)
at java.io.DataInputStream.read(DataInputStream.java:132)
at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:264)
at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:306)
at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:158)
at java.io.InputStreamReader.read(InputStreamReader.java:167)
at java.io.BufferedReader.fill(BufferedReader.java:136)
at java.io.BufferedReader.readLine(BufferedReader.java:299)
at java.io.BufferedReader.readLine(BufferedReader.java:362)
at speeditup.ClusterByWordCountFSDriver$ClusterBasedOnWordCountMapper.map(ClusterByWordCountFSDriver.java:157)
at speeditup.ClusterByWordCountFSDriver$ClusterBasedOnWordCountMapper.map(ClusterByWordCountFSDriver.java:1)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:621)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:305)
at org.apache.hadoop.mapred.Child.main(Child.java:170)