Posted to user@hive.apache.org by "Omer, Farah" <fo...@microstrategy.com> on 2010/06/25 20:58:59 UTC
Hive server error: Could not get block locations. Aborting
Hi All,
Today I was running a big set of reports using Hive (trunk version) and ran into the following problem.
The reports start running but after a while they all start failing and I
see that the Hive server has shut down.
I see this message on the Hive CLI:
10/06/25 09:22:08 WARN dfs.DFSClient: Error Recovery for block null bad datanode[0]
java.io.IOException: Could not get block locations. Aborting...
    at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2153)
    at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1400(DFSClient.java:1745)
    at org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1899)
Job Submission failed with exception 'java.io.IOException(Could not get block locations. Aborting...)'
10/06/25 09:22:08 ERROR exec.ExecDriver: Job Submission failed with exception 'java.io.IOException(Could not get block locations. Aborting...)'
java.io.IOException: Could not get block locations. Aborting...
    at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2153)
    at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1400(DFSClient.java:1745)
    at org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1899)
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.ExecDriver
10/06/25 09:22:08 ERROR ql.Driver: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.ExecDriver
Exception closing file /var/lib/hadoop/cache/hadoop/mapred/system/job_201006111052_4529/job.jar
java.io.IOException: Could not get block locations. Aborting...
    at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2153)
    at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1400(DFSClient.java:1745)
    at org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1899)
Exception closing file /tmp/hive-training/hive_2010-06-25_09-21-04_517_4337603327678713321/plan.859178065
java.io.IOException: Could not get block locations. Aborting...
    at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2153)
    at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1400(DFSClient.java:1745)
    at org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1899)
training@training-vm:~/hivetrunk/hive/build/dist/bin$
The key error message above is "Could not get block locations. Aborting...".
This did not happen with the previous version of Hive I used: the complete set of reports would finish executing one after another, without any server shutdown messages.
I looked a bit into the Hive mailing list archive, and I see that one workaround would be increasing the file descriptor (fd) limit.
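From what I found in the archive, the fd limit can be checked and raised along these lines (the value 65536 and the `hadoop` user name are just examples I picked, not settings anyone confirmed for this problem):

```shell
# Show the current soft limit on open file descriptors
# for this shell and the processes it starts
ulimit -n

# Try to raise the soft limit for the current session
# (65536 is an example value; raising above the hard
# limit requires root, so don't abort if it fails)
ulimit -n 65536 2>/dev/null || echo "soft limit unchanged (needs root or a higher hard limit)"

# To make the change permanent, lines like these would go in
# /etc/security/limits.conf (the user name is an example):
#   hadoop  soft  nofile  65536
#   hadoop  hard  nofile  65536
```

The Hive server and the Hadoop daemons would each need to be restarted under the raised limit for it to take effect.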
Can someone tell me what exactly might be the reason for this kind of error message, which setting I can change to work around it, and where I can find that setting? Please let me know if there is any other value or file I should send along to make this report more complete.
Thanks very much for your help.
Farah Omer
Senior DB Engineer, MicroStrategy, Inc.
T: 703 2702230
E: fomer@microstrategy.com
http://www.microstrategy.com