Posted to dev@hive.apache.org by "Dave Brondsema (JIRA)" <ji...@apache.org> on 2010/08/26 17:03:56 UTC
[jira] Commented: (HIVE-1019) java.io.FileNotFoundException: HIVE_PLAN (No such file or directory)
[ https://issues.apache.org/jira/browse/HIVE-1019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12902888#action_12902888 ]
Dave Brondsema commented on HIVE-1019:
--------------------------------------
I'm getting this error with the simplest of queries, on the 0.5.0 release and a recent build of the 0.6.0 SVN branch.
I have an empty table 'test'.
{noformat}
hive> select * from test;
OK
Time taken: 2.916 seconds
hive> select count(1) from test;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Starting Job = job_201002192051_14077, Tracking URL = http://hadoop-namenode-1.v39.ch3.sourceforge.com:50030/jobdetails.jsp?jobid=job_201002192051_14077
Kill Command = /usr/lib/hadoop/bin/hadoop job -Dmapred.job.tracker=hadoop-namenode-1.v39.ch3.sourceforge.com:8021 -kill job_201002192051_14077
2010-08-26 14:53:11,348 Stage-1 map = 0%, reduce = 0%
2010-08-26 14:53:21,413 Stage-1 map = 100%, reduce = 100%
Ended Job = job_201002192051_14077 with errors
Failed tasks with most(2) failures :
Task URL: http://hadoop-namenode-1.v39.ch3.sourceforge.com:50030/taskdetails.jsp?jobid=job_201002192051_14077&tipid=task_201002192051_14077_m_000000
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.ExecDriver
{noformat}
If I check the job tracker URL, it shows:
{noformat}
2010-08-26 14:41:05,789 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=MAP, sessionId=
2010-08-26 14:41:05,865 INFO org.apache.hadoop.mapred.MapTask: numReduceTasks: 1
2010-08-26 14:41:05,881 INFO org.apache.hadoop.mapred.MapTask: io.sort.mb = 100
2010-08-26 14:41:05,929 INFO org.apache.hadoop.mapred.MapTask: data buffer = 79691776/99614720
2010-08-26 14:41:05,930 INFO org.apache.hadoop.mapred.MapTask: record buffer = 262144/327680
2010-08-26 14:41:05,988 WARN org.apache.hadoop.mapred.TaskTracker: Error running child
java.lang.RuntimeException: java.io.FileNotFoundException: HIVE_PLAN (No such file or directory)
at org.apache.hadoop.hive.ql.exec.Utilities.getMapRedWork(Utilities.java:110)
at org.apache.hadoop.hive.ql.io.HiveInputFormat.init(HiveInputFormat.java:244)
at org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:208)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:219)
at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:2210)
Caused by: java.io.FileNotFoundException: HIVE_PLAN (No such file or directory)
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.<init>(FileInputStream.java:106)
at java.io.FileInputStream.<init>(FileInputStream.java:66)
at org.apache.hadoop.hive.ql.exec.Utilities.getMapRedWork(Utilities.java:101)
... 4 more
{noformat}
hive.exec.parallel is false, and there are no other jobs running.
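For anyone checking the same condition on their cluster: running {{set}} with a property name but no value from the Hive CLI prints that property's current setting, so the flag can be verified directly (output shown is what a default configuration should print):

{noformat}
hive> set hive.exec.parallel;
hive.exec.parallel=false
{noformat}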
> java.io.FileNotFoundException: HIVE_PLAN (No such file or directory)
> --------------------------------------------------------------------
>
> Key: HIVE-1019
> URL: https://issues.apache.org/jira/browse/HIVE-1019
> Project: Hadoop Hive
> Issue Type: Bug
> Affects Versions: 0.6.0
> Reporter: Bennie Schut
> Assignee: Bennie Schut
> Priority: Minor
> Fix For: 0.7.0
>
> Attachments: HIVE-1019-1.patch, HIVE-1019-2.patch, HIVE-1019-3.patch, HIVE-1019-4.patch, HIVE-1019-5.patch, HIVE-1019-6.patch, HIVE-1019-7.patch, HIVE-1019-8.patch, HIVE-1019.patch, stacktrace2.txt
>
>
> I keep getting errors like this:
> java.io.FileNotFoundException: HIVE_PLAN (No such file or directory)
> and :
> java.io.IOException: cannot find dir = hdfs://victoria.ebuddy.com:9000/tmp/hive-dwh/801467596/10002 in partToPartitionInfo!
> when running multiple threads with roughly similar queries.
> I have a patch for this which works for me.
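The multi-threaded failure mode described in the report can be sketched as two sessions serializing their plans to one fixed file path: the second writer silently clobbers the first, and whichever session cleans up first leaves the other's tasks reading a path that no longer exists. The sketch below is an illustration of that race only (the path, method names, and cleanup step are hypothetical, not Hive's actual plan-serialization code):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;
import java.nio.file.Paths;

// Illustration (not Hive code): two "sessions" share one fixed plan
// file name, so one session's cleanup deletes the file out from under
// the other, whose read then fails with a file-not-found error.
public class SharedPlanRace {
    // Single shared path, analogous to a fixed HIVE_PLAN scratch file.
    static final Path PLAN =
        Paths.get(System.getProperty("java.io.tmpdir"), "HIVE_PLAN");

    static void writePlan(String plan) throws IOException {
        // Both sessions write the SAME path; the later write clobbers it.
        Files.write(PLAN, plan.getBytes());
    }

    static String readPlan() throws IOException {
        // Throws NoSuchFileException if the file was already cleaned up.
        return new String(Files.readAllBytes(PLAN));
    }

    public static void main(String[] args) throws Exception {
        writePlan("session-A plan");
        writePlan("session-B plan");          // silently overwrites session A's plan
        Files.deleteIfExists(PLAN);           // session A finishes and cleans up
        try {
            readPlan();                       // session B's task now fails
        } catch (NoSuchFileException e) {
            System.out.println("plan file missing: " + e.getMessage());
        }
    }
}
```

Giving each query a unique plan-file name (rather than one shared fixed name) removes the race, which is the general shape of fix the attached patches appear to take.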
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.