Posted to user@hive.apache.org by 闫昆 <ya...@gmail.com> on 2013/08/22 04:15:28 UTC

hive query error

Hi all,
When I execute a Hive query it throws the exception below.
I don't know where the error log is; $HIVE_HOME/logs does not exist.

Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 3
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Cannot run job locally: Input Size (= 2304882371) is larger than
hive.exec.mode.local.auto.inputbytes.max (= 134217728)
Starting Job = job_1377137178318_0001, Tracking URL =
http://hydra0001:8088/proxy/application_1377137178318_0001/
Kill Command = /opt/module/hadoop-2.0.0-cdh4.3.0/bin/hadoop job  -kill
job_1377137178318_0001
Hadoop job information for Stage-1: number of mappers: 18; number of
reducers: 3
2013-08-22 10:07:49,654 Stage-1 map = 0%,  reduce = 0%
2013-08-22 10:08:05,544 Stage-1 map = 6%,  reduce = 0%
2013-08-22 10:08:07,289 Stage-1 map = 0%,  reduce = 0%
2013-08-22 10:08:58,217 Stage-1 map = 28%,  reduce = 0%
2013-08-22 10:09:07,210 Stage-1 map = 22%,  reduce = 0%
Ended Job = job_1377137178318_0001 with errors
Error during job, obtaining debugging information...
null
FAILED: Execution Error, return code 2 from
org.apache.hadoop.hive.ql.exec.MapRedTask
MapReduce Jobs Launched:
Job 0: Map: 18  Reduce: 3   HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec
-- 

In the Hadoop world I am just a novice exploring the entire Hadoop
ecosystem; I hope one day I can contribute my own code.

YanBit
yankunhadoop@gmail.com

Re: hive query error

Posted by 闫昆 <ya...@gmail.com>.
Thanks Bing, I found it.


2013/8/22 Bing Li <sa...@gmail.com>

> By default, hive.log should be in /tmp/<user_name>.
> The location can also be set in $HIVE_HOME/conf/hive-log4j.properties and
> hive-exec-log4j.properties via:
> - hive.log.dir
> - hive.log.file


-- 

In the Hadoop world I am just a novice exploring the entire Hadoop
ecosystem; I hope one day I can contribute my own code.

YanBit
yankunhadoop@gmail.com

Re: hive query error

Posted by Bing Li <sa...@gmail.com>.
By default, hive.log should be in /tmp/<user_name>.
The location can also be set in $HIVE_HOME/conf/hive-log4j.properties and
hive-exec-log4j.properties via:
- hive.log.dir
- hive.log.file
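
For example, a minimal sketch of where to look. The console-logger flag and
the yarn command are standard Hive/Hadoop CLI usage rather than anything
confirmed in this thread, and the last command assumes YARN log aggregation
is enabled on the cluster:

  # Hive client log, default location (per the note above):
  ls -l /tmp/$USER/hive.log

  # Or print the client log to the console for a single session:
  hive --hiveconf hive.root.logger=INFO,console

  # The failed map/reduce tasks' own logs live on the cluster; with log
  # aggregation enabled they can be fetched by the application ID shown
  # in the tracking URL:
  yarn logs -applicationId application_1377137178318_0001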

