Posted to user@hive.apache.org by shaik ahamed <sh...@gmail.com> on 2012/07/06 13:39:26 UTC

hi all

Hi users,

As I'm selecting a distinct column from the vender Hive table, I'm
getting the below error. Please help me with this.

hive> select distinct supplier from vender_sample;

Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Kill Command = /usr/local/hadoop/bin/../bin/hadoop job
-Dmapred.job.tracker=md-trngpoc1:54311 -kill job_201207061535_0005
Hadoop job information for Stage-1: number of mappers: 1; number of
reducers: 1
2012-07-06 17:03:13,978 Stage-1 map = 0%,  reduce = 0%
2012-07-06 17:03:20,001 Stage-1 map = 100%,  reduce = 0%
2012-07-06 17:04:20,248 Stage-1 map = 100%,  reduce = 0%
2012-07-06 17:04:23,262 Stage-1 map = 100%,  reduce = 100%
Ended Job = job_201207061535_0005 with errors
Error during job, obtaining debugging information...
Examining task ID: task_201207061535_0005_m_000002 (and more) from job
job_201207061535_0005

Task with the most failures(4):
-----
Task ID:
  task_201207061535_0005_r_000000
FAILED: Execution Error, return code 2 from
org.apache.hadoop.hive.ql.exec.MapRedTask
MapReduce Jobs Launched:
Job 0: Map: 1  Reduce: 1   HDFS Read: 99143041 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec

Regards
shaik.
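Return code 2 from MapRedTask is only a generic wrapper; the actual
exception is in the logs of the failing reduce attempt named in the
output above. A rough sketch for locating them on the TaskTracker node
that ran the reducer (paths assume HADOOP_HOME=/usr/local/hadoop, as in
the kill command above; the exact userlogs layout varies a little
across Hadoop versions):

    # Find the attempt directory for the failing reducer
    find /usr/local/hadoop/logs/userlogs -name '*201207061535_0005_r_000000*'

    # Then inspect its log files, e.g.:
    tail -50 <attempt-dir>/syslog
    cat <attempt-dir>/stderr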

Re: hi all

Posted by Nitin Pawar <ni...@gmail.com>.
can you tell us:
1) how many nodes are there in the cluster?
2) are there any connectivity problems, if the number of nodes > 3?
3) if you have just one slave, is your replication factor set higher than that?
4) what compression are you using for the tables?
5) if you are on a DHCP-based network, did your slave machines' IPs change?
(a quick way to check some of these from the command line is sketched below)

Thanks,
Nitin


-- 
Nitin Pawar
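A rough way to check points 1, 3, and 5 of the list above from the
command line (a sketch assuming a Hadoop 1.x-era install under
/usr/local/hadoop, as in the kill command from the original log):

    # Live/dead datanodes and per-node usage (point 1, and hints at 2)
    /usr/local/hadoop/bin/hadoop dfsadmin -report

    # Configured replication factor (point 3); dfs.replication defaults to 3
    grep -A1 'dfs.replication' /usr/local/hadoop/conf/hdfs-site.xml

    # Stable name resolution for the slaves (point 5)
    cat /etc/hosts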

Re: hi all

Posted by shaik ahamed <sh...@gmail.com>.
Hi,

    Below is the error I found in the Job Tracker log file:

Shuffle Error: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out

Please help me with this...

Thanks in advance,

Shaik.
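This shuffle error means the reducer repeatedly failed to fetch map
output over HTTP from the TaskTracker(s); it is almost always a
connectivity or hostname-resolution problem between nodes rather than a
Hive-level bug. A minimal check, assuming the common /etc/hosts
misconfiguration (a loopback alias for the machine's own hostname is a
frequent culprit on Ubuntu-style installs; the addresses below are
hypothetical):

    # On every node: the hostname must resolve to its LAN IP, not loopback.
    #   127.0.1.1     md-trngpoc1   <- an entry like this breaks the shuffle
    #   192.168.0.10  md-trngpoc1   <- hypothetical correct form
    cat /etc/hosts
    hostname -f

If name resolution is clean, checking for firewalls blocking the
TaskTracker HTTP port (50060 by default) between the nodes is the usual
next step.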



Re: hi all

Posted by Bejoy KS <be...@yahoo.com>.
Hi Shaik

There is some error while the MR job is running. To get the root cause,
please post the error log from the failed task.

You can browse the Job Tracker web UI, choose the right job id, and
drill down to the failed tasks to get the error logs (see the sketch
below).

Regards
Bejoy KS

Sent from handheld, please excuse typos.
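For reference, the JobTracker web UI in Hadoop 1.x-era clusters
normally listens on port 50030 of the JobTracker host. The host below
is taken from the -Dmapred.job.tracker value in the original log (54311
there is the IPC port, not the web port), so the exact URL is an
assumption:

    # Sanity-check that the UI is up and the job is listed
    curl -s http://md-trngpoc1:50030/jobtracker.jsp | grep 201207061535_0005

From the job page, drill down: job id -> failed reduce task -> task
attempt -> logs link, to see the full stack trace.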
