Posted to user@hive.apache.org by Krishna Rao <kr...@gmail.com> on 2012/12/31 16:45:54 UTC
(Unknown)
A particular query that I run fails with the following error:
***
Job 18: Map: 2 Reduce: 1 Cumulative CPU: 3.67 sec HDFS Read: 0 HDFS
Write: 0 SUCCESS
Exception in thread "main"
org.apache.hadoop.mapreduce.counters.LimitExceededException: Too many
counters: 121 max=120
...
***
Googling suggests that I should increase "mapreduce.job.counters.limit",
but also that the number of counters a job uses affects the memory used
by the JobTracker, so I shouldn't raise this number too high.
Is there a rule of thumb for what this number should be as a function of
JobTracker memory? That is, should I be cautious and increase it by 5 at
a time, or could I just double it?
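[Editor's note: as a back-of-envelope sketch of the memory question, one can multiply retained jobs × tasks per job × counters per task × an assumed per-counter cost. All numbers below are illustrative assumptions, not measured values from this thread: ~64 bytes per counter entry, 100 retained jobs, 500 tasks per job.]

```python
# Back-of-envelope estimate of JobTracker memory held by counters.
# ASSUMPTIONS (illustrative only, not from the thread):
#   - ~64 bytes per counter entry (name string + long value + overhead)
#   - the JobTracker retains counters for every task of every retained job
BYTES_PER_COUNTER = 64   # assumed average cost per counter entry
RETAINED_JOBS = 100      # assumed number of completed jobs kept in memory
TASKS_PER_JOB = 500      # assumed maps + reduces per job

def counter_memory_mb(counters_per_task: int) -> float:
    """Approximate MB the JobTracker holds for counters alone."""
    total_bytes = (RETAINED_JOBS * TASKS_PER_JOB
                   * counters_per_task * BYTES_PER_COUNTER)
    return total_bytes / (1024 * 1024)

# Doubling the limit from 120 to 240 roughly doubles this footprint:
print(round(counter_memory_mb(120), 1))  # -> 366.2
print(round(counter_memory_mb(240), 1))  # -> 732.4
```

Under these assumptions the counter footprint scales linearly with the limit, so doubling it is predictable; the real question is what fraction of JobTracker heap that represents on a given cluster.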
Cheers,
Krishna
Re: Hive shows counters limit exceeded
Posted by Alexander Alten-Lorenz <wg...@gmail.com>.
Hi,
Please follow netiquette when posting to mailing lists; a good first step is a meaningful Subject line.
http://mapredit.blogspot.de/2012/12/hive-query-error-too-many-counters.html
- Alex
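[Editor's note: the post linked above describes raising the counter limit cluster-wide. A minimal sketch of the property in mapred-site.xml; the value 200 is illustrative, and the JobTracker must be restarted to pick up the change:]

```xml
<!-- Sketch only: raise the per-job counter cap from the default of 120.
     The value 200 is illustrative; this is read by the JobTracker, so it
     belongs in mapred-site.xml on the cluster and requires a restart. -->
<property>
  <name>mapreduce.job.counters.limit</name>
  <value>200</value>
</property>
```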
On Dec 31, 2012, at 4:45 PM, Krishna Rao <kr...@gmail.com> wrote:
> [original message quoted in full; snipped]
--
Alexander Alten-Lorenz
http://mapredit.blogspot.com
German Hadoop LinkedIn Group: http://goo.gl/N8pCF