Posted to user@giraph.apache.org by Michał Szynkiewicz <sz...@gmail.com> on 2015/03/01 23:42:00 UTC

SccComputationTestInMemory - LimitExceededException

Hi,

I'm trying to run SccComputationTestInMemory and I'm
hitting org.apache.hadoop.mapreduce.counters.LimitExceededException: Too
many counters: 121 max=120

I tried adding both
conf.set("mapreduce.job.counters.max", Integer.toString(1024));
and
conf.set("mapreduce.job.counters.limit", Integer.toString(1024));
at the beginning of the test, but neither of these changed the counter limit.
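[Editor's note: the property and class names below are real Hadoop names, but the code is a dependency-free toy that only reproduces the caching behaviour. It sketches one plausible explanation for why setting the property inside the test has no effect (consistent with the static-field workaround mentioned later in this thread); it is not Hadoop's actual implementation.]

```java
import java.util.HashMap;
import java.util.Map;

public class StaticLimitDemo {
    // Toy stand-in for a Hadoop-style Configuration object.
    static Map<String, String> conf = new HashMap<>();

    // Toy stand-in for a Counters class whose limit is read once, in a
    // static initializer, the first time the class is used.
    static class Counters {
        static final int MAX = Integer.parseInt(
                conf.getOrDefault("mapreduce.job.counters.max", "120"));
    }

    static String demo() {
        int before = Counters.MAX;  // forces the class to load with the default
        conf.put("mapreduce.job.counters.max", "1024");  // too late now
        return before + " " + Counters.MAX;  // the cached value is unchanged
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints "120 120"
    }
}
```

If the limit really is cached like this, the configuration has to be changed before the counters class is ever touched (or the static field has to be assigned directly).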

I tried -Phadoop_2 with hadoop.version=2.6.0 and 2.5.1, -Phadoop_1 with
1.2.1, -Phadoop_0.20.203.

How can I run this test successfully?

Thanks

Michał

Re: SccComputationTestInMemory - LimitExceededException

Posted by Young Han <yo...@uwaterloo.ca>.
This seems like the known problem with MapReduce counters. Try adding the
following to your hadoop-*/conf/mapred-site.xml:

  <property>
    <name>mapreduce.job.counters.max</name>
    <value>1000000</value>
  </property>
  <property>
    <name>mapreduce.job.counters.limit</name>
    <value>1000000</value>
  </property>

This does the trick for me on Hadoop 1.0.4, and should work for 0.20 as
well. Not sure about YARN.

Young


Re: SccComputationTestInMemory - LimitExceededException

Posted by Michał Szynkiewicz <sz...@gmail.com>.
I was able to increase the counters limit with Counters.MAX_COUNTER_LIMIT
= 2024 (works for -Phadoop_1 and Hadoop 1.2.1).

Then it turned out that whatever limit I set, it was always exceeded.

It turned out that, for some reason, the IntOverwriteAggregator that
SccPhaseMasterCompute uses to propagate the algorithm phase didn't work as
expected. When read from computations it had the correct value, while read
from the master computation it returned the old value.

I am writing a similar test, where the value to be passed only increases,
and I was able to work around this issue by using a Max aggregator instead
of an Overwrite aggregator.

Note that I didn't try to run it yet; these are just results from unit
tests.

btw, I'm using release-1.1.0
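[Editor's note: Giraph's overwrite aggregator keeps the last value written, while a max aggregator keeps the largest, so the two coincide whenever the aggregated value only ever increases. That equivalence is why the swap above is a safe workaround for a monotonically increasing phase. A dependency-free sketch of the two behaviours (toy classes, not the real Giraph API):]

```java
public class AggregatorSketch {
    // Toy stand-in for an overwrite-style aggregator: last write wins.
    static class Overwrite {
        int value;
        void aggregate(int v) { value = v; }
    }

    // Toy stand-in for a max-style aggregator: largest value wins, which
    // matches overwrite semantics whenever the input only increases.
    static class Max {
        int value = Integer.MIN_VALUE;
        void aggregate(int v) { value = Math.max(value, v); }
    }

    // Feed the same sequence of values to both and return the final pair.
    static int[] run(int... phases) {
        Overwrite o = new Overwrite();
        Max m = new Max();
        for (int p : phases) { o.aggregate(p); m.aggregate(p); }
        return new int[] { o.value, m.value };
    }

    public static void main(String[] args) {
        // Monotonically increasing phases: the two aggregators agree.
        int[] r = run(0, 1, 2, 3);
        System.out.println(r[0] + " " + r[1]); // prints "3 3"
    }
}
```

Note the equivalence breaks as soon as the value can decrease, so this workaround only fits tests where the phase is monotone.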



