Posted to common-user@hadoop.apache.org by Rui Shi <sh...@yahoo.com> on 2007/12/15 03:03:46 UTC

How can the reducer be invoked lazily?

Hi,

How can we specify that the reducers should be invoked lazily? For instance, I know there are no partitions in the range 200-300. How can I let Hadoop know that it does not need to invoke reduce tasks for those partitions?

Thanks,

Rui




Re: How can the reducer be invoked lazily?

Posted by Ted Dunning <td...@veoh.com>.
Devaraj is correct that there is no mechanism to create reduce tasks only as
necessary, but remember that each reducer does many reductions.  This means
that empty ranges rarely have a large, unbalanced effect.

If this is still a problem, you can do a few things:

- first, you can partition on a hash of the real key (and carry the real key
in the value).  That will scatter records from the empty ranges across all
partitions, giving you the balance you seek (hash partitioning on the key is
in fact the default behavior).  A minimal sketch follows this list.

- secondly, you can use lots of reducers.  If the number of reducers is
large, then the resources lost to empty ranges will be small, since each
reducer is doing very little work.  If the number of reducers exceeds the
number of available task slots, then you get even better balancing because
machines that finish empty ranges (quickly) will ask for more work.

- conversely, you can use just a few reducers.  This way the empty ranges
will only be a small part of any given reducer's workload.
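
For illustration, here is a minimal sketch of the first suggestion against
the old org.apache.hadoop.mapred API (exact interface signatures vary across
Hadoop versions, and SpreadByHashPartitioner is a made-up name;
org.apache.hadoop.mapred.lib.HashPartitioner already does essentially this
and is the default partitioner):

    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.Partitioner;

    // Hypothetical partitioner: route each record by a hash of the real key,
    // so records spread over all reduce partitions instead of leaving a
    // contiguous range (e.g. 200-300) empty.
    public class SpreadByHashPartitioner implements Partitioner<Text, Text> {
      public void configure(JobConf job) { }

      public int getPartition(Text key, Text value, int numReduceTasks) {
        // Mask off the sign bit so the result is a valid partition index.
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
      }
    }

You would wire it in with conf.setPartitionerClass(SpreadByHashPartitioner.class),
but since HashPartitioner is already the default, you only need something like
this if your job overrides the partitioner.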

Do you have evidence that this is a real problem?


On 12/16/07 4:31 AM, "Devaraj Das" <dd...@yahoo-inc.com> wrote:

> This is not possible. The framework always creates reduce tasks for every
> partition, 0 through num_reduces - 1.
> 
>> -----Original Message-----
>> From: Rui Shi [mailto:shearershot@yahoo.com]
>> Sent: Saturday, December 15, 2007 7:34 AM
>> To: hadoop-user@lucene.apache.org
>> Subject: How can the reducer be invoked lazily?
>> 
>> Hi,
>> 
>> How can we specify that the reducers should be invoked
>> lazily? For instance, I know there are no partitions in the
>> range 200-300. How can I let Hadoop know that it does not need
>> to invoke reduce tasks for those partitions?
>> 
>> Thanks,
>> 
>> Rui
>> 
>> 
>> 
>> 
> 


RE: How can the reducer be invoked lazily?

Posted by Devaraj Das <dd...@yahoo-inc.com>.
This is not possible. The framework always creates reduce tasks for every
partition, 0 through num_reduces - 1.
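
In other words, the only knob is the total number of reduce tasks; each one is
created and scheduled even if its partition receives no records.  A hedged
sketch with the old JobConf API (MyJob and the count of 300 are illustrative):

    JobConf conf = new JobConf(MyJob.class);  // MyJob is a placeholder driver class
    conf.setNumReduceTasks(300);              // reduce tasks 0..299 all run,
                                              // even if some partitions are empty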

> -----Original Message-----
> From: Rui Shi [mailto:shearershot@yahoo.com] 
> Sent: Saturday, December 15, 2007 7:34 AM
> To: hadoop-user@lucene.apache.org
> Subject: How can the reducer be invoked lazily?
> 
> Hi,
> 
> How can we specify that the reducers should be invoked
> lazily? For instance, I know there are no partitions in the
> range 200-300. How can I let Hadoop know that it does not need
> to invoke reduce tasks for those partitions?
> 
> Thanks,
> 
> Rui
> 
> 
> 
>