Posted to mapreduce-dev@hadoop.apache.org by Arun Murthy <ac...@hortonworks.com> on 2011/07/22 18:50:22 UTC

Re: max concurrent mapper/reducer in hadoop

Moving to mapreduce-dev@, bcc general@.

Yes, as described in the bug, the CapacityScheduler has support for
high-RAM jobs, which is a better model for shared multi-tenant
clusters. The hadoop-0.20.203 release from Apache has the most current
and tested version of the CapacityScheduler.
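For reference, high-RAM jobs in the 0.20.203 CapacityScheduler are driven by the memory-based scheduling properties in mapred-site.xml: the cluster defines a per-slot memory size, and a job that requests more memory per task is given multiple slots. A rough sketch (property names from the 0.20.x docs; the values here are illustrative, not recommendations):

```xml
<!-- mapred-site.xml: cluster-wide memory settings (illustrative values) -->
<property>
  <name>mapred.cluster.map.memory.mb</name>
  <value>2048</value> <!-- memory represented by one map slot -->
</property>
<property>
  <name>mapred.cluster.max.map.memory.mb</name>
  <value>8192</value> <!-- upper bound a job may request per map task -->
</property>

<!-- per-job request: a "high-RAM" job asks for multiple slots' worth -->
<property>
  <name>mapred.job.map.memory.mb</name>
  <value>4096</value> <!-- each map of this job occupies 2 slots -->
</property>
```

The reduce side has matching `mapred.cluster.reduce.memory.mb` / `mapred.job.reduce.memory.mb` properties. The net effect is the one asked about below: a high-RAM job consumes more slots per task, which caps how many of its tasks run concurrently without a hard per-job task limit.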

Arun

Sent from my iPhone

On Jul 22, 2011, at 9:36 AM, Liang Chenmin <ch...@cs.cmu.edu> wrote:

> Hi all,
>    I am using the hadoop 0.20.2 CDH3 version. The old method of setting the
> max concurrent mappers/reducers in code no longer works. I saw a patch for
> this, but its current status is "Won't Fix". Is there any update on this? I
> am using the Fair Scheduler; should I use the Capacity Scheduler instead?
> https://issues.apache.org/jira/browse/HADOOP-5170
>
> Thanks,
> chenmin liang