Posted to common-user@hadoop.apache.org by Chris Mawata <ch...@gmail.com> on 2014/12/05 14:41:33 UTC

When schedulers consider x% of resources what do they mean?

Hi all,
     when you divide up resources, e.g. with the CapacityScheduler or
FairScheduler, what does x% of resources mean? For example, is a
guaranteed 70% meant to indicate that you can have up to:
- 70% of the containers cluster-wide, irrespective of container size, or
- 70% of the containers on each node,
or should it not be a count of containers at all, but the sum of heap sizes?

Cheers
Chris Mawata

Re: When schedulers consider x% of resources what do they mean?

Posted by Chris Mawata <ch...@gmail.com>.
Thanks -- so this would be enforced at the ResourceManager level and not
the NodeManager level.

On Fri, Dec 5, 2014 at 2:22 PM, Vinod Kumar Vavilapalli <vi...@apache.org>
wrote:

>
> Resources can mean memory only (by default) or memory + CPU, etc., across
> the _entire_ cluster.
>
> So 70% of cluster resources for a queue means that 70% of the total memory
> set aside for Hadoop in the cluster is available to all applications in that
> queue.
>
> Heap sizes are part of the memory requirements for each container.
>
> HTH
> +Vinod
>
> On Dec 5, 2014, at 5:41 AM, Chris Mawata <ch...@gmail.com> wrote:
>
> > Hi all,
> >      when you divide up resources, e.g. with the CapacityScheduler or
> > FairScheduler, what does x% of resources mean? For example, is a
> > guaranteed 70% meant to indicate that you can have up to:
> > - 70% of the containers cluster-wide, irrespective of container size, or
> > - 70% of the containers on each node,
> > or should it not be a count of containers at all, but the sum of heap sizes?
> >
> > Cheers
> > Chris Mawata
>
>

Re: When schedulers consider x% of resources what do they mean?

Posted by Vinod Kumar Vavilapalli <vi...@apache.org>.
Resources can mean memory only (by default) or memory + CPU, etc., across the _entire_ cluster.
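
A minimal sketch, assuming the CapacityScheduler is in use: switching the resource calculator in capacity-scheduler.xml makes the scheduler count vcores as well as memory when working out each queue's share.

    <property>
      <!-- sketch: with the default DefaultResourceCalculator, only memory is counted -->
      <name>yarn.scheduler.capacity.resource-calculator</name>
      <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
    </property>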

So 70% of cluster resources for a queue means that 70% of the total memory set aside for Hadoop in the cluster is available to all applications in that queue.
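
A minimal capacity-scheduler.xml sketch of such a split (the queue names "prod" and "dev" are placeholders):

    <property>
      <name>yarn.scheduler.capacity.root.queues</name>
      <value>prod,dev</value>
    </property>
    <property>
      <!-- guaranteed share of cluster resources, as a percentage -->
      <name>yarn.scheduler.capacity.root.prod.capacity</name>
      <value>70</value>
    </property>
    <property>
      <name>yarn.scheduler.capacity.root.dev.capacity</name>
      <value>30</value>
    </property>

The "prod" queue is then guaranteed 70% of the cluster's scheduling resources, and can only go beyond that if its maximum-capacity allows elasticity.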

Heap sizes are part of the memory requirements for each container.
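
For example, for a MapReduce map task the container size the scheduler accounts for and the JVM heap inside it are set separately; the heap should sit comfortably below the container size (the figures below are only illustrative):

    <property>
      <!-- container size requested from YARN; this is what the scheduler counts -->
      <name>mapreduce.map.memory.mb</name>
      <value>2048</value>
    </property>
    <property>
      <!-- JVM heap inside that container; illustrative value, roughly 80% of the container -->
      <name>mapreduce.map.java.opts</name>
      <value>-Xmx1638m</value>
    </property>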

HTH
+Vinod

On Dec 5, 2014, at 5:41 AM, Chris Mawata <ch...@gmail.com> wrote:

> Hi all,
>      when you divide up resources, e.g. with the CapacityScheduler or FairScheduler, what does x% of resources mean? For example, is a guaranteed 70% meant to indicate that you can have up to:
> - 70% of the containers cluster-wide, irrespective of container size, or
> - 70% of the containers on each node,
> or should it not be a count of containers at all, but the sum of heap sizes?
> 
> Cheers
> Chris Mawata

