Posted to yarn-issues@hadoop.apache.org by "Allen Wittenauer (JIRA)" <ji...@apache.org> on 2015/01/30 18:04:35 UTC

[jira] [Commented] (YARN-3119) Memory limit check need not be enforced unless aggregate usage of all containers is near limit

    [ https://issues.apache.org/jira/browse/YARN-3119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14298876#comment-14298876 ] 

Allen Wittenauer commented on YARN-3119:
----------------------------------------

How should scheduling behave in this scenario?  What happens if multiple containers are over their limits, and in what order are containers killed?

> Memory limit check need not be enforced unless aggregate usage of all containers is near limit
> ----------------------------------------------------------------------------------------------
>
>                 Key: YARN-3119
>                 URL: https://issues.apache.org/jira/browse/YARN-3119
>             Project: Hadoop YARN
>          Issue Type: Improvement
>          Components: nodemanager
>            Reporter: Anubhav Dhoot
>            Assignee: Anubhav Dhoot
>         Attachments: YARN-3119.prelim.patch
>
>
> Today we kill any container preemptively even if the total usage of all containers on that node is well within the overall limit YARN has assigned to containers. Instead, if we enforce the per-container memory limit only when the total usage of all containers approaches some configurable ratio of the overall memory assigned to containers, we can allow flexibility in container memory usage without adverse effects. This is similar in principle to how cgroups uses soft_limit_in_bytes.
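
For illustration only, below is a minimal Java sketch of the kind of check described above. It is not the attached patch; the class, method, and config names are hypothetical. The idea shown: tolerate individual per-container overages while aggregate usage stays under a configurable fraction of the node's container memory, and only select a victim once that soft limit is crossed.

// Hypothetical sketch, not the actual ContainersMonitor code.
import java.util.Map;

public class SoftMemoryLimitCheck {

  private final long totalContainerMemoryBytes;  // memory assigned to all containers on the node
  private final double enforcementRatio;         // e.g. 0.95; hypothetical config knob

  public SoftMemoryLimitCheck(long totalContainerMemoryBytes, double enforcementRatio) {
    this.totalContainerMemoryBytes = totalContainerMemoryBytes;
    this.enforcementRatio = enforcementRatio;
  }

  /** Enforce per-container limits only when aggregate usage nears the node's allocation. */
  public boolean shouldEnforce(Map<String, Long> usageByContainer) {
    long aggregateUsage =
        usageByContainer.values().stream().mapToLong(Long::longValue).sum();
    return aggregateUsage >= (long) (enforcementRatio * totalContainerMemoryBytes);
  }

  /**
   * Returns a container id to kill, or null if the aggregate soft limit is not breached.
   * One possible ordering policy: the container furthest over its own limit goes first.
   */
  public String selectVictim(Map<String, Long> usageByContainer,
                             Map<String, Long> limitByContainer) {
    if (!shouldEnforce(usageByContainer)) {
      return null;  // under the soft limit: tolerate individual overages
    }
    String victim = null;
    long worstOverage = 0;
    for (Map.Entry<String, Long> e : usageByContainer.entrySet()) {
      long overage = e.getValue()
          - limitByContainer.getOrDefault(e.getKey(), Long.MAX_VALUE);
      if (overage > worstOverage) {
        worstOverage = overage;
        victim = e.getKey();
      }
    }
    return victim;
  }
}

Under the soft limit, individual overages are simply tolerated, which mirrors the soft_limit_in_bytes behaviour the description refers to; the victim-ordering policy shown here (largest overage first) is just one possible answer to the ordering question raised in the comment above, not something the issue specifies.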



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)