Posted to mapreduce-issues@hadoop.apache.org by "Todd Lipcon (Updated) (JIRA)" <ji...@apache.org> on 2011/10/19 20:49:12 UTC

[jira] [Updated] (MAPREDUCE-3205) MR2 memory limits should be pmem, not vmem

     [ https://issues.apache.org/jira/browse/MAPREDUCE-3205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Todd Lipcon updated MAPREDUCE-3205:
-----------------------------------

    Target Version/s: 0.23.0
        Hadoop Flags: Incompatible change
              Status: Patch Available  (was: Open)
    
> MR2 memory limits should be pmem, not vmem
> ------------------------------------------
>
>                 Key: MAPREDUCE-3205
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3205
>             Project: Hadoop Map/Reduce
>          Issue Type: Improvement
>          Components: mrv2, nodemanager
>    Affects Versions: 0.23.0
>            Reporter: Todd Lipcon
>            Assignee: Todd Lipcon
>         Attachments: mr-3205.txt
>
>
> Currently, the memory resources requested for a container limit the amount of virtual memory used by the container. On my test clusters, at least, Java processes take up nearly twice as much vmem as pmem - a Java process running with -Xmx500m uses 935m of vmem and only about 560m of pmem.
> This will force admins either to under-utilize available physical memory or to oversubscribe it by configuring the resources available on a TT to be larger than the true amount of physical RAM.
> Instead, I would propose that the resource limit apply to pmem, and allow the admin to configure a "vmem overcommit ratio" which sets the vmem limit as a function of pmem limit.
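
To make the proposal concrete, here is a minimal sketch in Java of deriving a vmem limit from the container's requested pmem limit and a configurable "vmem overcommit ratio". This is not the attached mr-3205.txt patch; the property name, the 2.1 default, and the class/method names below are illustrative assumptions only.

    // Minimal sketch: interpret the requested container memory as a pmem limit
    // and derive the vmem limit from it using an admin-configurable ratio.
    public class MemoryLimitSketch {

      /** Hypothetical configuration key for the vmem overcommit ratio. */
      static final String VMEM_PMEM_RATIO = "yarn.nodemanager.vmem-pmem-ratio";
      static final float DEFAULT_VMEM_PMEM_RATIO = 2.1f;

      /**
       * The container's requested memory is treated as a physical-memory (pmem)
       * limit; the virtual-memory (vmem) limit is computed from it.
       */
      static long vmemLimitBytes(long pmemLimitBytes, float vmemPmemRatio) {
        return (long) (pmemLimitBytes * vmemPmemRatio);
      }

      public static void main(String[] args) {
        long pmemLimit = 1024L * 1024 * 1024;   // container asked for 1 GB of pmem
        float ratio = DEFAULT_VMEM_PMEM_RATIO;  // would normally come from the NM configuration
        System.out.printf("pmem limit = %d bytes, vmem limit = %d bytes%n",
            pmemLimit, vmemLimitBytes(pmemLimit, ratio));
      }
    }

A node-side monitor that reads /proc/<pid>/status (VmRSS for resident/physical memory, VmSize for virtual memory) could then compare each container's process tree against both limits and act on containers that exceed either one.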

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira