Posted to dev@bigtop.apache.org by "Olaf Flebbe (JIRA)" <ji...@apache.org> on 2017/01/06 21:47:58 UTC

[jira] [Created] (BIGTOP-2663) puppet hadoop module: Consolidate memory resource settings

Olaf Flebbe created BIGTOP-2663:
-----------------------------------

             Summary: puppet hadoop module: Consolidate memory resource settings 
                 Key: BIGTOP-2663
                 URL: https://issues.apache.org/jira/browse/BIGTOP-2663
             Project: Bigtop
          Issue Type: Bug
    Affects Versions: 1.1.0
            Reporter: Olaf Flebbe
            Assignee: Olaf Flebbe
             Fix For: 1.2.0


The memory resource settings for Hadoop are outdated.

The following settings in mapred-site.xml should now be used:

{code}
mapreduce.map.java.opts
mapreduce.reduce.java.opts
{code}

These are now set to {{-Xmx1024m}} (previously this value was hardcoded).
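
For illustration, a minimal sketch of the resulting mapred-site.xml entries, assuming the stock Hadoop XML configuration format (the exact rendering by the puppet module may differ):

{code}
<!-- mapred-site.xml: JVM heap options for map and reduce tasks -->
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx1024m</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx1024m</value>
</property>
{code}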

Additionally, one can now optionally set the maximum (resident) memory for map and reduce jobs:

{code}
mapreduce.map.memory.mb
mapreduce.reduce.memory.mb
{code}
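
For example, to cap the container (resident) size at 2 GB per task, the entries could look like the sketch below; the 2048 values are illustrative, and the container size must be at least as large as the JVM heap configured above:

{code}
<!-- mapred-site.xml: maximum resident memory (in MB) per task container -->
<!-- illustrative values; must be >= the -Xmx heap configured above -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>2048</value>
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>2048</value>
</property>
{code}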

And last but not least, the module will set {{yarn.nodemanager.vmem-pmem-ratio}} to 100.

There is a common misconception that virtual memory is a limiting resource.
That was only the case for 32-bit address spaces; it no longer applies on 64-bit systems.

See for instance http://stackoverflow.com/questions/561245/virtual-memory-usage-from-java-under-linux-too-much-memory-used
for a rather up-to-date, detailed explanation of why vmem does not matter.

So we allow it to be tremendously large. Why does it matter at all? Java 8 seems to use memory-mapped I/O aggressively now, and the virtual memory in the Hadoop mapred container can become exhausted while resident memory is only 15% used.
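
For reference, a sketch of the corresponding yarn-site.xml entry, again assuming the stock Hadoop XML configuration format:

{code}
<!-- yarn-site.xml: allow each container 100x its physical memory in vmem -->
<!-- e.g. a 2048 MB container may then map ~200 GB of virtual memory -->
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>100</value>
</property>
{code}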



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)