Posted to dev@whirr.apache.org by "Tibor Kiss (JIRA)" <ji...@apache.org> on 2010/11/29 18:05:10 UTC

[jira] Updated: (WHIRR-146) Changing the mapred.child.java.opts value does not change the heap size from a default one.

     [ https://issues.apache.org/jira/browse/WHIRR-146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tibor Kiss updated WHIRR-146:
-----------------------------

    Attachment: whirr-146.patch

Here is a patch which works for me.

To JUnit test it, we would probably need to write a job that runs as part of the integration tests.
I'm not sure it is worth it, though: since we are only changing the install scripts, which users can override anyway when personalizing their setup, is it really necessary to load down the integration tests at all?
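For illustration, here is the kind of configuration the install scripts would need to generate (a sketch only, not the attached patch; the -Xmx and ulimit values are placeholder examples):

# Sketch: the install script writes /etc/hadoop/conf.dist/hadoop-site.xml
# via a heredoc; the non-deprecated per-task properties must be emitted
# inside the <configuration> element. Heap and ulimit values below are
# examples only (mapred.*.child.ulimit is in KB, here ~2x the heap).
cat > /etc/hadoop/conf.dist/hadoop-site.xml <<EOF
<?xml version="1.0"?>
<configuration>
  <!-- ... existing cluster properties ... -->
  <property>
    <name>mapred.map.child.java.opts</name>
    <value>-Xmx550m</value>
  </property>
  <property>
    <name>mapred.reduce.child.java.opts</name>
    <value>-Xmx550m</value>
  </property>
  <property>
    <name>mapred.map.child.ulimit</name>
    <value>1126400</value>
  </property>
  <property>
    <name>mapred.reduce.child.ulimit</name>
    <value>1126400</value>
  </property>
</configuration>
EOF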

> Changing the mapred.child.java.opts value does not change the heap size from a default one.
> -------------------------------------------------------------------------------------------
>
>                 Key: WHIRR-146
>                 URL: https://issues.apache.org/jira/browse/WHIRR-146
>             Project: Whirr
>          Issue Type: Bug
>         Environment: Amazon EC2, Amazon Linux images.
>            Reporter: Tibor Kiss
>            Assignee: Tibor Kiss
>         Attachments: whirr-146.patch
>
>
> Even if I change the value of mapred.child.java.opts, the tasks are still started with -Xmx200m.
> Since mapred.child.java.opts and mapred.child.ulimit have been deprecated, we need to set mapred.map.child.java.opts and mapred.reduce.child.java.opts (and, correspondingly, mapred.map.child.ulimit and mapred.reduce.child.ulimit) for the settings to have any effect.
> Unfortunately, the /scripts/cdh/install and /scripts/apache/install scripts, which generate /etc/hadoop/conf.dist/hadoop-site.xml, have not been updated for this deprecation; as a result, we cannot run mappers or reducers that do not fit in a 200 MB heap.
> How to reproduce: 
> 1. Start a cluster on large instances (which use a 64-bit JVM) and run a simple distcp; you will see the child JVM crash.
> 2. Or run a job with mappers or reducers that do not fit in a 200 MB heap; the child processes will fail with OutOfMemoryError.
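For reference, the reproduce steps quoted above amount to something like the following (the bucket, HDFS paths, and example jar name are placeholders; the jar name varies by Hadoop release):

# Step 1: on large (64-bit JVM) instances, a plain distcp crashes the child JVM:
hadoop distcp s3n://example-bucket/logs hdfs:///user/hadoop/logs

# Step 2: any job whose tasks need more than the 200 MB default heap fails with
# OutOfMemoryError, even after raising the (deprecated) mapred.child.java.opts:
hadoop jar hadoop-*-examples.jar wordcount \
  -Dmapred.child.java.opts=-Xmx1000m \
  /user/hadoop/logs /user/hadoop/logs-counts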

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.