Posted to dev@whirr.apache.org by "Tibor Kiss (JIRA)" <ji...@apache.org> on 2010/11/29 17:53:16 UTC

[jira] Created: (WHIRR-146) Changing the mapred.child.java.opts value does not change the heap size from a default one.

Changing the mapred.child.java.opts value does not change the heap size from a default one.
-------------------------------------------------------------------------------------------

                 Key: WHIRR-146
                 URL: https://issues.apache.org/jira/browse/WHIRR-146
             Project: Whirr
          Issue Type: Bug
         Environment: Amazon EC2, Amazon Linux images.
            Reporter: Tibor Kiss
            Assignee: Tibor Kiss


Even if I change the value of mapred.child.java.opts, tasks are still started with -Xmx200m.
Since mapred.child.java.opts and mapred.child.ulimit have been deprecated, we need to set mapred.map.child.java.opts and mapred.reduce.child.java.opts (and, respectively, mapred.map.child.ulimit and mapred.reduce.child.ulimit) for the settings to have any effect.
Unfortunately, the /scripts/cdh/install and /scripts/apache/install scripts, which generate /etc/hadoop/conf.dist/hadoop-site.xml, have not been updated for this deprecation, so we cannot use mappers and reducers that do not fit in a 200M heap.

How to reproduce:
1. Start a cluster on large instances (which use a 64-bit JVM) and run a simple distcp; the child JVM will crash.
2. Or run a job whose mappers or reducers do not fit in a 200M heap; its child processes will fail with an OutOfMemoryError.
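For illustration, here is a minimal sketch of how an install script could emit the non-deprecated per-task properties into the generated configuration file. This is not the committed patch; the -Xmx550m heap value and the output path are placeholder assumptions.

```shell
#!/bin/sh
# Hypothetical sketch: write the replacement properties for the
# deprecated mapred.child.java.opts into hadoop-site.xml.
# CONF_FILE and the heap size are illustrative assumptions.
CONF_FILE=hadoop-site.xml

cat > "$CONF_FILE" <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <name>mapred.map.child.java.opts</name>
    <value>-Xmx550m</value>
  </property>
  <property>
    <name>mapred.reduce.child.java.opts</name>
    <value>-Xmx550m</value>
  </property>
</configuration>
EOF

# Confirm both per-task properties were written (prints 2).
grep -c '<name>' "$CONF_FILE"
```

With a fragment like this in place of the old single mapred.child.java.opts property, map and reduce tasks pick up the configured heap size instead of the 200M default.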

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (WHIRR-146) Changing the mapred.child.java.opts value does not change the heap size from a default one.

Posted by "Tom White (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/WHIRR-146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tom White updated WHIRR-146:
----------------------------

       Resolution: Fixed
    Fix Version/s: 0.3.0
           Status: Resolved  (was: Patch Available)

I've just committed this. Thanks Tibor!



[jira] Commented: (WHIRR-146) Changing the mapred.child.java.opts value does not change the heap size from a default one.

Posted by "Tom White (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/WHIRR-146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12965574#action_12965574 ] 

Tom White commented on WHIRR-146:
---------------------------------

Looking at this more, the new properties aren't in Hadoop 0.20.2, so we should revert the part for apache/hadoop/post-configure. Tibor, do you agree?



[jira] Commented: (WHIRR-146) Changing the mapred.child.java.opts value does not change the heap size from a default one.

Posted by "Tibor Kiss (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/WHIRR-146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12965605#action_12965605 ] 

Tibor Kiss commented on WHIRR-146:
----------------------------------

I agree.



[jira] Updated: (WHIRR-146) Changing the mapred.child.java.opts value does not change the heap size from a default one.

Posted by "Tibor Kiss (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/WHIRR-146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tibor Kiss updated WHIRR-146:
-----------------------------

    Attachment: whirr-146.patch

Here is a patch that works for me.

To cover this with a JUnit test, we would probably need to write a job that runs in the integration tests.
Since we are only changing the install scripts, which users can also modify to personalize their setup, I'm not sure it is really necessary to load the integration tests with this at all.



[jira] Commented: (WHIRR-146) Changing the mapred.child.java.opts value does not change the heap size from a default one.

Posted by "Tibor Kiss (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/WHIRR-146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12965144#action_12965144 ] 

Tibor Kiss commented on WHIRR-146:
----------------------------------

Thank you, Tom!



[jira] Updated: (WHIRR-146) Changing the mapred.child.java.opts value does not change the heap size from a default one.

Posted by "Tom White (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/WHIRR-146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tom White updated WHIRR-146:
----------------------------

    Attachment: WHIRR-146.patch

I regenerated the patch following WHIRR-87.



[jira] Updated: (WHIRR-146) Changing the mapred.child.java.opts value does not change the heap size from a default one.

Posted by "Tom White (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/WHIRR-146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tom White updated WHIRR-146:
----------------------------

    Status: Patch Available  (was: Open)



[jira] Commented: (WHIRR-146) Changing the mapred.child.java.opts value does not change the heap size from a default one.

Posted by "Tom White (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/WHIRR-146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12965753#action_12965753 ] 

Tom White commented on WHIRR-146:
---------------------------------

OK, I reverted that part of the patch. Thanks.



[jira] Commented: (WHIRR-146) Changing the mapred.child.java.opts value does not change the heap size from a default one.

Posted by "Tom White (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/WHIRR-146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12964856#action_12964856 ] 

Tom White commented on WHIRR-146:
---------------------------------

+1 looks good

> In order to JUnit test it, probably we would need to write a job which runs in integration tests. 

Adding jobs to the benchmark suites that will be introduced in WHIRR-92 is probably the way to do this.

> I'm not sure if we are changing only the install scripts which are also changeable when you would like to personalize the setup

Changing the install scripts is not very user friendly at the moment. WHIRR-55 will make this easier.
