Posted to dev@whirr.apache.org by "Jai Kumar Singh (Created) (JIRA)" <ji...@apache.org> on 2012/01/30 10:05:10 UTC

[jira] [Created] (WHIRR-490) hadoop-mapreduce.mapred.child.ulimit should be unlimited by default

hadoop-mapreduce.mapred.child.ulimit should be unlimited by default
-------------------------------------------------------------------

                 Key: WHIRR-490
                 URL: https://issues.apache.org/jira/browse/WHIRR-490
             Project: Whirr
          Issue Type: Bug
          Components: service/hadoop
    Affects Versions: 0.7.0
         Environment: Hadoop Cluster on Amazon EC2
            Reporter: Jai Kumar Singh


Last week I was struggling to run a Hadoop job (the simple wordcount example that ships with Hadoop) on Amazon EC2. Jobs were dying with "Could not create the Java virtual machine". It took me a while to figure out that the ulimit was the problem: Whirr by default sets the ulimit to a fixed number rather than leaving it unlimited. Setting mapred.child.java.opts to any value (I tried everything from 64mb to 4096mb, on instance types from t1.micro to m2.4xlarge) produced the error.
More details at
https://issues.apache.org/jira/browse/HADOOP-7989
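For reference, the two settings interact roughly like this in the cluster properties file (the values below are purely illustrative, not a recommendation):

    # ulimit is expressed in KB and must comfortably exceed the -Xmx given to the task JVM
    hadoop-mapreduce.mapred.child.java.opts=-Xmx1024m
    hadoop-mapreduce.mapred.child.ulimit=4194304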

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        

[jira] [Commented] (WHIRR-490) hadoop-mapreduce.mapred.child.ulimit should be unlimited by default

Posted by "Andrei Savu (Commented) (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/WHIRR-490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13215092#comment-13215092 ] 

Andrei Savu commented on WHIRR-490:
-----------------------------------

Ok. I will remove the line from the defaults file. I'm happy we found this before making the release. 
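To be explicit, the change is just deleting this line from the bundled Hadoop defaults (value as reported earlier in this thread), so that no cap is passed down to the child JVM:

    hadoop-mapreduce.mapred.child.ulimit=1126400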
                

[jira] [Commented] (WHIRR-490) hadoop-mapreduce.mapred.child.ulimit should be unlimited by default

Posted by "Andrei Savu (Commented) (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/WHIRR-490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13198174#comment-13198174 ] 

Andrei Savu commented on WHIRR-490:
-----------------------------------

I have just checked the code; the current value is hadoop-mapreduce.mapred.child.ulimit=1126400, which looks large enough to me. Tom, any feedback on this?
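For scale, that works out to:

    1126400 KB = 1100 x 1024 KB, i.e. roughly 1.07 GiB of virtual address space per task

and since the virtual footprint of a 64-bit JVM can exceed its -Xmx by a wide margin, even fairly small heaps could trip a cap of that size.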
                

[jira] [Commented] (WHIRR-490) hadoop-mapreduce.mapred.child.ulimit should be unlimited by default

Posted by "Andrei Savu (Commented) (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/WHIRR-490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13198179#comment-13198179 ] 

Andrei Savu commented on WHIRR-490:
-----------------------------------

Jai, was this the only change you had to make to get things working?
                

[jira] [Commented] (WHIRR-490) hadoop-mapreduce.mapred.child.ulimit should be unlimited by default

Posted by "Tom White (Commented) (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/WHIRR-490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13215070#comment-13215070 ] 

Tom White commented on WHIRR-490:
---------------------------------

Leaving it unset will have the effect of making it unlimited. Sorry for the confusion.
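(For anyone who wants to double-check a running task node, a quick sanity check is:

    ulimit -v    # prints the current virtual memory cap in KB, or "unlimited"

which, with the property left unset, should simply report whatever limits.conf allows.)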
                

[jira] [Commented] (WHIRR-490) hadoop-mapreduce.mapred.child.ulimit should be unlimited by default

Posted by "Andrei Savu (Commented) (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/WHIRR-490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13215120#comment-13215120 ] 

Andrei Savu commented on WHIRR-490:
-----------------------------------

Done. Thanks Tom! Committed change to trunk & branch 0.7. Next: bump version number, build RC, send vote email, update CHANGES.txt on trunk & add release notes. 
                

[jira] [Updated] (WHIRR-490) hadoop-mapreduce.mapred.child.ulimit should be unlimited by default

Posted by "Andrei Savu (Updated) (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/WHIRR-490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrei Savu updated WHIRR-490:
------------------------------

    Fix Version/s: 0.7.1

Committed to 0.7 branch. 
                

[jira] [Commented] (WHIRR-490) hadoop-mapreduce.mapred.child.ulimit should be unlimited by default

Posted by "Jai Kumar Singh (Commented) (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/WHIRR-490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13198658#comment-13198658 ] 

Jai Kumar Singh commented on WHIRR-490:
---------------------------------------

@Andrei Savu: yes, setting the ulimit to unlimited was the only change needed to make the example jar work.
For my own job (a memory hog), though, I also had to modify the child opts.
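For completeness, that extra override was along these lines (the heap size here is only illustrative):

    hadoop-mapreduce.mapred.child.java.opts=-Xmx2048m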


                

[jira] [Commented] (WHIRR-490) hadoop-mapreduce.mapred.child.ulimit should be unlimited by default

Posted by "Andrei Savu (Commented) (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/WHIRR-490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13215062#comment-13215062 ] 

Andrei Savu commented on WHIRR-490:
-----------------------------------

From http://hadoop.apache.org/common/docs/current/mapred-default.html about mapred.child.ulimit:

"The maximum virtual memory, in KB, of a process launched by the Map-Reduce framework. This can be used to control both the Mapper/Reducer tasks and applications using Hadoop Pipes, Hadoop Streaming etc. By default it is left unspecified to let cluster admins control it via limits.conf and other such relevant mechanisms. Note: mapred.child.ulimit must be greater than or equal to the -Xmx passed to JavaVM, else the VM might not start."

Tom, it looks like "unlimited" is not a valid value for mapred.child.ulimit. Any suggestions? 
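For context, when the property is set the tasktracker effectively starts each child JVM under a virtual-memory cap, roughly like this (a sketch of the effect, not the actual launcher script):

    ulimit -v 1126400                          # value in KB, taken from mapred.child.ulimit
    exec java -Xmx200m org.apache.hadoop.mapred.Child ...

So whatever value we ship has to stay above any -Xmx users are likely to configure.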

                

[jira] [Updated] (WHIRR-490) hadoop-mapreduce.mapred.child.ulimit should be unlimited by default

Posted by "Andrei Savu (Updated) (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/WHIRR-490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrei Savu updated WHIRR-490:
------------------------------

    Fix Version/s: 0.7.1
         Assignee: Andrei Savu

Good catch! Scheduling this for 0.7.1. Thanks! 
                

[jira] [Commented] (WHIRR-490) hadoop-mapreduce.mapred.child.ulimit should be unlimited by default

Posted by "Tom White (Commented) (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/WHIRR-490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13198414#comment-13198414 ] 

Tom White commented on WHIRR-490:
---------------------------------

I think this is fine as a default. +1
                

[jira] [Updated] (WHIRR-490) hadoop-mapreduce.mapred.child.ulimit should be unlimited by default

Posted by "Andrei Savu (Updated) (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/WHIRR-490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrei Savu updated WHIRR-490:
------------------------------

    Attachment: WHIRR-490.patch

Attached a trivial patch that sets it to unlimited. 
                