Posted to hdfs-user@hadoop.apache.org by Alex Newman <po...@gmail.com> on 2014/09/29 18:40:25 UTC

Using YARN in end-to-end tests

I am currently developing tests that use a mini YARN cluster. Because it is
running on CircleCI, I need to use the absolute minimum amount of memory.

I'm currently setting:
    conf.setFloat("yarn.nodemanager.vmem-pmem-ratio", 8.0f);
    conf.setBoolean("mapreduce.map.speculative", false);
    conf.setBoolean("mapreduce.reduce.speculative", false);
    conf.setInt("yarn.scheduler.minimum-allocation-mb", 128);
    conf.setInt("yarn.scheduler.maximum-allocation-mb", 256);
    conf.setInt("yarn.nodemanager.resource.memory-mb", 256);
    conf.setInt("mapreduce.map.memory.mb", 128);
    conf.set("mapreduce.map.java.opts", "-Xmx128m");

    conf.setInt("mapreduce.reduce.memory.mb", 128);
    conf.set("mapreduce.reduce.java.opts", "-Xmx128m");
    conf.setInt("mapreduce.task.io.sort.mb", 64);

    conf.setInt("yarn.app.mapreduce.am.resource.mb", 128);
    conf.set("yarn.app.mapreduce.am.command-opts", "-Xmx109m");

    conf.setInt("yarn.scheduler.minimum-allocation-vcores", 1);
    conf.setInt("yarn.scheduler.maximum-allocation-vcores", 1);
    conf.setInt("yarn.nodemanager.resource.cpu-vcores", 1);
    conf.setInt("mapreduce.map.cpu.vcore", 1);
    conf.setInt("mapreduce.reduce.cpu.vcore", 1);

    conf.setInt("mapreduce.tasktracker.map.tasks.maximum", 1);
    conf.setInt("mapreduce.tasktracker.reduce.tasks.maximum", 1);

    conf.setInt("yarn.scheduler.capacity.root.capacity",1);
    conf.setInt("yarn.scheduler.capacity.maximum-applications", 1);
    conf.setInt("mapreduce.jobtracker.taskscheduler.maxrunningtasks.perjob", 1);

but I am still seeing many child tasks running:
https://circle-artifacts.com/gh/OhmData/hbase-public/314/artifacts/2/tmp/memory-usage.txt

Any ideas on how to actually limit YARN to one or two children at a time?
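
For reference, a minimal sketch of the shape I'm aiming for, assuming
MiniMRYarnCluster from the hadoop-mapreduce-client-jobclient test jar (the
two-argument constructor is an assumption worth checking against the Hadoop
version in use). Two things seem to matter: the overrides have to be in the
Configuration before init()/start(), since the RM and NM read them at service
init, and a single NodeManager keeps the schedulable memory to one node's
worth. (I suspect the mapreduce.tasktracker.* and mapreduce.jobtracker.* keys
above are MR1-era settings that YARN ignores anyway.)

    // Sketch only, not tested code: one NodeManager, limits set before start.
    // With the AM itself occupying a 128 MB container, a 256 MB node leaves
    // room for exactly one 128 MB task container at a time.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.v2.MiniMRYarnCluster;

    public class TinyYarnCluster {
      public static Configuration start() {
        Configuration conf = new Configuration();
        conf.setInt("yarn.nodemanager.resource.memory-mb", 256);
        conf.setInt("yarn.scheduler.minimum-allocation-mb", 128);
        conf.setInt("yarn.scheduler.maximum-allocation-mb", 256);
        conf.setInt("yarn.app.mapreduce.am.resource.mb", 128);

        MiniMRYarnCluster cluster = new MiniMRYarnCluster("tiny", 1); // one NM
        cluster.init(conf);   // conf must be complete before this call
        cluster.start();
        // Submit jobs against the cluster's config; it carries the RM address.
        return new Configuration(cluster.getConfig());
      }
    }

Jobs submitted with the returned configuration should then queue behind that
single task slot instead of fanning out.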

Re: Using YARN in end-to-end tests

Posted by Alex Newman <po...@gmail.com>.
Whoops, sorry about that. That link contains:

   PID   RSS %CPU COMMAND
 22826 1034888 30.8 /usr/lib/jvm/jdk1.7.0/jre/bin/java
-enableassertions -XX:MaxDirectMemorySize=1G -Xmx1900m
-XX:MaxPermSize=256m -Djava.security.egd=file:/dev/./urandom
-Djava.net.preferIPv4Stack=true -Djava.awt.headless=true -jar
/home/ubuntu/hbase-public/hbase-server/target/surefire/surefirebooter8209143835148437708.jar
/home/ubuntu/hbase-public/hbase-server/target/surefire/surefire6285278142119084921tmp
/home/ubuntu/hbase-public/hbase-server/target/surefire/surefire_514715805625755352301tmp
  6780 738792 4.9 /usr/lib/jvm/jdk1.7.0/bin/java -Xmx2048m -classpath
/home/ubuntu/.m2/apache-maven-3.2.1/boot/plexus-classworlds-2.5.1.jar
-Dclassworlds.conf=/home/ubuntu/.m2/apache-maven-3.2.1/bin/m2.conf
-Dmaven.home=/home/ubuntu/.m2/apache-maven-3.2.1
org.codehaus.plexus.classworlds.launcher.Launcher
-Dsurefire.timeout=9000 -pl hbase-server test
-PrunVerySlowMapReduceTests
 29588 231136 16.4 /usr/lib/jvm/jdk1.7.0/bin/java
-Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN
-Xmx128m -Djava.io.tmpdir=/home/ubuntu/hbase-public/hbase-server/target/MiniMRCluster_1484504875/MiniMRCluster_1484504875-localDir-nm-0_3/usercache/ubuntu/appcache/application_1411955296017_0003/container_1411955296017_0003_01_000004/tmp
-Dlog4j.configuration=container-log4j.properties
-Dyarn.app.container.log.dir=/home/ubuntu/hbase-public/hbase-server/target/MiniMRCluster_1484504875/MiniMRCluster_1484504875-logDir-nm-0_3/application_1411955296017_0003/container_1411955296017_0003_01_000004
-Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
org.apache.hadoop.mapred.YarnChild 127.0.0.1 40361
attempt_1411955296017_0003_m_000002_0 4
 29575 229832 16.0 /usr/lib/jvm/jdk1.7.0/bin/java
-Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN
-Xmx128m -Djava.io.tmpdir=/home/ubuntu/hbase-public/hbase-server/target/MiniMRCluster_1484504875/MiniMRCluster_1484504875-localDir-nm-0_0/usercache/ubuntu/appcache/application_1411955296017_0003/container_1411955296017_0003_01_000005/tmp
-Dlog4j.configuration=container-log4j.properties
-Dyarn.app.container.log.dir=/home/ubuntu/hbase-public/hbase-server/target/MiniMRCluster_1484504875/MiniMRCluster_1484504875-logDir-nm-0_0/application_1411955296017_0003/container_1411955296017_0003_01_000005
-Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
org.apache.hadoop.mapred.YarnChild 127.0.0.1 40361
attempt_1411955296017_0003_m_000003_0 5
 29804 227148 15.1 /usr/lib/jvm/jdk1.7.0/bin/java
-Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN
-Xmx128m -Djava.io.tmpdir=/home/ubuntu/hbase-public/hbase-server/target/MiniMRCluster_1484504875/MiniMRCluster_1484504875-localDir-nm-0_1/usercache/ubuntu/appcache/application_1411955296017_0003/container_1411955296017_0003_01_000013/tmp
-Dlog4j.configuration=container-log4j.properties
-Dyarn.app.container.log.dir=/home/ubuntu/hbase-public/hbase-server/target/MiniMRCluster_1484504875/MiniMRCluster_1484504875-logDir-nm-0_1/application_1411955296017_0003/container_1411955296017_0003_01_000013
-Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
org.apache.hadoop.mapred.YarnChild 127.0.0.1 40361
attempt_1411955296017_0003_m_000011_0 13
 29619 224348 15.3 /usr/lib/jvm/jdk1.7.0/bin/java
-Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN
-Xmx128m -Djava.io.tmpdir=/home/ubuntu/hbase-public/hbase-server/target/MiniMRCluster_1484504875/MiniMRCluster_1484504875-localDir-nm-0_3/usercache/ubuntu/appcache/application_1411955296017_0003/container_1411955296017_0003_01_000012/tmp
-Dlog4j.configuration=container-log4j.properties
-Dyarn.app.container.log.dir=/home/ubuntu/hbase-public/hbase-server/target/MiniMRCluster_1484504875/MiniMRCluster_1484504875-logDir-nm-0_1/application_1411955296017_0003/container_1411955296017_0003_01_000012
-Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
org.apache.hadoop.mapred.YarnChild 127.0.0.1 40361
attempt_1411955296017_0003_m_000010_0 12
 29586 224152 16.4 /usr/lib/jvm/jdk1.7.0/bin/java
-Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN
-Xmx128m -Djava.io.tmpdir=/home/ubuntu/hbase-public/hbase-server/target/MiniMRCluster_1484504875/MiniMRCluster_1484504875-localDir-nm-0_2/usercache/ubuntu/appcache/application_1411955296017_0003/container_1411955296017_0003_01_000010/tmp
-Dlog4j.configuration=container-log4j.properties
-Dyarn.app.container.log.dir=/home/ubuntu/hbase-public/hbase-server/target/MiniMRCluster_1484504875/MiniMRCluster_1484504875-logDir-nm-0_1/application_1411955296017_0003/container_1411955296017_0003_01_000010
-Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
org.apache.hadoop.mapred.YarnChild 127.0.0.1 40361
attempt_1411955296017_0003_m_000008_0 10
 29565 221344 15.3 /usr/lib/jvm/jdk1.7.0/bin/java
-Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN
-Xmx128m -Djava.io.tmpdir=/home/ubuntu/hbase-public/hbase-server/target/MiniMRCluster_1484504875/MiniMRCluster_1484504875-localDir-nm-0_2/usercache/ubuntu/appcache/application_1411955296017_0003/container_1411955296017_0003_01_000011/tmp
-Dlog4j.configuration=container-log4j.properties
-Dyarn.app.container.log.dir=/home/ubuntu/hbase-public/hbase-server/target/MiniMRCluster_1484504875/MiniMRCluster_1484504875-logDir-nm-0_1/application_1411955296017_0003/container_1411955296017_0003_01_000011
-Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
org.apache.hadoop.mapred.YarnChild 127.0.0.1 40361
attempt_1411955296017_0003_m_000009_0 11
 29793 221176 15.1 /usr/lib/jvm/jdk1.7.0/bin/java
-Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN
-Xmx128m -Djava.io.tmpdir=/home/ubuntu/hbase-public/hbase-server/target/MiniMRCluster_1484504875/MiniMRCluster_1484504875-localDir-nm-0_2/usercache/ubuntu/appcache/application_1411955296017_0003/container_1411955296017_0003_01_000006/tmp
-Dlog4j.configuration=container-log4j.properties
-Dyarn.app.container.log.dir=/home/ubuntu/hbase-public/hbase-server/target/MiniMRCluster_1484504875/MiniMRCluster_1484504875-logDir-nm-0_2/application_1411955296017_0003/container_1411955296017_0003_01_000006
-Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
org.apache.hadoop.mapred.YarnChild 127.0.0.1 40361
attempt_1411955296017_0003_m_000004_0 6
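
Reading the listing: seven YarnChild map attempts are alive at once, each
launched with -Xmx128m, and the localDir-nm-0_0 through nm-0_3 paths look
like the four local dirs of a single NodeManager rather than four nodes.
Seven concurrent 128 MB containers is far more than a 256 MB node should
admit, so one plausible explanation is that the overrides never made it into
the Configuration the mini cluster was actually started with. A quick sanity
check, assuming cluster is a handle on the started MiniMRYarnCluster:

    // Read the limit back from the live cluster, not the conf passed in.
    org.apache.hadoop.conf.Configuration live = cluster.getConfig();
    int nmCap = live.getInt("yarn.nodemanager.resource.memory-mb", -1);
    System.out.println("NM memory cap seen by the cluster: " + nmCap + " MB");

If that prints the default (8192 in Hadoop 2.x) rather than 256, the settings
were applied too late or to the wrong Configuration object.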


On Mon, Sep 29, 2014 at 9:43 AM, Ted Yu <yu...@gmail.com> wrote:
> I got the following message after clicking on the link:
>
> must be logged in
>
> Can you give login information?
>
> Cheers
>
>
> On Mon, Sep 29, 2014 at 9:40 AM, Alex Newman <po...@gmail.com> wrote:
>>
>> I am currently developing tests that use a mini YARN cluster. Because it is
>> running on CircleCI, I need to use the absolute minimum amount of memory.
>>
>> I'm currently setting:
>>     conf.setFloat("yarn.nodemanager.vmem-pmem-ratio", 8.0f);
>>     conf.setBoolean("mapreduce.map.speculative", false);
>>     conf.setBoolean("mapreduce.reduce.speculative", false);
>>     conf.setInt("yarn.scheduler.minimum-allocation-mb", 128);
>>     conf.setInt("yarn.scheduler.maximum-allocation-mb", 256);
>>     conf.setInt("yarn.nodemanager.resource.memory-mb", 256);
>>     conf.setInt("mapreduce.map.memory.mb", 128);
>>     conf.set("mapreduce.map.java.opts", "-Xmx128m");
>>
>>     conf.setInt("mapreduce.reduce.memory.mb", 128);
>>     conf.set("mapreduce.reduce.java.opts", "-Xmx128m");
>>     conf.setInt("mapreduce.task.io.sort.mb", 64);
>>
>>     conf.setInt("yarn.app.mapreduce.am.resource.mb", 128);
>>     conf.set("yarn.app.mapreduce.am.command-opts", "-Xmx109m");
>>
>>     conf.setInt("yarn.scheduler.minimum-allocation-vcores", 1);
>>     conf.setInt("yarn.scheduler.maximum-allocation-vcores", 1);
>>     conf.setInt("yarn.nodemanager.resource.cpu-vcores", 1);
>>     conf.setInt("mapreduce.map.cpu.vcore", 1);
>>     conf.setInt("mapreduce.reduce.cpu.vcore", 1);
>>
>>     conf.setInt("mapreduce.tasktracker.map.tasks.maximum", 1);
>>     conf.setInt("mapreduce.tasktracker.reduce.tasks.maximum", 1);
>>
>>     conf.setInt("yarn.scheduler.capacity.root.capacity",1);
>>     conf.setInt("yarn.scheduler.capacity.maximum-applications", 1);
>>
>>     conf.setInt("mapreduce.jobtracker.taskscheduler.maxrunningtasks.perjob", 1);
>>
>> but I am still seeing many child tasks running:
>>
>> https://circle-artifacts.com/gh/OhmData/hbase-public/314/artifacts/2/tmp/memory-usage.txt
>>
>> Any ideas on how to actually limit YARN to one or two children at a time?
>
>

Re: Using YARN in end-to-end tests

Posted by Ted Yu <yu...@gmail.com>.
I got the following message after clicking on the link:

must be logged in

Can you give login information?

Cheers


On Mon, Sep 29, 2014 at 9:40 AM, Alex Newman <po...@gmail.com> wrote:

> I am currently developing tests that use a mini YARN cluster. Because it is
> running on CircleCI, I need to use the absolute minimum amount of memory.
>
> I'm currently setting:
>     conf.setFloat("yarn.nodemanager.vmem-pmem-ratio", 8.0f);
>     conf.setBoolean("mapreduce.map.speculative", false);
>     conf.setBoolean("mapreduce.reduce.speculative", false);
>     conf.setInt("yarn.scheduler.minimum-allocation-mb", 128);
>     conf.setInt("yarn.scheduler.maximum-allocation-mb", 256);
>     conf.setInt("yarn.nodemanager.resource.memory-mb", 256);
>     conf.setInt("mapreduce.map.memory.mb", 128);
>     conf.set("mapreduce.map.java.opts", "-Xmx128m");
>
>     conf.setInt("mapreduce.reduce.memory.mb", 128);
>     conf.set("mapreduce.reduce.java.opts", "-Xmx128m");
>     conf.setInt("mapreduce.task.io.sort.mb", 64);
>
>     conf.setInt("yarn.app.mapreduce.am.resource.mb", 128);
>     conf.set("yarn.app.mapreduce.am.command-opts", "-Xmx109m");
>
>     conf.setInt("yarn.scheduler.minimum-allocation-vcores", 1);
>     conf.setInt("yarn.scheduler.maximum-allocation-vcores", 1);
>     conf.setInt("yarn.nodemanager.resource.cpu-vcores", 1);
>     conf.setInt("mapreduce.map.cpu.vcore", 1);
>     conf.setInt("mapreduce.reduce.cpu.vcore", 1);
>
>     conf.setInt("mapreduce.tasktracker.map.tasks.maximum", 1);
>     conf.setInt("mapreduce.tasktracker.reduce.tasks.maximum", 1);
>
>     conf.setInt("yarn.scheduler.capacity.root.capacity",1);
>     conf.setInt("yarn.scheduler.capacity.maximum-applications", 1);
>
>     conf.setInt("mapreduce.jobtracker.taskscheduler.maxrunningtasks.perjob", 1);
>
> but I am still seeing many child tasks running:
>
> https://circle-artifacts.com/gh/OhmData/hbase-public/314/artifacts/2/tmp/memory-usage.txt
>
> Any ideas on how to actually limit YARN to one or two children at a time?
>
