Posted to yarn-dev@hadoop.apache.org by Alex Newman <po...@gmail.com> on 2014/09/29 04:05:45 UTC

Question

I am currently running tests against a mini YARN cluster. Because they run
on CircleCI, I need to use the absolute minimum amount of memory.

I'm currently setting:
    conf.setFloat("yarn.nodemanager.vmem-pmem-ratio", 8.0f);
    conf.setBoolean("mapreduce.map.speculative", false);
    conf.setBoolean("mapreduce.reduce.speculative", false);
    conf.setInt("yarn.scheduler.minimum-allocation-mb", 128);
    conf.setInt("yarn.scheduler.maximum-allocation-mb", 256);
    conf.setInt("yarn.nodemanager.resource.memory-mb", 256);
    conf.setInt("mapreduce.map.memory.mb", 128);
    conf.set("mapreduce.map.java.opts", "-Xmx128m");

    conf.setInt("mapreduce.reduce.memory.mb", 128);
    conf.set("mapreduce.reduce.java.opts", "-Xmx128m");
    conf.setInt("mapreduce.task.io.sort.mb", 64);

    conf.setInt("yarn.app.mapreduce.am.resource.mb", 128);
    conf.set("yarn.app.mapreduce.am.command-opts", "-Xmx109m");

    conf.setInt("yarn.scheduler.minimum-allocation-vcores", 1);
    conf.setInt("yarn.scheduler.maximum-allocation-vcores", 1);
    conf.setInt("yarn.nodemanager.resource.cpu-vcores", 1);
    conf.setInt("mapreduce.map.cpu.vcore", 1);
    conf.setInt("mapreduce.reduce.cpu.vcore", 1);

    conf.setInt("mapreduce.tasktracker.map.tasks.maximum", 1);
    conf.setInt("mapreduce.tasktracker.reduce.tasks.maximum", 1);

    conf.setInt("yarn.scheduler.capacity.root.capacity",1);
    conf.setInt("yarn.scheduler.capacity.maximum-applications", 1);
    conf.setInt("mapreduce.jobtracker.taskscheduler.maxrunningtasks.perjob", 1);

but I am still seeing many child tasks running:
https://circle-artifacts.com/gh/OhmData/hbase-public/314/artifacts/2/tmp/memory-usage.txt

Any ideas on how to actually limit YARN to one task?