Posted to mapreduce-user@hadoop.apache.org by peter 2 <re...@gmail.com> on 2014/10/17 20:24:44 UTC

Dynamically set map / reducer memory

Hi guys,
I am trying to run a few MR jobs in succession; some of the jobs don't
need that much memory and others do. I want to be able to tell Hadoop
how much memory should be allocated to the mappers of each job.
I know how to increase the memory for a mapper JVM globally, through
mapred-site.xml.
I tried manually setting mapreduce.reduce.java.opts=-Xmx<someNumber>m,
but it wasn't picked up by the mapper JVM; the global setting was always
picked up instead.

In summary:
Job 1 - Mappers need only about 250 MB of RAM
Job 2 - Mapper and Reducer need around 2 GB

I would like to be able to set these limits per job, before submitting each
job to my Hadoop cluster, rather than relying on the global setting.
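
A minimal sketch, assuming Hadoop 2.x property names: the mapper JVM heap is
controlled by mapreduce.map.java.opts, while mapreduce.reduce.java.opts applies
only to reduce tasks, so setting the reduce-side property alone would leave the
mappers on the global value. The heap and container sizes, class name, and job
name below are illustrative only, and the job is only created here, not submitted.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class MemoryPropsSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Mapper JVM heap and YARN container size for map tasks.
        conf.set("mapreduce.map.java.opts", "-Xmx200m");
        conf.setInt("mapreduce.map.memory.mb", 256);

        // Reducer JVM heap and YARN container size for reduce tasks.
        conf.set("mapreduce.reduce.java.opts", "-Xmx1638m");
        conf.setInt("mapreduce.reduce.memory.mb", 2048);

        // A Job built from this Configuration carries these values for this job
        // only; they override mapred-site.xml unless the cluster marks them final.
        Job job = Job.getInstance(conf, "job1-small-mappers");
        System.out.println(job.getConfiguration().get("mapreduce.map.java.opts"));
      }
    }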

Re: Dynamically set map / reducer memory

Posted by Girish Lingappa <gl...@pivotal.io>.
Peter

If you are using Oozie to launch the MR jobs, you can specify the memory
requirements in the workflow action specific to each job, in the workflow
XML you are using to launch the job. If you are writing your own driver
program to launch the jobs, you can still set these parameters in the job
configuration you use to launch each job.
In the case where you modified mapred-site.xml to set your memory
requirements, did you change it on the client machine from which you are
launching the job?
Please share more details on your setup and the way you are launching the
jobs so we can better understand the problem you are facing.

Girish

On Fri, Oct 17, 2014 at 11:24 AM, peter 2 <re...@gmail.com> wrote:

>  Hi guys,
> I am trying to run a few MR jobs in succession; some of the jobs don't
> need that much memory and others do. I want to be able to tell Hadoop
> how much memory should be allocated to the mappers of each job.
> I know how to increase the memory for a mapper JVM globally, through
> mapred-site.xml.
> I tried manually setting mapreduce.reduce.java.opts=-Xmx<someNumber>m,
> but it wasn't picked up by the mapper JVM; the global setting was always
> picked up instead.
>
> In summary:
> Job 1 - Mappers need only about 250 MB of RAM
> Job 2 - Mapper and Reducer need around 2 GB
>
> I would like to be able to set these limits per job, before submitting each
> job to my Hadoop cluster, rather than relying on the global setting.
>
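
A sketch of the driver-program approach described in the reply above, assuming
Hadoop 2.x APIs; the class name, memory values, paths, and the identity
mapper/reducer are placeholders for the real job. Going through ToolRunner also
lets the same properties be overridden per run with -D options at submit time.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.conf.Configured;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
    import org.apache.hadoop.util.Tool;
    import org.apache.hadoop.util.ToolRunner;

    public class PerJobMemoryDriver extends Configured implements Tool {

      @Override
      public int run(String[] args) throws Exception {
        // getConf() already includes any -D overrides from the command line.
        Configuration conf = getConf();

        // Per-job memory settings; these apply to this job only and take
        // precedence over mapred-site.xml (unless marked final there).
        conf.set("mapreduce.map.java.opts", "-Xmx200m");
        conf.setInt("mapreduce.map.memory.mb", 256);
        conf.set("mapreduce.reduce.java.opts", "-Xmx1638m");
        conf.setInt("mapreduce.reduce.memory.mb", 2048);

        Job job = Job.getInstance(conf, "per-job-memory-example");
        job.setJarByClass(PerJobMemoryDriver.class);
        job.setMapperClass(Mapper.class);    // identity mapper, placeholder for the real one
        job.setReducerClass(Reducer.class);  // identity reducer, placeholder for the real one
        job.setOutputKeyClass(LongWritable.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        return job.waitForCompletion(true) ? 0 : 1;
      }

      public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new Configuration(), new PerJobMemoryDriver(), args));
      }
    }

With this pattern, an invocation such as
hadoop jar myjobs.jar PerJobMemoryDriver -Dmapreduce.map.memory.mb=512 <in> <out>
(jar and paths illustrative) would override the map container size for a single
submission without touching any cluster-wide configuration.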
