Posted to user@hadoop.apache.org by Manu Zhang <ow...@gmail.com> on 2013/10/23 04:09:06 UTC

map container is assigned default memory size rather than user configured which will cause TaskAttempt failure

Hi,

I've been running Terasort on Hadoop-2.0.4.

Every time, there is a small number of map failures (4 or 5, say) because
the containers run beyond the virtual memory limit.

I've set mapreduce.map.memory.mb to a safe value (2560MB), so most
TaskAttempts go fine, while the failed maps are assigned the default
1024MB.

My question is thus: why are a small number of containers assigned the
default memory size rather than the user-configured value?

Any thoughts ?

Thanks,
Manu Zhang
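
For reference, the "running beyond virtual memory limits" kill described above
comes from the NodeManager: each container is allowed a virtual memory
footprint of its container size multiplied by yarn.nodemanager.vmem-pmem-ratio
(2.1 by default). A minimal sketch of the settings involved; the property names
are standard Hadoop 2.x, the 2560 value is the one from this thread, and the
rest is illustrative:

    <!-- mapred-site.xml: container size requested for each map task
         (defaults to 1024 MB when not set, which matches the failing maps) -->
    <property>
      <name>mapreduce.map.memory.mb</name>
      <value>2560</value>
    </property>

    <!-- yarn-site.xml: the virtual memory limit is container size * this
         ratio; the check itself can also be disabled with
         yarn.nodemanager.vmem-check-enabled = false -->
    <property>
      <name>yarn.nodemanager.vmem-pmem-ratio</name>
      <value>2.1</value>
    </property>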

Re: map container is assigned default memory size rather than user configured which will cause TaskAttempt failure

Posted by Manu Zhang <ow...@gmail.com>.
My mapreduce.map.java.opts is 1024MB

Thanks,
Manu


On Thu, Oct 24, 2013 at 3:11 PM, Tsuyoshi OZAWA <oz...@gmail.com> wrote:

> Hi,
>
> How about checking the value of mapreduce.map.java.opts? Are your JVMs
> launched with assumed heap memory?
>
> On Thu, Oct 24, 2013 at 11:31 AM, Manu Zhang <ow...@gmail.com>
> wrote:
> > Just confirmed the problem still exists even when the mapred-site.xml on
> > all nodes has the same configuration (mapreduce.map.memory.mb = 2560).
> >
> > Any more thoughts ?
> >
> > Thanks,
> > Manu
> >
> >
> > On Thu, Oct 24, 2013 at 8:59 AM, Manu Zhang <ow...@gmail.com>
> wrote:
> >>
> >> Thanks Ravi.
> >>
> >> I do have mapred-site.xml under /etc/hadoop/conf/ on those nodes, but it
> >> seems odd to me that the tasks would read configuration from those files,
> >> since it's the client that requests the resources. I have another
> >> mapred-site.xml in the directory where I run my job, and I suppose my job
> >> should read its conf from that one. Please correct me if I am mistaken.
> >>
> >> Also, not always the same nodes. The number of failures is random, too.
> >>
> >> Anyway, I will have my settings in all the nodes' mapred-site.xml and
> see
> >> if the problem goes away.
> >>
> >> Manu
> >>
> >>
> >> On Thu, Oct 24, 2013 at 1:40 AM, Ravi Prakash <ra...@ymail.com>
> wrote:
> >>>
> >>> Manu!
> >>>
> >>> This should not be the case. All tasks should have the configuration
> >>> values you specified propagated to them. Are you sure your setup is
> correct?
> >>> Are they always the same nodes which run with 1024Mb? Perhaps you have
> >>> mapred-site.xml on those nodes?
> >>>
> >>> HTH
> >>> Ravi
> >>>
> >>>
> >>> On Tuesday, October 22, 2013 9:09 PM, Manu Zhang
> >>> <ow...@gmail.com> wrote:
> >>> Hi,
> >>>
> >>> I've been running Terasort on Hadoop-2.0.4.
> >>>
> >>> Every time, there is a small number of map failures (4 or 5, say)
> >>> because the containers run beyond the virtual memory limit.
> >>>
> >>> I've set mapreduce.map.memory.mb to a safe value (2560MB), so most
> >>> TaskAttempts go fine, while the failed maps are assigned the default
> >>> 1024MB.
> >>>
> >>> My question is thus: why are a small number of containers assigned the
> >>> default memory size rather than the user-configured value?
> >>>
> >>> Any thoughts ?
> >>>
> >>> Thanks,
> >>> Manu Zhang
> >>>
> >>>
> >>>
> >>
> >
>
>
>
> --
> - Tsuyoshi
>

Re: map container is assigned default memory size rather than user configured which will cause TaskAttempt failure

Posted by Tsuyoshi OZAWA <oz...@gmail.com>.
Hi,

How about checking the value of mapreduce.map.java.opts? Are your JVMs
launched with assumed heap memory?

On Thu, Oct 24, 2013 at 11:31 AM, Manu Zhang <ow...@gmail.com> wrote:
> Just confirmed the problem still exists even when the mapred-site.xml on all
> nodes has the same configuration (mapreduce.map.memory.mb = 2560).
>
> Any more thoughts ?
>
> Thanks,
> Manu
>
>
> On Thu, Oct 24, 2013 at 8:59 AM, Manu Zhang <ow...@gmail.com> wrote:
>>
>> Thanks Ravi.
>>
>> I do have mapred-site.xml under /etc/hadoop/conf/ on those nodes, but it
>> seems odd to me that the tasks would read configuration from those files,
>> since it's the client that requests the resources. I have another
>> mapred-site.xml in the directory where I run my job, and I suppose my job
>> should read its conf from that one. Please correct me if I am mistaken.
>>
>> Also, not always the same nodes. The number of failures is random, too.
>>
>> Anyway, I will have my settings in all the nodes' mapred-site.xml and see
>> if the problem goes away.
>>
>> Manu
>>
>>
>> On Thu, Oct 24, 2013 at 1:40 AM, Ravi Prakash <ra...@ymail.com> wrote:
>>>
>>> Manu!
>>>
>>> This should not be the case. All tasks should have the configuration
>>> values you specified propagated to them. Are you sure your setup is correct?
>>> Are they always the same nodes which run with 1024Mb? Perhaps you have
>>> mapred-site.xml on those nodes?
>>>
>>> HTH
>>> Ravi
>>>
>>>
>>> On Tuesday, October 22, 2013 9:09 PM, Manu Zhang
>>> <ow...@gmail.com> wrote:
>>> Hi,
>>>
>>> I've been running Terasort on Hadoop-2.0.4.
>>>
>>> Every time, there is a small number of map failures (4 or 5, say)
>>> because the containers run beyond the virtual memory limit.
>>>
>>> I've set mapreduce.map.memory.mb to a safe value (2560MB), so most
>>> TaskAttempts go fine, while the failed maps are assigned the default
>>> 1024MB.
>>>
>>> My question is thus: why are a small number of containers assigned the
>>> default memory size rather than the user-configured value?
>>>
>>> Any thoughts ?
>>>
>>> Thanks,
>>> Manu Zhang
>>>
>>>
>>>
>>
>



-- 
- Tsuyoshi
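
Tsuyoshi's point is that the container size and the task JVM heap are set
independently: mapreduce.map.memory.mb is what the ApplicationMaster requests
from YARN, while mapreduce.map.java.opts is passed to the task JVM. A sketch of
how the two are usually paired, with the heap kept at roughly 75-80% of the
container size so that JVM overhead still fits under the limit (the -Xmx value
here is an illustrative choice, not one taken from the thread):

    <property>
      <name>mapreduce.map.memory.mb</name>
      <value>2560</value>
    </property>
    <property>
      <name>mapreduce.map.java.opts</name>
      <!-- heap well below the 2560 MB container so stacks, code cache and
           native allocations do not push the process over the limit -->
      <value>-Xmx2048m</value>
    </property>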

Re: map container is assigned default memory size rather than user configured which will cause TaskAttempt failure

Posted by Manu Zhang <ow...@gmail.com>.
Just confirmed the problem still exists even when the mapred-site.xml on all
nodes has the same configuration (mapreduce.map.memory.mb = 2560).

Any more thoughts ?

Thanks,
Manu


On Thu, Oct 24, 2013 at 8:59 AM, Manu Zhang <ow...@gmail.com> wrote:

> Thanks Ravi.
>
> I do have mapred-site.xml under /etc/hadoop/conf/ on those nodes, but it
> seems odd to me that the tasks would read configuration from those files,
> since it's the client that requests the resources. I have another
> mapred-site.xml in the directory where I run my job, and I suppose my job
> should read its conf from that one. Please correct me if I am mistaken.
>
> Also, not always the same nodes. The number of failures is random, too.
>
> Anyway, I will have my settings in all the nodes' mapred-site.xml and see
> if the problem goes away.
>
> Manu
>
>
> On Thu, Oct 24, 2013 at 1:40 AM, Ravi Prakash <ra...@ymail.com> wrote:
>
>> Manu!
>>
>> This should not be the case. All tasks should have the configuration
>> values you specified propagated to them. Are you sure your setup is
>> correct? Are they always the same nodes which run with 1024Mb? Perhaps you
>> have mapred-site.xml on those nodes?
>>
>> HTH
>> Ravi
>>
>>
>>   On Tuesday, October 22, 2013 9:09 PM, Manu Zhang <
>> owenzhang1990@gmail.com> wrote:
>>  Hi,
>>
>> I've been running Terasort on Hadoop-2.0.4.
>>
>> Every time, there is a small number of map failures (4 or 5, say)
>> because the containers run beyond the virtual memory limit.
>>
>> I've set mapreduce.map.memory.mb to a safe value (2560MB), so most
>> TaskAttempts go fine, while the failed maps are assigned the default
>> 1024MB.
>>
>> My question is thus: why are a small number of containers assigned the
>> default memory size rather than the user-configured value?
>>
>> Any thoughts ?
>>
>> Thanks,
>> Manu Zhang
>>
>>
>>
>>
>
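
One way to see what the failing containers were actually granted is to read the
NodeManager's kill message for those attempts: the "running beyond virtual
memory limits" line reports the physical and virtual limits it enforced, which
shows whether the container was sized at 2560 MB or at the 1024 MB default. A
sketch, assuming log aggregation is enabled and using a placeholder application
id:

    # pull the aggregated logs after the job finishes and look at the
    # limits quoted in the container-kill message
    yarn logs -applicationId application_1382XXXXXXXXX_0001 \
      | grep -B1 -A3 "beyond virtual memory limits"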

Re: map container is assigned default memory size rather than user configured which will cause TaskAttempt failure

Posted by Manu Zhang <ow...@gmail.com>.
Thanks Ravi.

I do have mapred-site.xml under /etc/hadoop/conf/ on those nodes, but it
seems odd to me that the tasks would read configuration from those files,
since it's the client that requests the resources. I have another
mapred-site.xml in the directory where I run my job, and I suppose my job
should read its conf from that one. Please correct me if I am mistaken.

Also, not always the same nodes. The number of failures is random, too.

Anyway, I will have my settings in all the nodes' mapred-site.xml and see
if the problem goes away.

Manu


On Thu, Oct 24, 2013 at 1:40 AM, Ravi Prakash <ra...@ymail.com> wrote:

> Manu!
>
> This should not be the case. All tasks should have the configuration
> values you specified propagated to them. Are you sure your setup is
> correct? Are they always the same nodes which run with 1024Mb? Perhaps you
> have mapred-site.xml on those nodes?
>
> HTH
> Ravi
>
>
>   On Tuesday, October 22, 2013 9:09 PM, Manu Zhang <
> owenzhang1990@gmail.com> wrote:
>  Hi,
>
> I've been running Terasort on Hadoop-2.0.4.
>
> Every time, there is a small number of map failures (4 or 5, say) because
> the containers run beyond the virtual memory limit.
>
> I've set mapreduce.map.memory.mb to a safe value (2560MB), so most
> TaskAttempts go fine, while the failed maps are assigned the default
> 1024MB.
>
> My question is thus: why are a small number of containers assigned the
> default memory size rather than the user-configured value?
>
> Any thoughts ?
>
> Thanks,
> Manu Zhang
>
>
>
>
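
Manu's reading is the usual behaviour: the submitting client builds the job
configuration (from the mapred-site.xml on its own classpath plus anything set
programmatically) and ships it to the ApplicationMaster as job.xml, so the
client-side value should normally win. One way to take the node-side files out
of the picture entirely is to pass the value explicitly at submission time;
TeraSort accepts generic -D options. A sketch, with the jar path and the
input/output directories as placeholders:

    hadoop jar hadoop-mapreduce-examples.jar terasort \
      -Dmapreduce.map.memory.mb=2560 \
      -Dmapreduce.map.java.opts=-Xmx2048m \
      /terasort-input /terasort-output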

Re: map container is assigned default memory size rather than user configured which will cause TaskAttempt failure

Posted by Ravi Prakash <ra...@ymail.com>.
Manu!

This should not be the case. All tasks should have the configuration values you specified propagated to them. Are you sure your setup is correct? Are they always the same nodes that run with 1024MB? Perhaps you have a mapred-site.xml on those nodes?

HTH
Ravi




On Tuesday, October 22, 2013 9:09 PM, Manu Zhang <ow...@gmail.com> wrote:
 
Hi, 
I've been running Terasort on Hadoop-2.0.4.

Every time, there is a small number of map failures (4 or 5, say) because the containers run beyond the virtual memory limit.

I've set mapreduce.map.memory.mb to a safe value (2560MB), so most TaskAttempts go fine, while the failed maps are assigned the default 1024MB.

My question is thus: why are a small number of containers assigned the default memory size rather than the user-configured value?

Any thoughts ?

Thanks,
Manu Zhang
