Posted to users@apex.apache.org by "Raja.Aravapalli" <Ra...@target.com> on 2016/07/12 13:57:10 UTC

DAG is failing due to memory issues

Hi,

My DAG is failing with memory issues for a container. I am seeing the following information in the log.



Diagnostics: Container [pid=xxx,containerID=container_xyclksdjf] is running beyond physical memory limits. Current usage: 1.0 GB of 1 GB physical memory used; 2.9 GB of 2.1 GB virtual memory used. Killing container.


Can someone help me with how to fix this issue? Thanks a lot.



Regards,
Raja.

Re: DAG is failing due to memory issues

Posted by Munagala Ramanath <ra...@datatorrent.com>.
It looks like the current allocation is the default of 1GB; please increase
it to, say, 4GB and see
if the problem is resolved. The max appears to be 32GB.
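
For illustration, a minimal sketch of one way to request more memory per operator
(the same attribute appears later in this thread; 4096 is only an example value,
not a recommendation):

<!-- sketch only: per-operator container memory, in MB -->
<property>
  <name>dt.operator.*.attr.MEMORY_MB</name>
  <value>4096</value>
</property>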

Also check out the "Advanced Features" section of the Top N Words tutorial
(http://docs.datatorrent.com/tutorials/topnwords-c7/), where memory allocation
is discussed in considerable detail. There is also a brief discussion in the
"Allocating Operator Memory" section of the beginner's guide:
http://docs.datatorrent.com/beginner/

Ram

On Tue, Jul 12, 2016 at 8:36 AM, Raja.Aravapalli <Raja.Aravapalli@target.com
> wrote:

>
> Hi Ram,
>
> I see in the cluster yarn-site.xml, below two properties are configured
> with below settings..
>
> yarn.scheduler.minimum-allocation-mb ===> 1024
> yarn.scheduler.maximum-allocation-mb ===> 32768
>
>
> So with the above settings at cluster level, I can’t increase the memory
> allocated for my DAG ?  Is there is any other way, I can increase the
> memory ?
>
>
> Thanks a lot.
>
>
> Regards,
> Raja.
>
> From: Munagala Ramanath <ra...@datatorrent.com>
> Reply-To: "users@apex.apache.org" <us...@apex.apache.org>
> Date: Tuesday, July 12, 2016 at 9:31 AM
> To: "users@apex.apache.org" <us...@apex.apache.org>
> Subject: Re: DAG is failing due to memory issues
>
> Please see:
> http://docs.datatorrent.com/troubleshooting/#configuring-memory
>
> Ram
>
> On Tue, Jul 12, 2016 at 6:57 AM, Raja.Aravapalli <
> Raja.Aravapalli@target.com> wrote:
>
>>
>> Hi,
>>
>> My DAG is failing with memory issues for container. Seeing below
>> information in the log.
>>
>>
>>
>> Diagnostics: Container [pid=xxx,containerID=container_xyclksdjf] is
>> running beyond physical memory limits. Current usage: 1.0 GB of 1 GB
>> physical memory used; 2.9 GB of 2.1 GB virtual memory used. Killing
>> container.
>>
>>
>> Can someone help me on how I can fix this issue. Thanks a lot.
>>
>>
>>
>> Regards,
>> Raja.
>>
>
>

Re: DAG is failing due to memory issues

Posted by "Raja.Aravapalli" <Ra...@target.com>.
Thanks Sandesh.

I was able to increase the memory requested for my DAG, and it is running fine now.


Regards,
Raja.

From: Sandesh Hegde <sa...@datatorrent.com>
Reply-To: "users@apex.apache.org" <us...@apex.apache.org>
Date: Tuesday, July 12, 2016 at 9:54 PM
To: "users@apex.apache.org" <us...@apex.apache.org>
Subject: Re: DAG is failing due to memory issues

I don't know the relation between the DAG size and the AppMaster memory yet. Maybe others can fill in.
When the situation you mentioned happens, I just raise the memory of the AppMaster by few GBs.

On Tue, Jul 12, 2016 at 2:33 PM Raja.Aravapalli <Ra...@target.com> wrote:

Sure Sandesh Thanks.

Also, one quick question,

When will the size/memory of the Application Master grows ?

Does the memory of AM depends on the no.of operators in the pipeline ?

One issue I observed with my DAG is,

Memory of the application master is growing for my DAG and after reaching max. memory allowed, it is killed/failed… and after trying max allowed attempts entire DAG is failing!!

Wish to know why the size of my AM is growing and control it, so that… Application master doesn’t fail and eventually entire DAG doesn’t fail!


Regards,
Raja.

From: Sandesh Hegde <sa...@datatorrent.com>
Reply-To: "users@apex.apache.org" <us...@apex.apache.org>
Date: Tuesday, July 12, 2016 at 2:43 PM

To: "users@apex.apache.org" <us...@apex.apache.org>
Subject: Re: DAG is failing due to memory issues

UI Memory = Total Memory - AppMaster Memory

DAG size can vary between different setups, that happens because the max size of the container is defined by the yarn parameter mentioned above.

Apex does the following:

if (csr.container.getRequiredMemoryMB() > maxMem) {
  LOG.warn("Container memory {}m above max threshold of cluster. Using max value {}m.", csr.container.getRequiredMemoryMB(), maxMem);
  csr.container.setRequiredMemoryMB(maxMem);
}

On Tue, Jul 12, 2016 at 10:21 AM Raja.Aravapalli <Ra...@target.com> wrote:

Hi,


What memory does the “allocated mem.” refers to on UI for a DAG ? Application Master OR Containers memory of an operators ?


[inline screenshot of the application UI]


I included below properties as well and re-triggered the DAG, still it is showing 32GB only!!


<property>
    <name>dt.application.<APP_NAME>.attr.MASTER_MEMORY_MB</name>
    <value>4096</value>
</property>

<property>
    <name>dt.application.<APP_NAME>.operator.*.attr.MEMORY_MB</name>
    <value>4096</value>
</property>


I have the same DAG running on other hadoop environment, which is showing approx. 125gb, but in other environment only 32gb, which is what I am assuming to be the problem !!


Regards,
Raja.


From: Sandesh Hegde <sa...@datatorrent.com>
Reply-To: "users@apex.apache.org" <us...@apex.apache.org>
Date: Tuesday, July 12, 2016 at 11:35 AM

To: "users@apex.apache.org" <us...@apex.apache.org>
Subject: Re: DAG is failing due to memory issues

Raja,

Please increase the container size and launch the app again.  yarn.scheduler.maximum-allocation-mb is for the container and not for the DAG and the error message showed by you is for the container.

Here is one quick way, use the following attribute.

<property>
  <name>dt.operator.*.attr.MEMORY_MB</name>
  <value>4096</value>
</property>


On Tue, Jul 12, 2016 at 9:24 AM Raja.Aravapalli <Ra...@target.com> wrote:

Hi Ram,

Sorry I did not share that details of 32gb with you.

I am saying 32gb is allocated because, I observed the same on UI, when the application is running. But now, as the DAG is failed, I cannot take a screenshot and send!!


Regards,
Raja.

From: Munagala Ramanath <ra...@datatorrent.com>
Reply-To: "users@apex.apache.org" <us...@apex.apache.org>
Date: Tuesday, July 12, 2016 at 11:06 AM

To: "users@apex.apache.org" <us...@apex.apache.org>
Subject: Re: DAG is failing due to memory issues

How do you know it is allocating 32GB ? The diagnostic message you posted does not show
that.

Ram

On Tue, Jul 12, 2016 at 8:51 AM, Raja.Aravapalli <Ra...@target.com> wrote:

Thanks for the response Sandesh.

Since our yarn-site is configured with value 32768 for the property yarn.scheduler.maximum-allocation-mb, it is allocating a max of 32gb and not more than that!!


Wish to know, is there a way I can increase the max allowed value ? OR, since it is configured in yarn-site.xml, I cannot increase it ?



Regards,
Raja.

From: Sandesh Hegde <sa...@datatorrent.com>
Reply-To: "users@apex.apache.org" <us...@apex.apache.org>
Date: Tuesday, July 12, 2016 at 10:46 AM
To: "users@apex.apache.org" <us...@apex.apache.org>
Subject: Re: DAG is failing due to memory issues

Quoting from the doc shared by the Ram, those parameters control operator memory size.


 actual container memory allocated by RM has to lie between

[yarn.scheduler.minimum-allocation-mb, yarn.scheduler.maximum-allocation-mb]

On Tue, Jul 12, 2016 at 8:38 AM Raja.Aravapalli <Ra...@target.com> wrote:

Hi Ram,

I see in the cluster yarn-site.xml, below two properties are configured with below settings..

yarn.scheduler.minimum-allocation-mb ===> 1024
yarn.scheduler.maximum-allocation-mb ===> 32768


So with the above settings at cluster level, I can’t increase the memory allocated for my DAG ?  Is there is any other way, I can increase the memory ?


Thanks a lot.


Regards,
Raja.

From: Munagala Ramanath <ra...@datatorrent.com>
Reply-To: "users@apex.apache.org" <us...@apex.apache.org>
Date: Tuesday, July 12, 2016 at 9:31 AM
To: "users@apex.apache.org" <us...@apex.apache.org>
Subject: Re: DAG is failing due to memory issues

Please see: http://docs.datatorrent.com/troubleshooting/#configuring-memory

Ram

On Tue, Jul 12, 2016 at 6:57 AM, Raja.Aravapalli <Ra...@target.com> wrote:

Hi,

My DAG is failing with memory issues for container. Seeing below information in the log.



Diagnostics: Container [pid=xxx,containerID=container_xyclksdjf] is running beyond physical memory limits. Current usage: 1.0 GB of 1 GB physical memory used; 2.9 GB of 2.1 GB virtual memory used. Killing container.


Can someone help me on how I can fix this issue. Thanks a lot.



Regards,
Raja.



Re: DAG is failing due to memory issues

Posted by Sandesh Hegde <sa...@datatorrent.com>.
I don't know the relation between the DAG size and the AppMaster memory yet;
maybe others can fill in.
When the situation you mentioned happens, I just raise the memory of the
AppMaster by a few GBs.
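
A minimal sketch of one way to do that via the application configuration (this
attribute also appears elsewhere in this thread; 4096 is only an example value):

<!-- sketch only: memory for the application master container, in MB;
     replace <APP_NAME> with your application name -->
<property>
  <name>dt.application.<APP_NAME>.attr.MASTER_MEMORY_MB</name>
  <value>4096</value>
</property>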

On Tue, Jul 12, 2016 at 2:33 PM Raja.Aravapalli <Ra...@target.com>
wrote:

>
> Sure Sandesh Thanks.
>
> Also, one quick question,
>
> When will the size/memory of the Application Master grows ?
>
> Does the memory of AM depends on the no.of operators in the pipeline ?
>
> One issue I observed with my DAG is,
>
> Memory of the application master is growing for my DAG and after reaching
> max. memory allowed, it is killed/failed… and after trying max allowed
> attempts entire DAG is failing!!
>
> Wish to know why the size of my AM is growing and control it, so that…
> Application master doesn’t fail and eventually entire DAG doesn’t fail!
>
>
> Regards,
> Raja.
>
> From: Sandesh Hegde <sa...@datatorrent.com>
> Reply-To: "users@apex.apache.org" <us...@apex.apache.org>
> Date: Tuesday, July 12, 2016 at 2:43 PM
>
> To: "users@apex.apache.org" <us...@apex.apache.org>
> Subject: Re: DAG is failing due to memory issues
>
> UI Memory = Total Memory - AppMaster Memory
>
> DAG size can vary between different setups, that happens because the max
> size of the container is defined by the yarn parameter mentioned above.
>
> Apex does the following:
>
> if (csr.container.getRequiredMemoryMB() > maxMem) {
>   LOG.warn("Container memory {}m above max threshold of cluster. Using max value {}m.", csr.container.getRequiredMemoryMB(), maxMem);
>   csr.container.setRequiredMemoryMB(maxMem);
> }
>
>
> On Tue, Jul 12, 2016 at 10:21 AM Raja.Aravapalli <
> Raja.Aravapalli@target.com> wrote:
>
>>
>> Hi,
>>
>>
>> What memory does the “allocated mem.” refers to on UI for a DAG ?
>> Application Master OR Containers memory of an operators ?
>>
>>
>>
>>
>> I included below properties as well and re-triggered the DAG, still it is
>> showing 32GB only!!
>>
>> <property>
>>     <name>dt.application.<APP_NAME>.attr.MASTER_MEMORY_MB</name>
>>     <value>4096</value>
>> </property>
>>
>> <property>
>>     <name>dt.application.<APP_NAME>.operator.*.attr.MEMORY_MB</name>
>>     <value>4096</value>
>> </property>
>>
>>
>>
>> I have the same DAG running on other hadoop environment, which is showing
>> approx. 125gb, but in other environment only 32gb, which is what I am
>> assuming to be the problem !!
>>
>>
>> Regards,
>> Raja.
>>
>>
>> From: Sandesh Hegde <sa...@datatorrent.com>
>> Reply-To: "users@apex.apache.org" <us...@apex.apache.org>
>> Date: Tuesday, July 12, 2016 at 11:35 AM
>>
>> To: "users@apex.apache.org" <us...@apex.apache.org>
>> Subject: Re: DAG is failing due to memory issues
>>
>> Raja,
>>
>> Please increase the container size and launch the app again.  yarn
>> .scheduler.maximum-allocation-mb is for the container and not for the
>> DAG and the error message showed by you is for the container.
>>
>> Here is one quick way, use the following attribute.
>>
>> <property>
>>   <name>dt.operator.*.attr.MEMORY_MB</name>
>>   <value>4096</value>
>> </property>
>>
>>
>>
>> On Tue, Jul 12, 2016 at 9:24 AM Raja.Aravapalli <
>> Raja.Aravapalli@target.com> wrote:
>>
>>>
>>> Hi Ram,
>>>
>>> Sorry I did not share that details of 32gb with you.
>>>
>>> I am saying 32gb is allocated because, I observed the same on UI, when
>>> the application is running. But now, as the DAG is failed, I cannot take a
>>> screenshot and send!!
>>>
>>>
>>> Regards,
>>> Raja.
>>>
>>> From: Munagala Ramanath <ra...@datatorrent.com>
>>> Reply-To: "users@apex.apache.org" <us...@apex.apache.org>
>>> Date: Tuesday, July 12, 2016 at 11:06 AM
>>>
>>> To: "users@apex.apache.org" <us...@apex.apache.org>
>>> Subject: Re: DAG is failing due to memory issues
>>>
>>> How do you know it is allocating 32GB ? The diagnostic message you
>>> posted does not show
>>> that.
>>>
>>> Ram
>>>
>>> On Tue, Jul 12, 2016 at 8:51 AM, Raja.Aravapalli <
>>> Raja.Aravapalli@target.com> wrote:
>>>
>>>>
>>>> Thanks for the response Sandesh.
>>>>
>>>> Since our yarn-site is configured with value *32768* for the property *
>>>> yarn.scheduler.maximum-allocation-mb*, it is allocating a max of *32gb*
>>>> and not more than that!!
>>>>
>>>>
>>>> Wish to know, is there a way I can increase the max allowed value ? OR,
>>>> since it is configured in yarn-site.xml, I *cannot* increase it ?
>>>>
>>>>
>>>>
>>>> Regards,
>>>> Raja.
>>>>
>>>> From: Sandesh Hegde <sa...@datatorrent.com>
>>>> Reply-To: "users@apex.apache.org" <us...@apex.apache.org>
>>>> Date: Tuesday, July 12, 2016 at 10:46 AM
>>>> To: "users@apex.apache.org" <us...@apex.apache.org>
>>>> Subject: Re: DAG is failing due to memory issues
>>>>
>>>> Quoting from the doc shared by the Ram, those parameters control
>>>> operator memory size.
>>>>
>>>>  actual container memory allocated by RM has to lie between
>>>>
>>>> [yarn.scheduler.minimum-allocation-mb, yarn.scheduler.maximum-allocation-mb]
>>>>
>>>>
>>>> On Tue, Jul 12, 2016 at 8:38 AM Raja.Aravapalli <
>>>> Raja.Aravapalli@target.com> wrote:
>>>>
>>>>>
>>>>> Hi Ram,
>>>>>
>>>>> I see in the cluster yarn-site.xml, below two properties are
>>>>> configured with below settings..
>>>>>
>>>>> yarn.scheduler.minimum-allocation-mb ===> 1024
>>>>> yarn.scheduler.maximum-allocation-mb ===> 32768
>>>>>
>>>>>
>>>>> So with the above settings at cluster level, I can’t increase the
>>>>> memory allocated for my DAG ?  Is there is any other way, I can increase
>>>>> the memory ?
>>>>>
>>>>>
>>>>> Thanks a lot.
>>>>>
>>>>>
>>>>> Regards,
>>>>> Raja.
>>>>>
>>>>> From: Munagala Ramanath <ra...@datatorrent.com>
>>>>> Reply-To: "users@apex.apache.org" <us...@apex.apache.org>
>>>>> Date: Tuesday, July 12, 2016 at 9:31 AM
>>>>> To: "users@apex.apache.org" <us...@apex.apache.org>
>>>>> Subject: Re: DAG is failing due to memory issues
>>>>>
>>>>> Please see:
>>>>> http://docs.datatorrent.com/troubleshooting/#configuring-memory
>>>>>
>>>>> Ram
>>>>>
>>>>> On Tue, Jul 12, 2016 at 6:57 AM, Raja.Aravapalli <
>>>>> Raja.Aravapalli@target.com> wrote:
>>>>>
>>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> My DAG is failing with memory issues for container. Seeing below
>>>>>> information in the log.
>>>>>>
>>>>>>
>>>>>>
>>>>>> Diagnostics: Container [pid=xxx,containerID=container_xyclksdjf] is
>>>>>> running beyond physical memory limits. Current usage: 1.0 GB of 1 GB
>>>>>> physical memory used; 2.9 GB of 2.1 GB virtual memory used. Killing
>>>>>> container.
>>>>>>
>>>>>>
>>>>>> Can someone help me on how I can fix this issue. Thanks a lot.
>>>>>>
>>>>>>
>>>>>>
>>>>>> Regards,
>>>>>> Raja.
>>>>>>
>>>>>
>>>>>
>>>

Re: DAG is failing due to memory issues

Posted by "Raja.Aravapalli" <Ra...@target.com>.
Sure, Sandesh. Thanks.

Also, one quick question:

When does the size/memory of the Application Master grow?

Does the memory of the AM depend on the number of operators in the pipeline?

One issue I observed with my DAG is:

The memory of the Application Master keeps growing for my DAG, and after reaching the maximum memory allowed it is killed/failed… and after the maximum allowed attempts the entire DAG fails!!

I wish to know why the size of my AM is growing and how to control it, so that the Application Master doesn't fail and eventually the entire DAG doesn't fail!


Regards,
Raja.

From: Sandesh Hegde <sa...@datatorrent.com>
Reply-To: "users@apex.apache.org" <us...@apex.apache.org>
Date: Tuesday, July 12, 2016 at 2:43 PM
To: "users@apex.apache.org" <us...@apex.apache.org>
Subject: Re: DAG is failing due to memory issues

UI Memory = Total Memory - AppMaster Memory

DAG size can vary between different setups, that happens because the max size of the container is defined by the yarn parameter mentioned above.

Apex does the following:

if (csr.container.getRequiredMemoryMB() > maxMem) {
  LOG.warn("Container memory {}m above max threshold of cluster. Using max value {}m.", csr.container.getRequiredMemoryMB(), maxMem);
  csr.container.setRequiredMemoryMB(maxMem);
}

On Tue, Jul 12, 2016 at 10:21 AM Raja.Aravapalli <Ra...@target.com> wrote:

Hi,


What memory does the “allocated mem.” refers to on UI for a DAG ? Application Master OR Containers memory of an operators ?


[inline screenshot of the application UI]


I included below properties as well and re-triggered the DAG, still it is showing 32GB only!!


<property>
    <name>dt.application.<APP_NAME>.attr.MASTER_MEMORY_MB</name>
    <value>4096</value>
</property>

<property>
    <name>dt.application.<APP_NAME>.operator.*.attr.MEMORY_MB</name>
    <value>4096</value>
</property>


I have the same DAG running on other hadoop environment, which is showing approx. 125gb, but in other environment only 32gb, which is what I am assuming to be the problem !!


Regards,
Raja.


From: Sandesh Hegde <sa...@datatorrent.com>
Reply-To: "users@apex.apache.org" <us...@apex.apache.org>
Date: Tuesday, July 12, 2016 at 11:35 AM

To: "users@apex.apache.org" <us...@apex.apache.org>
Subject: Re: DAG is failing due to memory issues

Raja,

Please increase the container size and launch the app again.  yarn.scheduler.maximum-allocation-mb is for the container and not for the DAG and the error message showed by you is for the container.

Here is one quick way, use the following attribute.

<property>
  <name>dt.operator.*.attr.MEMORY_MB</name>
  <value>4096</value>
</property>


On Tue, Jul 12, 2016 at 9:24 AM Raja.Aravapalli <Ra...@target.com> wrote:

Hi Ram,

Sorry I did not share that details of 32gb with you.

I am saying 32gb is allocated because, I observed the same on UI, when the application is running. But now, as the DAG is failed, I cannot take a screenshot and send!!


Regards,
Raja.

From: Munagala Ramanath <ra...@datatorrent.com>
Reply-To: "users@apex.apache.org" <us...@apex.apache.org>
Date: Tuesday, July 12, 2016 at 11:06 AM

To: "users@apex.apache.org" <us...@apex.apache.org>
Subject: Re: DAG is failing due to memory issues

How do you know it is allocating 32GB ? The diagnostic message you posted does not show
that.

Ram

On Tue, Jul 12, 2016 at 8:51 AM, Raja.Aravapalli <Ra...@target.com> wrote:

Thanks for the response Sandesh.

Since our yarn-site is configured with value 32768 for the property yarn.scheduler.maximum-allocation-mb, it is allocating a max of 32gb and not more than that!!


Wish to know, is there a way I can increase the max allowed value ? OR, since it is configured in yarn-site.xml, I cannot increase it ?



Regards,
Raja.

From: Sandesh Hegde <sa...@datatorrent.com>
Reply-To: "users@apex.apache.org" <us...@apex.apache.org>
Date: Tuesday, July 12, 2016 at 10:46 AM
To: "users@apex.apache.org" <us...@apex.apache.org>
Subject: Re: DAG is failing due to memory issues

Quoting from the doc shared by the Ram, those parameters control operator memory size.


 actual container memory allocated by RM has to lie between

[yarn.scheduler.minimum-allocation-mb, yarn.scheduler.maximum-allocation-mb]

On Tue, Jul 12, 2016 at 8:38 AM Raja.Aravapalli <Ra...@target.com> wrote:

Hi Ram,

I see in the cluster yarn-site.xml, below two properties are configured with below settings..

yarn.scheduler.minimum-allocation-mb ===> 1024
yarn.scheduler.maximum-allocation-mb ===> 32768


So with the above settings at cluster level, I can’t increase the memory allocated for my DAG ?  Is there is any other way, I can increase the memory ?


Thanks a lot.


Regards,
Raja.

From: Munagala Ramanath <ra...@datatorrent.com>
Reply-To: "users@apex.apache.org" <us...@apex.apache.org>
Date: Tuesday, July 12, 2016 at 9:31 AM
To: "users@apex.apache.org" <us...@apex.apache.org>
Subject: Re: DAG is failing due to memory issues

Please see: http://docs.datatorrent.com/troubleshooting/#configuring-memory

Ram

On Tue, Jul 12, 2016 at 6:57 AM, Raja.Aravapalli <Ra...@target.com> wrote:

Hi,

My DAG is failing with memory issues for container. Seeing below information in the log.



Diagnostics: Container [pid=xxx,containerID=container_xyclksdjf] is running beyond physical memory limits. Current usage: 1.0 GB of 1 GB physical memory used; 2.9 GB of 2.1 GB virtual memory used. Killing container.


Can someone help me on how I can fix this issue. Thanks a lot.



Regards,
Raja.



Re: DAG is failing due to memory issues

Posted by Sandesh Hegde <sa...@datatorrent.com>.
UI Memory = Total Memory - AppMaster Memory

DAG size can vary between different setups; that happens because the max
size of a container is defined by the YARN parameter mentioned above.

Apex does the following:

if (csr.container.getRequiredMemoryMB() > maxMem) {
  LOG.warn("Container memory {}m above max threshold of cluster. Using
max value {}m.", csr.container.getRequiredMemoryMB(), maxMem);
  csr.container.setRequiredMemoryMB(maxMem);
}


On Tue, Jul 12, 2016 at 10:21 AM Raja.Aravapalli <Ra...@target.com>
wrote:

>
> Hi,
>
>
> What memory does the “allocated mem.” refers to on UI for a DAG ?
> Application Master OR Containers memory of an operators ?
>
>
>
>
> I included below properties as well and re-triggered the DAG, still it is
> showing 32GB only!!
>
> <property>
>     <name>dt.application.<APP_NAME>.attr.MASTER_MEMORY_MB</name>
>     <value>4096</value>
> </property>
>
> <property>
>     <name>dt.application.<APP_NAME>.operator.*.attr.MEMORY_MB</name>
>     <value>4096</value>
> </property>
>
>
>
> I have the same DAG running on other hadoop environment, which is showing
> approx. 125gb, but in other environment only 32gb, which is what I am
> assuming to be the problem !!
>
>
> Regards,
> Raja.
>
>
> From: Sandesh Hegde <sa...@datatorrent.com>
> Reply-To: "users@apex.apache.org" <us...@apex.apache.org>
> Date: Tuesday, July 12, 2016 at 11:35 AM
>
> To: "users@apex.apache.org" <us...@apex.apache.org>
> Subject: Re: DAG is failing due to memory issues
>
> Raja,
>
> Please increase the container size and launch the app again.  yarn
> .scheduler.maximum-allocation-mb is for the container and not for the DAG
> and the error message showed by you is for the container.
>
> Here is one quick way, use the following attribute.
>
> <property>
>   <name>dt.operator.*.attr.MEMORY_MB</name>
>   <value>4096</value>
> </property>
>
>
>
> On Tue, Jul 12, 2016 at 9:24 AM Raja.Aravapalli <
> Raja.Aravapalli@target.com> wrote:
>
>>
>> Hi Ram,
>>
>> Sorry I did not share that details of 32gb with you.
>>
>> I am saying 32gb is allocated because, I observed the same on UI, when
>> the application is running. But now, as the DAG is failed, I cannot take a
>> screenshot and send!!
>>
>>
>> Regards,
>> Raja.
>>
>> From: Munagala Ramanath <ra...@datatorrent.com>
>> Reply-To: "users@apex.apache.org" <us...@apex.apache.org>
>> Date: Tuesday, July 12, 2016 at 11:06 AM
>>
>> To: "users@apex.apache.org" <us...@apex.apache.org>
>> Subject: Re: DAG is failing due to memory issues
>>
>> How do you know it is allocating 32GB ? The diagnostic message you posted
>> does not show
>> that.
>>
>> Ram
>>
>> On Tue, Jul 12, 2016 at 8:51 AM, Raja.Aravapalli <
>> Raja.Aravapalli@target.com> wrote:
>>
>>>
>>> Thanks for the response Sandesh.
>>>
>>> Since our yarn-site is configured with value *32768* for the property *
>>> yarn.scheduler.maximum-allocation-mb*, it is allocating a max of *32gb*
>>> and not more than that!!
>>>
>>>
>>> Wish to know, is there a way I can increase the max allowed value ? OR,
>>> since it is configured in yarn-site.xml, I *cannot* increase it ?
>>>
>>>
>>>
>>> Regards,
>>> Raja.
>>>
>>> From: Sandesh Hegde <sa...@datatorrent.com>
>>> Reply-To: "users@apex.apache.org" <us...@apex.apache.org>
>>> Date: Tuesday, July 12, 2016 at 10:46 AM
>>> To: "users@apex.apache.org" <us...@apex.apache.org>
>>> Subject: Re: DAG is failing due to memory issues
>>>
>>> Quoting from the doc shared by the Ram, those parameters control
>>> operator memory size.
>>>
>>>  actual container memory allocated by RM has to lie between
>>>
>>> [yarn.scheduler.minimum-allocation-mb, yarn.scheduler.maximum-allocation-mb]
>>>
>>>
>>> On Tue, Jul 12, 2016 at 8:38 AM Raja.Aravapalli <
>>> Raja.Aravapalli@target.com> wrote:
>>>
>>>>
>>>> Hi Ram,
>>>>
>>>> I see in the cluster yarn-site.xml, below two properties are configured
>>>> with below settings..
>>>>
>>>> yarn.scheduler.minimum-allocation-mb ===> 1024
>>>> yarn.scheduler.maximum-allocation-mb ===> 32768
>>>>
>>>>
>>>> So with the above settings at cluster level, I can’t increase the
>>>> memory allocated for my DAG ?  Is there is any other way, I can increase
>>>> the memory ?
>>>>
>>>>
>>>> Thanks a lot.
>>>>
>>>>
>>>> Regards,
>>>> Raja.
>>>>
>>>> From: Munagala Ramanath <ra...@datatorrent.com>
>>>> Reply-To: "users@apex.apache.org" <us...@apex.apache.org>
>>>> Date: Tuesday, July 12, 2016 at 9:31 AM
>>>> To: "users@apex.apache.org" <us...@apex.apache.org>
>>>> Subject: Re: DAG is failing due to memory issues
>>>>
>>>> Please see:
>>>> http://docs.datatorrent.com/troubleshooting/#configuring-memory
>>>>
>>>> Ram
>>>>
>>>> On Tue, Jul 12, 2016 at 6:57 AM, Raja.Aravapalli <
>>>> Raja.Aravapalli@target.com> wrote:
>>>>
>>>>>
>>>>> Hi,
>>>>>
>>>>> My DAG is failing with memory issues for container. Seeing below
>>>>> information in the log.
>>>>>
>>>>>
>>>>>
>>>>> Diagnostics: Container [pid=xxx,containerID=container_xyclksdjf] is
>>>>> running beyond physical memory limits. Current usage: 1.0 GB of 1 GB
>>>>> physical memory used; 2.9 GB of 2.1 GB virtual memory used. Killing
>>>>> container.
>>>>>
>>>>>
>>>>> Can someone help me on how I can fix this issue. Thanks a lot.
>>>>>
>>>>>
>>>>>
>>>>> Regards,
>>>>> Raja.
>>>>>
>>>>
>>>>
>>

Re: DAG is failing due to memory issues

Posted by "Raja.Aravapalli" <Ra...@target.com>.
Hi,


What memory does the “allocated mem.” shown on the UI for a DAG refer to? The Application Master's memory, or the memory of the operators' containers?


[inline screenshot of the application UI]


I included the properties below as well and re-triggered the DAG; it is still showing only 32GB!!


<property>
    <name>dt.application.<APP_NAME>.attr.MASTER_MEMORY_MB</name>
    <value>4096</value>
</property>

<property>
    <name>dt.application.<APP_NAME>.operator.*.attr.MEMORY_MB</name>
    <value>4096</value>
</property>


I have the same DAG running on another Hadoop environment, where it shows approx. 125GB, but in this environment it shows only 32GB, which is what I am assuming to be the problem!!


Regards,
Raja.


From: Sandesh Hegde <sa...@datatorrent.com>
Reply-To: "users@apex.apache.org" <us...@apex.apache.org>
Date: Tuesday, July 12, 2016 at 11:35 AM
To: "users@apex.apache.org" <us...@apex.apache.org>
Subject: Re: DAG is failing due to memory issues

Raja,

Please increase the container size and launch the app again.  yarn.scheduler.maximum-allocation-mb is for the container and not for the DAG and the error message showed by you is for the container.

Here is one quick way, use the following attribute.

<property>
  <name>dt.operator.*.attr.MEMORY_MB</name>
  <value>4096</value>
</property>


On Tue, Jul 12, 2016 at 9:24 AM Raja.Aravapalli <Ra...@target.com> wrote:

Hi Ram,

Sorry I did not share that details of 32gb with you.

I am saying 32gb is allocated because, I observed the same on UI, when the application is running. But now, as the DAG is failed, I cannot take a screenshot and send!!


Regards,
Raja.

From: Munagala Ramanath <ra...@datatorrent.com>
Reply-To: "users@apex.apache.org" <us...@apex.apache.org>
Date: Tuesday, July 12, 2016 at 11:06 AM

To: "users@apex.apache.org" <us...@apex.apache.org>
Subject: Re: DAG is failing due to memory issues

How do you know it is allocating 32GB ? The diagnostic message you posted does not show
that.

Ram

On Tue, Jul 12, 2016 at 8:51 AM, Raja.Aravapalli <Ra...@target.com> wrote:

Thanks for the response Sandesh.

Since our yarn-site is configured with value 32768 for the property yarn.scheduler.maximum-allocation-mb, it is allocating a max of 32gb and not more than that!!


Wish to know, is there a way I can increase the max allowed value ? OR, since it is configured in yarn-site.xml, I cannot increase it ?



Regards,
Raja.

From: Sandesh Hegde <sa...@datatorrent.com>
Reply-To: "users@apex.apache.org" <us...@apex.apache.org>
Date: Tuesday, July 12, 2016 at 10:46 AM
To: "users@apex.apache.org" <us...@apex.apache.org>
Subject: Re: DAG is failing due to memory issues

Quoting from the doc shared by the Ram, those parameters control operator memory size.


 actual container memory allocated by RM has to lie between

[yarn.scheduler.minimum-allocation-mb, yarn.scheduler.maximum-allocation-mb]

On Tue, Jul 12, 2016 at 8:38 AM Raja.Aravapalli <Ra...@target.com> wrote:

Hi Ram,

I see in the cluster yarn-site.xml, below two properties are configured with below settings..

yarn.scheduler.minimum-allocation-mb ===> 1024
yarn.scheduler.maximum-allocation-mb ===> 32768


So with the above settings at cluster level, I can’t increase the memory allocated for my DAG ?  Is there is any other way, I can increase the memory ?


Thanks a lot.


Regards,
Raja.

From: Munagala Ramanath <ra...@datatorrent.com>
Reply-To: "users@apex.apache.org" <us...@apex.apache.org>
Date: Tuesday, July 12, 2016 at 9:31 AM
To: "users@apex.apache.org" <us...@apex.apache.org>
Subject: Re: DAG is failing due to memory issues

Please see: http://docs.datatorrent.com/troubleshooting/#configuring-memory

Ram

On Tue, Jul 12, 2016 at 6:57 AM, Raja.Aravapalli <Ra...@target.com> wrote:

Hi,

My DAG is failing with memory issues for container. Seeing below information in the log.



Diagnostics: Container [pid=xxx,containerID=container_xyclksdjf] is running beyond physical memory limits. Current usage: 1.0 GB of 1 GB physical memory used; 2.9 GB of 2.1 GB virtual memory used. Killing container.


Can someone help me on how I can fix this issue. Thanks a lot.



Regards,
Raja.



Re: DAG is failing due to memory issues

Posted by Sandesh Hegde <sa...@datatorrent.com>.
Raja,

Please increase the container size and launch the app again.
yarn.scheduler.maximum-allocation-mb is for the container, not for the DAG,
and the error message you showed is for a container.

Here is one quick way: use the following attribute.

<property>
  <name>dt.operator.*.attr.MEMORY_MB</name>
  <value>4096</value>
</property>



On Tue, Jul 12, 2016 at 9:24 AM Raja.Aravapalli <Ra...@target.com>
wrote:

>
> Hi Ram,
>
> Sorry I did not share that details of 32gb with you.
>
> I am saying 32gb is allocated because, I observed the same on UI, when the
> application is running. But now, as the DAG is failed, I cannot take a
> screenshot and send!!
>
>
> Regards,
> Raja.
>
> From: Munagala Ramanath <ra...@datatorrent.com>
> Reply-To: "users@apex.apache.org" <us...@apex.apache.org>
> Date: Tuesday, July 12, 2016 at 11:06 AM
>
> To: "users@apex.apache.org" <us...@apex.apache.org>
> Subject: Re: DAG is failing due to memory issues
>
> How do you know it is allocating 32GB ? The diagnostic message you posted
> does not show
> that.
>
> Ram
>
> On Tue, Jul 12, 2016 at 8:51 AM, Raja.Aravapalli <
> Raja.Aravapalli@target.com> wrote:
>
>>
>> Thanks for the response Sandesh.
>>
>> Since our yarn-site is configured with value *32768* for the property *
>> yarn.scheduler.maximum-allocation-mb*, it is allocating a max of *32gb*
>> and not more than that!!
>>
>>
>> Wish to know, is there a way I can increase the max allowed value ? OR,
>> since it is configured in yarn-site.xml, I *cannot* increase it ?
>>
>>
>>
>> Regards,
>> Raja.
>>
>> From: Sandesh Hegde <sa...@datatorrent.com>
>> Reply-To: "users@apex.apache.org" <us...@apex.apache.org>
>> Date: Tuesday, July 12, 2016 at 10:46 AM
>> To: "users@apex.apache.org" <us...@apex.apache.org>
>> Subject: Re: DAG is failing due to memory issues
>>
>> Quoting from the doc shared by the Ram, those parameters control operator
>> memory size.
>>
>>  actual container memory allocated by RM has to lie between
>>
>> [yarn.scheduler.minimum-allocation-mb, yarn.scheduler.maximum-allocation-mb]
>>
>>
>> On Tue, Jul 12, 2016 at 8:38 AM Raja.Aravapalli <
>> Raja.Aravapalli@target.com> wrote:
>>
>>>
>>> Hi Ram,
>>>
>>> I see in the cluster yarn-site.xml, below two properties are configured
>>> with below settings..
>>>
>>> yarn.scheduler.minimum-allocation-mb ===> 1024
>>> yarn.scheduler.maximum-allocation-mb ===> 32768
>>>
>>>
>>> So with the above settings at cluster level, I can’t increase the memory
>>> allocated for my DAG ?  Is there is any other way, I can increase the
>>> memory ?
>>>
>>>
>>> Thanks a lot.
>>>
>>>
>>> Regards,
>>> Raja.
>>>
>>> From: Munagala Ramanath <ra...@datatorrent.com>
>>> Reply-To: "users@apex.apache.org" <us...@apex.apache.org>
>>> Date: Tuesday, July 12, 2016 at 9:31 AM
>>> To: "users@apex.apache.org" <us...@apex.apache.org>
>>> Subject: Re: DAG is failing due to memory issues
>>>
>>> Please see:
>>> http://docs.datatorrent.com/troubleshooting/#configuring-memory
>>>
>>> Ram
>>>
>>> On Tue, Jul 12, 2016 at 6:57 AM, Raja.Aravapalli <
>>> Raja.Aravapalli@target.com> wrote:
>>>
>>>>
>>>> Hi,
>>>>
>>>> My DAG is failing with memory issues for container. Seeing below
>>>> information in the log.
>>>>
>>>>
>>>>
>>>> Diagnostics: Container [pid=xxx,containerID=container_xyclksdjf] is
>>>> running beyond physical memory limits. Current usage: 1.0 GB of 1 GB
>>>> physical memory used; 2.9 GB of 2.1 GB virtual memory used. Killing
>>>> container.
>>>>
>>>>
>>>> Can someone help me on how I can fix this issue. Thanks a lot.
>>>>
>>>>
>>>>
>>>> Regards,
>>>> Raja.
>>>>
>>>
>>>
>

Re: DAG is failing due to memory issues

Posted by "Raja.Aravapalli" <Ra...@target.com>.
Hi Ram,

Sorry, I did not share the details of the 32GB with you.

I am saying 32GB is allocated because I observed it on the UI when the application was running. But now, as the DAG has failed, I cannot take a screenshot and send it!!


Regards,
Raja.

From: Munagala Ramanath <ra...@datatorrent.com>
Reply-To: "users@apex.apache.org" <us...@apex.apache.org>
Date: Tuesday, July 12, 2016 at 11:06 AM
To: "users@apex.apache.org" <us...@apex.apache.org>
Subject: Re: DAG is failing due to memory issues

How do you know it is allocating 32GB ? The diagnostic message you posted does not show
that.

Ram

On Tue, Jul 12, 2016 at 8:51 AM, Raja.Aravapalli <Ra...@target.com> wrote:

Thanks for the response Sandesh.

Since our yarn-site is configured with value 32768 for the property yarn.scheduler.maximum-allocation-mb, it is allocating a max of 32gb and not more than that!!


Wish to know, is there a way I can increase the max allowed value ? OR, since it is configured in yarn-site.xml, I cannot increase it ?



Regards,
Raja.

From: Sandesh Hegde <sa...@datatorrent.com>
Reply-To: "users@apex.apache.org" <us...@apex.apache.org>
Date: Tuesday, July 12, 2016 at 10:46 AM
To: "users@apex.apache.org" <us...@apex.apache.org>
Subject: Re: DAG is failing due to memory issues

Quoting from the doc shared by the Ram, those parameters control operator memory size.


 actual container memory allocated by RM has to lie between

[yarn.scheduler.minimum-allocation-mb, yarn.scheduler.maximum-allocation-mb]

On Tue, Jul 12, 2016 at 8:38 AM Raja.Aravapalli <Ra...@target.com> wrote:

Hi Ram,

I see in the cluster yarn-site.xml, below two properties are configured with below settings..

yarn.scheduler.minimum-allocation-mb ===> 1024
yarn.scheduler.maximum-allocation-mb ===> 32768


So with the above settings at cluster level, I can’t increase the memory allocated for my DAG ?  Is there is any other way, I can increase the memory ?


Thanks a lot.


Regards,
Raja.

From: Munagala Ramanath <ra...@datatorrent.com>
Reply-To: "users@apex.apache.org" <us...@apex.apache.org>
Date: Tuesday, July 12, 2016 at 9:31 AM
To: "users@apex.apache.org" <us...@apex.apache.org>
Subject: Re: DAG is failing due to memory issues

Please see: http://docs.datatorrent.com/troubleshooting/#configuring-memory

Ram

On Tue, Jul 12, 2016 at 6:57 AM, Raja.Aravapalli <Ra...@target.com> wrote:

Hi,

My DAG is failing with memory issues for container. Seeing below information in the log.



Diagnostics: Container [pid=xxx,containerID=container_xyclksdjf] is running beyond physical memory limits. Current usage: 1.0 GB of 1 GB physical memory used; 2.9 GB of 2.1 GB virtual memory used. Killing container.


Can someone help me on how I can fix this issue. Thanks a lot.



Regards,
Raja.



Re: DAG is failing due to memory issues

Posted by Munagala Ramanath <ra...@datatorrent.com>.
How do you know it is allocating 32GB ? The diagnostic message you posted
does not show
that.

Ram

On Tue, Jul 12, 2016 at 8:51 AM, Raja.Aravapalli <Raja.Aravapalli@target.com
> wrote:

>
> Thanks for the response Sandesh.
>
> Since our yarn-site is configured with value *32768* for the property *
> yarn.scheduler.maximum-allocation-mb*, it is allocating a max of *32gb*
> and not more than that!!
>
>
> Wish to know, is there a way I can increase the max allowed value ? OR,
> since it is configured in yarn-site.xml, I *cannot* increase it ?
>
>
>
> Regards,
> Raja.
>
> From: Sandesh Hegde <sa...@datatorrent.com>
> Reply-To: "users@apex.apache.org" <us...@apex.apache.org>
> Date: Tuesday, July 12, 2016 at 10:46 AM
> To: "users@apex.apache.org" <us...@apex.apache.org>
> Subject: Re: DAG is failing due to memory issues
>
> Quoting from the doc shared by the Ram, those parameters control operator
> memory size.
>
>  actual container memory allocated by RM has to lie between
>
> [yarn.scheduler.minimum-allocation-mb, yarn.scheduler.maximum-allocation-mb]
>
>
> On Tue, Jul 12, 2016 at 8:38 AM Raja.Aravapalli <
> Raja.Aravapalli@target.com> wrote:
>
>>
>> Hi Ram,
>>
>> I see in the cluster yarn-site.xml, below two properties are configured
>> with below settings..
>>
>> yarn.scheduler.minimum-allocation-mb ===> 1024
>> yarn.scheduler.maximum-allocation-mb ===> 32768
>>
>>
>> So with the above settings at cluster level, I can’t increase the memory
>> allocated for my DAG ?  Is there is any other way, I can increase the
>> memory ?
>>
>>
>> Thanks a lot.
>>
>>
>> Regards,
>> Raja.
>>
>> From: Munagala Ramanath <ra...@datatorrent.com>
>> Reply-To: "users@apex.apache.org" <us...@apex.apache.org>
>> Date: Tuesday, July 12, 2016 at 9:31 AM
>> To: "users@apex.apache.org" <us...@apex.apache.org>
>> Subject: Re: DAG is failing due to memory issues
>>
>> Please see:
>> http://docs.datatorrent.com/troubleshooting/#configuring-memory
>>
>> Ram
>>
>> On Tue, Jul 12, 2016 at 6:57 AM, Raja.Aravapalli <
>> Raja.Aravapalli@target.com> wrote:
>>
>>>
>>> Hi,
>>>
>>> My DAG is failing with memory issues for container. Seeing below
>>> information in the log.
>>>
>>>
>>>
>>> Diagnostics: Container [pid=xxx,containerID=container_xyclksdjf] is
>>> running beyond physical memory limits. Current usage: 1.0 GB of 1 GB
>>> physical memory used; 2.9 GB of 2.1 GB virtual memory used. Killing
>>> container.
>>>
>>>
>>> Can someone help me on how I can fix this issue. Thanks a lot.
>>>
>>>
>>>
>>> Regards,
>>> Raja.
>>>
>>
>>

Re: DAG is failing due to memory issues

Posted by Devendra Tagare <de...@datatorrent.com>.
You can increase yarn.scheduler.maximum-allocation-mb, but it will require a
ResourceManager restart.
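
For reference, a minimal yarn-site.xml sketch of that change (65536 is only an
example value; choose a limit that fits the NodeManager capacity on your cluster):

<!-- sketch only: raise the per-container allocation ceiling, in MB -->
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>65536</value>
</property>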

Thanks,
Dev

On Jul 12, 2016 9:01 AM, "Raja.Aravapalli" <Ra...@target.com>
wrote:

>
> Thanks for the response Sandesh.
>
> Since our yarn-site is configured with value *32768* for the property *
> yarn.scheduler.maximum-allocation-mb*, it is allocating a max of *32gb*
> and not more than that!!
>
>
> Wish to know, is there a way I can increase the max allowed value ? OR,
> since it is configured in yarn-site.xml, I *cannot* increase it ?
>
>
>
> Regards,
> Raja.
>
> From: Sandesh Hegde <sa...@datatorrent.com>
> Reply-To: "users@apex.apache.org" <us...@apex.apache.org>
> Date: Tuesday, July 12, 2016 at 10:46 AM
> To: "users@apex.apache.org" <us...@apex.apache.org>
> Subject: Re: DAG is failing due to memory issues
>
> Quoting from the doc shared by the Ram, those parameters control operator
> memory size.
>
>  actual container memory allocated by RM has to lie between
>
> [yarn.scheduler.minimum-allocation-mb, yarn.scheduler.maximum-allocation-mb]
>
>
> On Tue, Jul 12, 2016 at 8:38 AM Raja.Aravapalli <
> Raja.Aravapalli@target.com> wrote:
>
>>
>> Hi Ram,
>>
>> I see in the cluster yarn-site.xml, below two properties are configured
>> with below settings..
>>
>> yarn.scheduler.minimum-allocation-mb ===> 1024
>> yarn.scheduler.maximum-allocation-mb ===> 32768
>>
>>
>> So with the above settings at cluster level, I can’t increase the memory
>> allocated for my DAG ?  Is there is any other way, I can increase the
>> memory ?
>>
>>
>> Thanks a lot.
>>
>>
>> Regards,
>> Raja.
>>
>> From: Munagala Ramanath <ra...@datatorrent.com>
>> Reply-To: "users@apex.apache.org" <us...@apex.apache.org>
>> Date: Tuesday, July 12, 2016 at 9:31 AM
>> To: "users@apex.apache.org" <us...@apex.apache.org>
>> Subject: Re: DAG is failing due to memory issues
>>
>> Please see:
>> http://docs.datatorrent.com/troubleshooting/#configuring-memory
>>
>> Ram
>>
>> On Tue, Jul 12, 2016 at 6:57 AM, Raja.Aravapalli <
>> Raja.Aravapalli@target.com> wrote:
>>
>>>
>>> Hi,
>>>
>>> My DAG is failing with memory issues for container. Seeing below
>>> information in the log.
>>>
>>>
>>>
>>> Diagnostics: Container [pid=xxx,containerID=container_xyclksdjf] is
>>> running beyond physical memory limits. Current usage: 1.0 GB of 1 GB
>>> physical memory used; 2.9 GB of 2.1 GB virtual memory used. Killing
>>> container.
>>>
>>>
>>> Can someone help me on how I can fix this issue. Thanks a lot.
>>>
>>>
>>>
>>> Regards,
>>> Raja.
>>>
>>
>>

Re: DAG is failing due to memory issues

Posted by "Raja.Aravapalli" <Ra...@target.com>.
Thanks for the response Sandesh.

Since our yarn-site.xml is configured with the value 32768 for the property yarn.scheduler.maximum-allocation-mb, it is allocating a max of 32GB and not more than that!!


I wish to know: is there a way I can increase the max allowed value? Or, since it is configured in yarn-site.xml, can I not increase it?



Regards,
Raja.

From: Sandesh Hegde <sa...@datatorrent.com>
Reply-To: "users@apex.apache.org" <us...@apex.apache.org>
Date: Tuesday, July 12, 2016 at 10:46 AM
To: "users@apex.apache.org" <us...@apex.apache.org>
Subject: Re: DAG is failing due to memory issues

Quoting from the doc shared by the Ram, those parameters control operator memory size.


 actual container memory allocated by RM has to lie between

[yarn.scheduler.minimum-allocation-mb, yarn.scheduler.maximum-allocation-mb]

On Tue, Jul 12, 2016 at 8:38 AM Raja.Aravapalli <Ra...@target.com> wrote:

Hi Ram,

I see in the cluster yarn-site.xml, below two properties are configured with below settings..

yarn.scheduler.minimum-allocation-mb ===> 1024
yarn.scheduler.maximum-allocation-mb ===> 32768


So with the above settings at cluster level, I can’t increase the memory allocated for my DAG ?  Is there is any other way, I can increase the memory ?


Thanks a lot.


Regards,
Raja.

From: Munagala Ramanath <ra...@datatorrent.com>
Reply-To: "users@apex.apache.org" <us...@apex.apache.org>
Date: Tuesday, July 12, 2016 at 9:31 AM
To: "users@apex.apache.org" <us...@apex.apache.org>
Subject: Re: DAG is failing due to memory issues

Please see: http://docs.datatorrent.com/troubleshooting/#configuring-memory

Ram

On Tue, Jul 12, 2016 at 6:57 AM, Raja.Aravapalli <Ra...@target.com> wrote:

Hi,

My DAG is failing with memory issues for container. Seeing below information in the log.



Diagnostics: Container [pid=xxx,containerID=container_xyclksdjf] is running beyond physical memory limits. Current usage: 1.0 GB of 1 GB physical memory used; 2.9 GB of 2.1 GB virtual memory used. Killing container.


Can someone help me on how I can fix this issue. Thanks a lot.



Regards,
Raja.


Re: DAG is failing due to memory issues

Posted by Sandesh Hegde <sa...@datatorrent.com>.
Quoting from the doc shared by Ram, those parameters control the operator
memory size.

 actual container memory allocated by RM has to lie between

[yarn.scheduler.minimum-allocation-mb, yarn.scheduler.maximum-allocation-mb]
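
For example, with the settings Raja listed, a 4096 MB container request falls
inside [1024, 32768] and is granted as requested, while a request above
32768 MB would be capped at 32768 MB, which is what the Apex snippet quoted
elsewhere in this thread does.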


On Tue, Jul 12, 2016 at 8:38 AM Raja.Aravapalli <Ra...@target.com>
wrote:

>
> Hi Ram,
>
> I see in the cluster yarn-site.xml, below two properties are configured
> with below settings..
>
> yarn.scheduler.minimum-allocation-mb ===> 1024
> yarn.scheduler.maximum-allocation-mb ===> 32768
>
>
> So with the above settings at cluster level, I can’t increase the memory
> allocated for my DAG ?  Is there is any other way, I can increase the
> memory ?
>
>
> Thanks a lot.
>
>
> Regards,
> Raja.
>
> From: Munagala Ramanath <ra...@datatorrent.com>
> Reply-To: "users@apex.apache.org" <us...@apex.apache.org>
> Date: Tuesday, July 12, 2016 at 9:31 AM
> To: "users@apex.apache.org" <us...@apex.apache.org>
> Subject: Re: DAG is failing due to memory issues
>
> Please see:
> http://docs.datatorrent.com/troubleshooting/#configuring-memory
>
> Ram
>
> On Tue, Jul 12, 2016 at 6:57 AM, Raja.Aravapalli <
> Raja.Aravapalli@target.com> wrote:
>
>>
>> Hi,
>>
>> My DAG is failing with memory issues for container. Seeing below
>> information in the log.
>>
>>
>>
>> Diagnostics: Container [pid=xxx,containerID=container_xyclksdjf] is
>> running beyond physical memory limits. Current usage: 1.0 GB of 1 GB
>> physical memory used; 2.9 GB of 2.1 GB virtual memory used. Killing
>> container.
>>
>>
>> Can someone help me on how I can fix this issue. Thanks a lot.
>>
>>
>>
>> Regards,
>> Raja.
>>
>
>

Re: DAG is failing due to memory issues

Posted by "Raja.Aravapalli" <Ra...@target.com>.
Hi Ram,

I see that in the cluster yarn-site.xml, the two properties below are configured with the following settings:

yarn.scheduler.minimum-allocation-mb ===> 1024
yarn.scheduler.maximum-allocation-mb ===> 32768


So with the above settings at the cluster level, can I not increase the memory allocated for my DAG? Is there any other way I can increase the memory?


Thanks a lot.


Regards,
Raja.

From: Munagala Ramanath <ra...@datatorrent.com>
Reply-To: "users@apex.apache.org" <us...@apex.apache.org>
Date: Tuesday, July 12, 2016 at 9:31 AM
To: "users@apex.apache.org" <us...@apex.apache.org>
Subject: Re: DAG is failing due to memory issues

Please see: http://docs.datatorrent.com/troubleshooting/#configuring-memory

Ram

On Tue, Jul 12, 2016 at 6:57 AM, Raja.Aravapalli <Ra...@target.com> wrote:

Hi,

My DAG is failing with memory issues for container. Seeing below information in the log.



Diagnostics: Container [pid=xxx,containerID=container_xyclksdjf] is running beyond physical memory limits. Current usage: 1.0 GB of 1 GB physical memory used; 2.9 GB of 2.1 GB virtual memory used. Killing container.


Can someone help me on how I can fix this issue. Thanks a lot.



Regards,
Raja.


Re: DAG is failing due to memory issues

Posted by Munagala Ramanath <ra...@datatorrent.com>.
Please see: http://docs.datatorrent.com/troubleshooting/#configuring-memory

Ram

On Tue, Jul 12, 2016 at 6:57 AM, Raja.Aravapalli <Raja.Aravapalli@target.com
> wrote:

>
> Hi,
>
> My DAG is failing with memory issues for container. Seeing below
> information in the log.
>
>
>
> Diagnostics: Container [pid=xxx,containerID=container_xyclksdjf] is
> running beyond physical memory limits. Current usage: 1.0 GB of 1 GB
> physical memory used; 2.9 GB of 2.1 GB virtual memory used. Killing
> container.
>
>
> Can someone help me on how I can fix this issue. Thanks a lot.
>
>
>
> Regards,
> Raja.
>

Re: DAG is failing due to memory issues

Posted by Aniruddha Thombare <an...@datatorrent.com>.
Also from a previous thread on users:

---------- Forwarded message ----------
From: Shubham Pathak <sh...@datatorrent.com>
Date: Wed, Jun 15, 2016 at 1:57 AM
Subject: Re: Containers Not getting Allocated.
To: users@apex.apache.org

Hello,

This was a Hadoop configuration issue. yarn.nodemanager.resource.cpu-vcores
was set to 1. So total vCores available were 3 but the requirement was 4
and hence containers were not getting allocated. On increasing to 8, app
got the required resources.

Thanks,
Shubham
_____________________________________
Quote ends
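
For reference, a minimal yarn-site.xml sketch of the vcores setting mentioned in
the quote above (the value 8 mirrors that fix; adjust to the cores on your nodes):

<!-- sketch only: vcores the NodeManager advertises to the ResourceManager -->
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>8</value>
</property>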

Thanks,

A

_____________________________________
Sent with difficulty, I mean handheld ;)
On 12 Jul 2016 7:59 pm, "Aniruddha Thombare" <an...@datatorrent.com>
wrote:

> Hi,
>
> Can you check your YARN memory configuration?
>
> This may help you:
>
> http://www.cloudera.com/documentation/enterprise/5-5-x/topics/cdh_ig_yarn_tuning.html
>
> Thanks,
>
> A
>
> _____________________________________
> Sent with difficulty, I mean handheld ;)
> On 12 Jul 2016 7:53 pm, "Raja.Aravapalli" <Ra...@target.com>
> wrote:
>
>>
>> Hi,
>>
>> My DAG is failing with memory issues for container. Seeing below
>> information in the log.
>>
>>
>>
>> Diagnostics: Container [pid=xxx,containerID=container_xyclksdjf] is
>> running beyond physical memory limits. Current usage: 1.0 GB of 1 GB
>> physical memory used; 2.9 GB of 2.1 GB virtual memory used. Killing
>> container.
>>
>>
>> Can someone help me on how I can fix this issue. Thanks a lot.
>>
>>
>>
>> Regards,
>> Raja.
>>
>

Re: DAG is failing due to memory issues

Posted by Aniruddha Thombare <an...@datatorrent.com>.
Hi,

Can you check your YARN memory configuration?

This may help you:
http://www.cloudera.com/documentation/enterprise/5-5-x/topics/cdh_ig_yarn_tuning.html

Thanks,

A

_____________________________________
Sent with difficulty, I mean handheld ;)
On 12 Jul 2016 7:53 pm, "Raja.Aravapalli" <Ra...@target.com>
wrote:

>
> Hi,
>
> My DAG is failing with memory issues for container. Seeing below
> information in the log.
>
>
>
> Diagnostics: Container [pid=xxx,containerID=container_xyclksdjf] is
> running beyond physical memory limits. Current usage: 1.0 GB of 1 GB
> physical memory used; 2.9 GB of 2.1 GB virtual memory used. Killing
> container.
>
>
> Can someone help me on how I can fix this issue. Thanks a lot.
>
>
>
> Regards,
> Raja.
>

Re: DAG is failing due to memory issues

Posted by Sairam Kannan <sk...@hawk.iit.edu>.
Hi Raja,
Try if this helps: set the property below in yarn-site.xml to a higher value.
It specifies the maximum percentage of resources in the cluster which can be
used to run Application Masters; the default is 0.1.

<property>
  <name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
  <value>100</value>
</property>

Thanks and Regards,

Sairam Kannan


On Tue, Jul 12, 2016 at 8:57 AM, Raja.Aravapalli <Raja.Aravapalli@target.com
> wrote:

>
> Hi,
>
> My DAG is failing with memory issues for container. Seeing below
> information in the log.
>
>
>
> Diagnostics: Container [pid=xxx,containerID=container_xyclksdjf] is
> running beyond physical memory limits. Current usage: 1.0 GB of 1 GB
> physical memory used; 2.9 GB of 2.1 GB virtual memory used. Killing
> container.
>
>
> Can someone help me on how I can fix this issue. Thanks a lot.
>
>
>
> Regards,
> Raja.
>