Posted to mapreduce-user@hadoop.apache.org by scwf <wa...@huawei.com> on 2014/12/10 06:39:31 UTC

Question about container recovery

Hi all,
   Here is my question: is there a mechanism by which, when a container exits abnormally, YARN will prefer to dispatch the replacement container on another NM?

We have a cluster with 3 NMs (each NM has 135 GB of memory) and 1 RM, and we are running a job which starts 13 containers (= 1 AM + 12 executor containers).

Each NM hosts 4 executor containers, and the memory configured for each executor container is 30 GB. Here is an interesting test: when we killed 4 containers on NM1, only 2 containers were restarted on NM1; the other 2 containers were reserved on NM2 and NM3.

   Any ideas?

Fei.


Re: Question about container recovery

Posted by Vinod Kumar Vavilapalli <vi...@hortonworks.com>.
Replies inline

>  Here is my question: is there a mechanism by which, when a container exits abnormally, YARN will prefer to dispatch the replacement container on another NM?


Acting on a container exit is a responsibility left to ApplicationMasters. For example, the MapReduce ApplicationMaster explicitly tells YARN NOT to launch a task on the same machine where it failed before.
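
For illustration only (this is not the actual MapReduce AM code), a minimal sketch of how a custom ApplicationMaster could do the same thing with the Hadoop 2.x AMRMClient API; the handler name, priority, and the 30 GB / 1 vcore sizing are assumptions taken from this thread's setup:

    import java.util.Collections;

    import org.apache.hadoop.yarn.api.records.Priority;
    import org.apache.hadoop.yarn.api.records.Resource;
    import org.apache.hadoop.yarn.client.api.AMRMClient;
    import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;

    public class FailoverSketch {

        // Hypothetical handler the AM calls when a container exits abnormally.
        static void onContainerFailed(String failedNodeHost,
                                      AMRMClient<ContainerRequest> amRmClient) {
            // Tell the RM not to allocate any more of this app's containers on that node.
            amRmClient.updateBlacklist(
                    Collections.singletonList(failedNodeHost),  // blacklist additions
                    Collections.<String>emptyList());           // blacklist removals

            // Re-request a replacement container; with the node blacklisted, the RM
            // will place it on another NM if capacity allows.
            Resource capability = Resource.newInstance(30 * 1024, 1); // 30 GB, 1 vcore
            Priority priority = Priority.newInstance(1);
            amRmClient.addContainerRequest(
                    new ContainerRequest(capability, null /* nodes */, null /* racks */, priority));
        }
    }

The blacklist entry can later be lifted by passing the same host in the removals list of updateBlacklist.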


> We have a cluster with 3 NMs (each NM has 135 GB of memory) and 1 RM, and we are running a job which starts 13 containers (= 1 AM + 12 executor containers).
> 
> Each NM hosts 4 executor containers, and the memory configured for each executor container is 30 GB. Here is an interesting test: when we killed 4 containers on NM1, only 2 containers were restarted on NM1; the other 2 containers were reserved on NM2 and NM3.


Which application is this?

Was the app stuck waiting for those reservations to be fulfilled?

+Vinod


Re: Question about container recovery

Posted by Vinod Kumar Vavilapalli <vi...@hortonworks.com>.
Is this a MapReduce application?

MR has a concept of blacklisting nodes on which many tasks fail. The configs that control it are listed below (a small example of setting them follows the list):
 - yarn.app.mapreduce.am.job.node-blacklisting.enable: true by default
 - mapreduce.job.maxtaskfailures.per.tracker: default is 3, meaning a node is blacklisted for the job once 3 of its tasks fail on it
 - yarn.app.mapreduce.am.job.node-blacklisting.ignore-threshold-node-percent: 33% by default, meaning the blacklist is ignored once 33% of the cluster is already blacklisted
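
For illustration, a minimal sketch of setting these keys per job through the standard Configuration API before submission; the values shown simply restate the defaults above, and the class and job names are made up:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class BlacklistingDefaults {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();

            // Per-job node blacklisting in the MR AM (enabled by default).
            conf.setBoolean("yarn.app.mapreduce.am.job.node-blacklisting.enable", true);

            // Blacklist a node for this job after 3 task failures on it (the default).
            conf.setInt("mapreduce.job.maxtaskfailures.per.tracker", 3);

            // Ignore the blacklist once 33% of the cluster is already blacklisted (the default).
            conf.setInt("yarn.app.mapreduce.am.job.node-blacklisting.ignore-threshold-node-percent", 33);

            Job job = Job.getInstance(conf, "node-blacklisting-demo");
            // ... set mapper/reducer and input/output paths, then call job.waitForCompletion(true)
        }
    }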

+Vinod

On Dec 10, 2014, at 12:59 AM, scwf <wa...@huawei.com> wrote:

> It seems there is a blacklist in YARN: when all containers on one NM are lost, it will add this NM to the blacklist? Then when will the NM come off the blacklist?
> 
> On 2014/12/10 13:39, scwf wrote:
>> Hi all,
>>   Here is my question: is there a mechanism by which, when a container exits abnormally, YARN will prefer to dispatch the replacement container on another NM?
>> 
>> We have a cluster with 3 NMs (each NM has 135 GB of memory) and 1 RM, and we are running a job which starts 13 containers (= 1 AM + 12 executor containers).
>> 
>> Each NM hosts 4 executor containers, and the memory configured for each executor container is 30 GB. Here is an interesting test: when we killed 4 containers on NM1, only 2 containers were restarted on NM1; the other 2 containers were reserved on NM2 and NM3.
>> 
>>   Any ideas?
>> 
>> Fei.
>> 
>> 
>> 
> 
> 


Re: Question about container recovery

Posted by scwf <wa...@huawei.com>.
It seems there is a blacklist in YARN: when all containers on one NM are lost, it will add this NM to the blacklist? Then when will the NM come off the blacklist?

On 2014/12/10 13:39, scwf wrote:
> Hi all,
>    Here is my question: is there a mechanism by which, when a container exits abnormally, YARN will prefer to dispatch the replacement container on another NM?
>
> We have a cluster with 3 NMs (each NM has 135 GB of memory) and 1 RM, and we are running a job which starts 13 containers (= 1 AM + 12 executor containers).
>
> Each NM hosts 4 executor containers, and the memory configured for each executor container is 30 GB. Here is an interesting test: when we killed 4 containers on NM1, only 2 containers were restarted on NM1; the other 2 containers were reserved on NM2 and NM3.
>
>    Any ideas?
>
> Fei.
>
>
>


