Posted to mapreduce-issues@hadoop.apache.org by "Nicolas Fraison (JIRA)" <ji...@apache.org> on 2017/09/04 14:02:00 UTC

[jira] [Commented] (MAPREDUCE-6659) Mapreduce App master waits long to kill containers on lost nodes.

    [ https://issues.apache.org/jira/browse/MAPREDUCE-6659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16152640#comment-16152640 ] 

Nicolas Fraison commented on MAPREDUCE-6659:
--------------------------------------------

We have developed a patch that should help with this issue by handling NodeManager lost events in the AM, as described below:
* on NodeManager service unavailability (crash, OOM, ...):
		When a lost-NodeManager event is received, the AM fails the impacted attempt and does not go through the cleanup stage.

* on NodeManager server unavailability, where with default settings the AM first detects that the attempt has timed out and tries to clean it up:
		When a lost-NodeManager event is received, the AM stops the cleanup process on the impacted container and fails the attempt.

This reduces the overall wait from the RPC retry timeout to the time needed to detect that the NodeManager is down.
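
To make the intent concrete, here is a minimal, self-contained sketch of the approach (the class, method and field names below are hypothetical placeholders for illustration, not the actual MR AM classes touched by the patch):

{code:java}
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

// Hypothetical illustration only: fail the attempts running on a lost
// NodeManager right away instead of entering the container-cleanup path,
// which would otherwise block on RPC retries against the dead node.
class LostNodeHandler {

  /** Hypothetical stand-in for a running task attempt. */
  static class Attempt {
    final String id;
    Attempt(String id) { this.id = id; }

    void failWithoutCleanup(String reason) {
      // Mark the attempt failed immediately and skip (or abort) the
      // container cleanup stage for this attempt.
      System.out.println("Attempt " + id + " failed: " + reason);
    }
  }

  /** node id -> attempts currently running on that node. */
  private final Map<String, List<Attempt>> attemptsByNode = new ConcurrentHashMap<>();

  void register(String nodeId, Attempt attempt) {
    attemptsByNode
        .computeIfAbsent(nodeId, n -> new CopyOnWriteArrayList<>())
        .add(attempt);
  }

  /** Called when the AM receives a lost-NodeManager event. */
  void onNodeLost(String nodeId) {
    List<Attempt> attempts = attemptsByNode.remove(nodeId);
    if (attempts == null) {
      return;
    }
    for (Attempt attempt : attempts) {
      // Covers both cases above: the attempt is failed directly, and any
      // cleanup already started for it is not pursued further.
      attempt.failWithoutCleanup("NodeManager " + nodeId + " reported lost");
    }
  }
}
{code}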
Could you please grant me the rights to attach the patch to this issue?

> Mapreduce App master waits long to kill containers on lost nodes.
> -----------------------------------------------------------------
>
>                 Key: MAPREDUCE-6659
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6659
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: mr-am
>    Affects Versions: 2.6.0
>            Reporter: Laxman
>            Assignee: Jun Gong
>
> MR Application master waits for a very long time to clean up and relaunch the tasks on lost nodes. The wait time is actually 2.5 hours (ipc.client.connect.max.retries * ipc.client.connect.max.retries.on.timeouts * ipc.client.connect.timeout = 10 * 45 * 20 = 9000 seconds = 2.5 hours).
> A similar issue in the RM-AM RPC protocol was fixed in YARN-3809.
> As was done in YARN-3809, we may need to introduce new configurations to control this RPC retry behavior.
> Also, I feel this total retry time should honor, and be capped at, the global task timeout (mapreduce.task.timeout = 600000 ms by default).
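
For reference, the 2.5-hour figure in the description above is just the product of the three IPC settings quoted there; a quick plain-Java check using the quoted values (no Hadoop dependency needed):

{code:java}
// Back-of-the-envelope check of the worst-case wait described above,
// using the default values quoted in the report.
public class RetryWaitEstimate {
  public static void main(String[] args) {
    int connectMaxRetries = 10;     // ipc.client.connect.max.retries
    int retriesOnTimeouts = 45;     // ipc.client.connect.max.retries.on.timeouts
    int connectTimeoutSec = 20;     // ipc.client.connect.timeout, taken as 20 s as in the report
    long taskTimeoutMs = 600000L;   // mapreduce.task.timeout default

    long worstCaseSec = (long) connectMaxRetries * retriesOnTimeouts * connectTimeoutSec;
    System.out.println("Worst-case wait: " + worstCaseSec + " s (= "
        + (worstCaseSec / 3600.0) + " h)");                               // 9000 s = 2.5 h
    System.out.println("Task timeout:    " + (taskTimeoutMs / 1000) + " s");  // 600 s
  }
}
{code}

which illustrates why capping the retries at the task timeout (600 s), as suggested in the description, seems reasonable.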



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: mapreduce-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: mapreduce-issues-help@hadoop.apache.org