Posted to yarn-issues@hadoop.apache.org by "Chen He (JIRA)" <ji...@apache.org> on 2017/10/03 01:16:00 UTC

[jira] [Commented] (YARN-5140) NM usercache fill up with burst of jobs leading to rapid temp IO FS fill up and potentially NM outage

    [ https://issues.apache.org/jira/browse/YARN-5140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16189144#comment-16189144 ] 

Chen He commented on YARN-5140:
-------------------------------

Hi [~okalinin], this is an interesting issue. According to the description, if I understand correctly, could we avoid multiple NM crashes by reducing "yarn.nodemanager.localizer.cache.cleanup.interval-ms" and increasing "yarn.nodemanager.localizer.cache.target-size-mb"?
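If that tuning helps, the change would be a yarn-site.xml fragment along these lines. The values below are purely illustrative, not recommendations; the defaults are 600000 ms (10 min) and 10240 MB (10 GB) respectively:

```xml
<!-- Illustrative values only: run the cache cleaner more often than the
     10-minute default, and raise the target cache size above 10 GB so the
     cleaner has more headroom before /tmp hits the 90% disk threshold. -->
<property>
  <name>yarn.nodemanager.localizer.cache.cleanup.interval-ms</name>
  <value>60000</value> <!-- 1 minute, vs. default 600000 -->
</property>
<property>
  <name>yarn.nodemanager.localizer.cache.target-size-mb</name>
  <value>20480</value> <!-- 20 GB, vs. default 10240 -->
</property>
```

Note this only shortens the window between cleanup passes; it does not make the localizer enforce the target size at localization time, which is the core issue described below.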

> NM usercache fill up with burst of jobs leading to rapid temp IO FS fill up and potentially NM outage
> -----------------------------------------------------------------------------------------------------
>
>                 Key: YARN-5140
>                 URL: https://issues.apache.org/jira/browse/YARN-5140
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: nodemanager
>    Affects Versions: 2.7.0
>         Environment: Linux RHEL 6.7, Hadoop 2.7.0
>            Reporter: Oleksandr Kalinin
>            Priority: Minor
>
> A burst or rapid rate of submitted jobs with substantial NM usercache resource localization footprint may lead to rapid fill up of the NM local temporary IO FS (/tmp by default) with negative consequences in terms of stability.
> The core issue seems to be the fact that NM continues to localize resources beyond the maximum local cache size (yarn.nodemanager.localizer.cache.target-size-mb, default 10G). Since the maximum local cache size is effectively not taken into account when localizing new resources (note that the default cache cleanup interval is 10 min, controlled by yarn.nodemanager.localizer.cache.cleanup.interval-ms), this basically leads to a self-destruction scenario: once /tmp FS utilization reaches the threshold of 90%, NM will automatically de-register from RM, effectively leading to NM outage.
> This issue may take many NMs offline simultaneously and thus is quite critical in terms of platform stability.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: yarn-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: yarn-issues-help@hadoop.apache.org