Posted to issues@spark.apache.org by "Anton Ippolitov (Jira)" <ji...@apache.org> on 2019/10/25 15:31:00 UTC

[jira] [Commented] (SPARK-28743) YarnShuffleService leads to NodeManager OOM because ChannelOutboundBuffer has too many entries

    [ https://issues.apache.org/jira/browse/SPARK-28743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16959858#comment-16959858 ] 

Anton Ippolitov commented on SPARK-28743:
-----------------------------------------

Hi!

We are seeing the exact same issue with Spark 2.4.4. More specifically, the issue arises only for a handful of our Spark jobs and only when we enable transport encryption ({{spark.network.crypto.enabled}} set to {{true}}). We can reproduce the problem consistently: the NodeManager OOMs every time we launch these particular jobs. When transport encryption is disabled, we don't see this issue anymore.

I have tried bumping the NodeManager's memory via {{YARN_NODEMANAGER_HEAPSIZE}}: I set it to 8GB, 16GB and 32GB, but the NodeManager OOMs every time.
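
For concreteness, here is a minimal sketch of the settings under which the issue reproduces for us (the shuffle-service flag is an assumption about a typical YARN setup; our jobs set more than just this):

{code:java}
import org.apache.spark.SparkConf;

// Illustrative sketch only: the settings under which the OOM reproduces for us.
// spark.shuffle.service.enabled is assumed here because shuffle data is served by
// the YarnShuffleService running inside the NodeManager.
SparkConf conf = new SparkConf()
    .set("spark.network.crypto.enabled", "true")    // transport encryption; with this off we do not see the OOM
    .set("spark.shuffle.service.enabled", "true");  // external shuffle service hosted in the YARN NodeManager
{code}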

I captured a couple of thread dumps from the NodeManager and they look very similar to the one posted by [~yangjiandan] (see the attached screenshot).

Would anyone have any insight into this issue? I would be happy to provide more information if needed.

 

  !Screen Shot 2019-10-25 at 17.24.10.png!

 

> YarnShuffleService leads to NodeManager OOM because ChannelOutboundBuffer has too many entries
> ----------------------------------------------------------------------------------------------
>
>                 Key: SPARK-28743
>                 URL: https://issues.apache.org/jira/browse/SPARK-28743
>             Project: Spark
>          Issue Type: Bug
>          Components: Shuffle
>    Affects Versions: 2.3.0
>            Reporter: Jiandan Yang 
>            Priority: Major
>         Attachments: Screen Shot 2019-10-25 at 17.24.10.png, dominator.jpg, histo.jpg
>
>
> The NodeManager heap size is 4G. According to the histogram in MAT (Eclipse Memory Analyzer), io.netty.channel.ChannelOutboundBuffer$Entry objects occupied about 2.8G, and MAT's dominator tree shows those entries were held by a ChannelOutboundBuffer. Analyzing one ChannelOutboundBuffer object, I found it contained 248867 entries (ChannelOutboundBuffer#flushed=248867), with ChannelOutboundBuffer#totalPendingSize=23891232, far above the 64K high water mark, and unwritable=1, meaning the send buffer was full. However, the ChannelHandler does not seem to check the writability flag before writing messages, so entries keep piling up and the NodeManager eventually OOMs.
> Histogram:
> !histo.jpg!
> dominator_tree:
> !dominator.jpg!
>  
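
For illustration, the write-without-writability-check pattern described in the quoted report can be sketched with plain Netty APIs (the class and method names below are hypothetical, not Spark's actual shuffle handler):

{code:java}
import io.netty.channel.Channel;

// Hypothetical sketch, not Spark's actual code: shows why ignoring the
// writability flag lets ChannelOutboundBuffer grow without bound.
public final class ChunkSender {
    private final Channel channel;

    public ChunkSender(Channel channel) {
        this.channel = channel;
    }

    // Problematic pattern: every write is queued in the ChannelOutboundBuffer even
    // after totalPendingSize has crossed the high water mark (unwritable=1), so a
    // slow or stalled consumer makes the buffer (and the NodeManager heap) grow.
    public void sendUnchecked(Object chunk) {
        channel.writeAndFlush(chunk);
    }

    // Back-pressure-aware pattern: only write while the channel is writable and
    // pause the producer otherwise, resuming on a channelWritabilityChanged() callback.
    public void sendWithBackPressure(Object chunk) {
        if (channel.isWritable()) {
            channel.writeAndFlush(chunk);
        } else {
            // park the chunk or throttle the producer until the channel drains
        }
    }
}
{code}

Netty's channelWritabilityChanged() callback and the write-buffer water mark settings are the usual hooks for this kind of back pressure, which seems to be what is missing in the failing path here.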



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org