Posted to issues@spark.apache.org by "ice bai (JIRA)" <ji...@apache.org> on 2018/04/23 10:42:00 UTC
[jira] [Commented] (SPARK-20426) OneForOneStreamManager occupies too much memory.
[ https://issues.apache.org/jira/browse/SPARK-20426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16447935#comment-16447935 ]
ice bai commented on SPARK-20426:
---------------------------------
Refer to the following issue:
https://issues.apache.org/jira/browse/SPARK-20994
> OneForOneStreamManager occupies too much memory.
> ------------------------------------------------
>
> Key: SPARK-20426
> URL: https://issues.apache.org/jira/browse/SPARK-20426
> Project: Spark
> Issue Type: Improvement
> Components: Shuffle
> Affects Versions: 2.1.0
> Reporter: jin xing
> Assignee: jin xing
> Priority: Major
> Fix For: 2.2.0
>
> Attachments: screenshot-1.png, screenshot-2.png
>
>
> Spark jobs are running on a YARN cluster in my warehouse. We enabled the external shuffle service (*--conf spark.shuffle.service.enabled=true*). Recently, NodeManager has been running OOM now and then. Dumping heap memory, we found that *OneForOneStreamManager*'s footprint is huge. NodeManager is configured with 5G heap memory, while *OneForOneStreamManager* costs 2.5G and there are 5,503,233 *FileSegmentManagedBuffer* objects. Are there any suggestions to avoid this other than just increasing NodeManager's memory? Is it possible to stop *registerStream* in OneForOneStreamManager, so that we don't need to cache so much metadata (i.e. StreamState)?
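
The growth pattern described above can be sketched as follows. This is a hypothetical, much-simplified model, not Spark's actual OneForOneStreamManager: the class and field names (StreamManagerSketch, chunkMetadata, cachedChunkCount) are illustrative. The point it shows is that each registerStream call parks per-chunk metadata in a map that is only cleared on explicit cleanup, so the cached entries grow linearly with the number of registered streams.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Simplified sketch of a one-for-one stream manager. Each registered
// stream caches a StreamState holding per-chunk metadata, released only
// when the connection terminates. Names are illustrative assumptions.
public class StreamManagerSketch {
    static class StreamState {
        // Stands in for the per-chunk FileSegmentManagedBuffer references.
        final List<String> chunkMetadata;
        StreamState(List<String> chunkMetadata) { this.chunkMetadata = chunkMetadata; }
    }

    private final AtomicLong nextStreamId = new AtomicLong(0);
    private final Map<Long, StreamState> streams = new ConcurrentHashMap<>();

    long registerStream(List<String> chunks) {
        long id = nextStreamId.getAndIncrement();
        streams.put(id, new StreamState(chunks)); // metadata stays cached here
        return id;
    }

    void connectionTerminated(long id) {
        streams.remove(id); // only now is the metadata released
    }

    long cachedChunkCount() {
        return streams.values().stream()
                .mapToLong(s -> s.chunkMetadata.size()).sum();
    }

    public static void main(String[] args) {
        StreamManagerSketch mgr = new StreamManagerSketch();
        // Many map-output streams registered without prompt cleanup:
        // cached metadata grows linearly with streams * chunks.
        for (int i = 0; i < 1000; i++) {
            List<String> chunks = new ArrayList<>();
            for (int c = 0; c < 100; c++) {
                chunks.add("shuffle_0_" + i + "_" + c);
            }
            mgr.registerStream(chunks);
        }
        System.out.println(mgr.cachedChunkCount()); // 100000 entries held live
    }
}
```

With 5.5 million cached buffer objects as in the heap dump above, even small per-entry overhead adds up to gigabytes, which is why reducing or eliminating the registerStream cache is the direction the issue proposes.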
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org