Posted to issues@spark.apache.org by "Nihar Sheth (JIRA)" <ji...@apache.org> on 2018/08/13 23:41:00 UTC
[jira] [Commented] (SPARK-24938) Understand usage of netty's onheap memory use, even with offheap pools
[ https://issues.apache.org/jira/browse/SPARK-24938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16579039#comment-16579039 ]
Nihar Sheth commented on SPARK-24938:
-------------------------------------
After making the change and running the tool on a very simple application, both with and without the change, I saw three netty services whose usage dropped from 16MB to 0. They are:
netty-rpc-client-usedHeapMem
netty-blockTransfer-client-usedHeapMem
netty-external-shuffle-client-usedHeapMem
Without the change, 16MB of onheap memory was allocated for each of these three services over their lifetime; with the change, that allocation disappears in all three cases.
Does this sound like the sole source of this particular issue? Or would you expect more memory elsewhere to also be freed up?
> Understand usage of netty's onheap memory use, even with offheap pools
> ----------------------------------------------------------------------
>
> Key: SPARK-24938
> URL: https://issues.apache.org/jira/browse/SPARK-24938
> Project: Spark
> Issue Type: Improvement
> Components: Spark Core
> Affects Versions: 2.4.0
> Reporter: Imran Rashid
> Priority: Major
> Labels: memory-analysis
>
> When we added some instrumentation (using SPARK-24918 and https://github.com/squito/spark-memory), we observed that netty uses a large amount of onheap memory in its pools, in addition to the expected offheap memory. We should figure out why it's using that memory, and whether it's really necessary.
> It might be just this one line:
> https://github.com/apache/spark/blob/master/common/network-common/src/main/java/org/apache/spark/network/protocol/MessageEncoder.java#L82
> which means that even with a small burst of messages, each arena will grow by 16 MB, which could lead to a 128 MB spike from an almost entirely unused pool. Switching to requesting a buffer from the default pool would probably fix this.
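The back-of-the-envelope numbers in the quoted description can be sketched as follows. This is only an illustration: the pageSize and maxOrder values are the netty 4.x pooled-allocator defaults (an assumption, not stated in the issue), and the arena count of 8 is inferred from the 128 MB figure.

```java
// Sketch of the arithmetic behind the onheap spike described above.
// Assumed netty 4.x pooled-allocator defaults: pageSize = 8 KiB, maxOrder = 11.
// A pooled arena grows in whole chunks of pageSize << maxOrder bytes, so a
// single small heap-buffer request can commit a full chunk in that arena.
public class ArenaSpike {
    public static void main(String[] args) {
        int pageSize = 8192;                   // assumed netty default page size
        int maxOrder = 11;                     // assumed netty default maxOrder
        int chunkSize = pageSize << maxOrder;  // bytes committed per arena chunk
        int arenas = 8;                        // hypothetical count implied by the 128 MB figure

        System.out.println("chunk size per arena: " + (chunkSize >> 20) + " MB");
        System.out.println("worst-case spike: " + (((long) arenas * chunkSize) >> 20) + " MB");
        // prints:
        // chunk size per arena: 16 MB
        // worst-case spike: 128 MB
    }
}
```

Under these assumptions, one small header allocation per arena is enough to commit 16 MB each, which is why routing that one allocation to an unpooled/default allocator avoids the spike.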
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org