Posted to issues@spark.apache.org by "poseidon (JIRA)" <ji...@apache.org> on 2017/09/08 08:49:00 UTC

[jira] [Created] (SPARK-21955) OneForOneStreamManager may leak memory when network is poor

poseidon created SPARK-21955:
--------------------------------

             Summary: OneForOneStreamManager may leak memory when network is poor
                 Key: SPARK-21955
                 URL: https://issues.apache.org/jira/browse/SPARK-21955
             Project: Spark
          Issue Type: Bug
          Components: Block Manager
    Affects Versions: 1.6.1
         Environment: hdp 2.4.2.0-258 
spark 1.6 
            Reporter: poseidon


While digging into how streams, chunks, and blocks work in Netty, I found a nasty case.

Processing an OpenBlocks message registers a stream in OneForOneStreamManager:
org.apache.spark.network.server.OneForOneStreamManager#registerStream
fills the StreamState with the app ID and the block buffers.
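For reference, a condensed paraphrase of that path from the Spark 1.6 source (constructor and error handling omitted):

{code:java}
// Condensed from org.apache.spark.network.server.OneForOneStreamManager (Spark 1.6).
private static class StreamState {
  final String appId;
  final Iterator<ManagedBuffer> buffers;  // block data still to be served
  Channel associatedChannel = null;       // only set later, by registerChannel()
  int curChunk = 0;                       // index of the next chunk to serve
}

public long registerStream(String appId, Iterator<ManagedBuffer> buffers) {
  long myStreamId = nextStreamId.getAndIncrement();
  // No channel is known at this point, so associatedChannel stays null.
  streams.put(myStreamId, new StreamState(appId, buffers));
  return myStreamId;
}
{code}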

Processing a ChunkFetchRequest then registers the channel:
org.apache.spark.network.server.OneForOneStreamManager#registerChannel
fills the StreamState with the associated channel.
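The second half of the handshake, again paraphrased from the 1.6 source:

{code:java}
// Condensed from OneForOneStreamManager#registerChannel (Spark 1.6): invoked
// when the first ChunkFetchRequest for a stream arrives on a channel.
@Override
public void registerChannel(Channel channel, long streamId) {
  if (streams.containsKey(streamId)) {
    streams.get(streamId).associatedChannel = channel;
  }
}
{code}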

In org.apache.spark.network.shuffle.OneForOneBlockFetcher#start,
OpenBlocks -> ChunkFetchRequest are sent in sequence.

If the network goes down while the OpenBlocks step is in flight, no ChunkFetchRequest message ever arrives.

So we can see leaked buffers in OneForOneStreamManager:

!attachment-name.jpg|thumbnail!

If org.apache.spark.network.server.OneForOneStreamManager.StreamState#associatedChannel is never set then, from my reading of the code, the StreamState will remain in memory forever.

That is because the only ways to release it are when the channel closes (connectionTerminated) or when someone reads the last chunk of the stream.
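Concretely, the close-time cleanup only touches streams whose associatedChannel matches the dying channel; here is a condensed paraphrase of OneForOneStreamManager#connectionTerminated from the 1.6 source:

{code:java}
// Condensed from OneForOneStreamManager#connectionTerminated (Spark 1.6).
@Override
public void connectionTerminated(Channel channel) {
  for (Map.Entry<Long, StreamState> entry : streams.entrySet()) {
    StreamState state = entry.getValue();
    // A stream that never received a ChunkFetchRequest still has
    // associatedChannel == null, so it never matches here and its
    // remaining buffers are never released.
    if (state.associatedChannel == channel) {
      streams.remove(entry.getKey());
      while (state.buffers.hasNext()) {
        state.buffers.next().release();
      }
    }
  }
}
{code}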


In OneForOneStreamManager#registerStream we could set the channel as well, to guard against exactly this case; see the sketch below.
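A minimal sketch of that idea, assuming registerStream is handed the channel up front (the extra Channel parameter is the proposed change, not the current 1.6 signature):

{code:java}
// Proposed variant (sketch only): associate the channel at registration time
// so connectionTerminated can always find and release the stream.
public long registerStream(String appId, Iterator<ManagedBuffer> buffers, Channel channel) {
  long myStreamId = nextStreamId.getAndIncrement();
  StreamState state = new StreamState(appId, buffers);
  state.associatedChannel = channel;  // never null from here on
  streams.put(myStreamId, state);
  return myStreamId;
}
{code}

With this, a connection that dies between OpenBlocks and the first ChunkFetchRequest still gets its StreamState cleaned up by connectionTerminated.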



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org