Posted to issues@spark.apache.org by "Apache Spark (JIRA)" <ji...@apache.org> on 2017/06/06 06:52:18 UTC

[jira] [Commented] (SPARK-20994) Alleviate memory pressure in StreamManager

    [ https://issues.apache.org/jira/browse/SPARK-20994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16038280#comment-16038280 ] 

Apache Spark commented on SPARK-20994:
--------------------------------------

User 'jinxing64' has created a pull request for this issue:
https://github.com/apache/spark/pull/18211

> Alleviate memory pressure in StreamManager
> ------------------------------------------
>
>                 Key: SPARK-20994
>                 URL: https://issues.apache.org/jira/browse/SPARK-20994
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 2.1.1
>            Reporter: jin xing
>
> In my cluster, the shuffle service is suffering from OOMs.
> We found that a large number of executors were fetching blocks from a single shuffle service. Analyzing the memory, we found that the blockIds ({{shuffle_shuffleId_mapId_reduceId}}) alone take about 1.5 GB.
> In the current code, chunks are fetched from the shuffle service in two steps:
> Step-1. Send {{OpenBlocks}}, which contains the list of blocks to fetch;
> Step-2. Fetch the consecutive chunks from the shuffle service by {{streamId}} and {{chunkIndex}}.
> Conceptually, there is no need to send the block list in Step-1. The client can send the blockId in Step-2; on receiving the {{ChunkFetchRequest}}, the server can check whether the blockId is in the local block manager and send back the response.
> Thus the memory cost on the shuffle service can be reduced.
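
A rough back-of-envelope, purely illustrative (the map/reduce counts and per-String cost below are assumptions, not figures from the reporter's cluster), shows how blockId strings alone can plausibly reach the ~1.5 GB mentioned in the description:

{code:scala}
// Illustrative estimate only; the workload numbers are assumptions,
// not measurements from the cluster described above.
object BlockIdFootprint {
  def main(args: Array[String]): Unit = {
    val mapTasks    = 10000L  // assumed map outputs served by one shuffle service
    val reduceTasks = 2000L   // assumed reducers fetching concurrently
    // A blockId such as "shuffle_3_12345_678" is ~20 characters; with object
    // headers and the backing array, a JVM String of that size is roughly 80 bytes.
    val bytesPerBlockId = 80L
    val totalBlockIds = mapTasks * reduceTasks
    val totalBytes    = totalBlockIds * bytesPerBlockId
    println(f"blockIds held: $totalBlockIds%,d, approx ${totalBytes.toDouble / (1L << 30)}%.1f GiB")
  }
}
{code}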
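
The two exchanges described in the quoted steps can be sketched as below. The case classes are simplified, self-contained stand-ins for illustration rather than Spark's actual network-protocol classes, and {{FetchBlockRequest}} is a hypothetical name for the proposed request shape, not a class from the linked pull request:

{code:scala}
// Step 1 today: client announces the full block list to the shuffle service.
final case class OpenBlocks(appId: String, execId: String, blockIds: Seq[String])
// Step 2 today: client fetches chunks by stream id and chunk index.
final case class ChunkFetchRequest(streamId: Long, chunkIndex: Int)

// Proposed shape (hypothetical): the request itself carries the blockId, so the
// server never has to keep the whole block list from Step 1 in memory.
final case class FetchBlockRequest(appId: String, execId: String, blockId: String)

object FetchFlow {
  def main(args: Array[String]): Unit = {
    val blockIds = Seq("shuffle_0_1_2", "shuffle_0_3_2")

    // Current flow: register the whole list up front, then fetch by index.
    val open   = OpenBlocks("app-1", "exec-1", blockIds)
    val chunks = blockIds.indices.map(i => ChunkFetchRequest(streamId = 42L, chunkIndex = i))

    // Proposed flow: each request names the block directly; the server looks it
    // up in its local block manager and streams the data back.
    val direct = blockIds.map(id => FetchBlockRequest("app-1", "exec-1", id))

    println(s"current:  $open then ${chunks.mkString(", ")}")
    println(s"proposed: ${direct.mkString(", ")}")
  }
}
{code}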



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org