Posted to issues@spark.apache.org by "Apache Spark (JIRA)" <ji...@apache.org> on 2016/03/31 11:59:25 UTC

[jira] [Commented] (SPARK-14290) Fully utilize the network bandwidth for Netty RPC by avoiding significant underlying memory copy

    [ https://issues.apache.org/jira/browse/SPARK-14290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15219692#comment-15219692 ] 

Apache Spark commented on SPARK-14290:
--------------------------------------

User 'liyezhang556520' has created a pull request for this issue:
https://github.com/apache/spark/pull/12083

> Fully utilize the network bandwidth for Netty RPC by avoiding significant underlying memory copy
> ------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-14290
>                 URL: https://issues.apache.org/jira/browse/SPARK-14290
>             Project: Spark
>          Issue Type: Improvement
>          Components: Input/Output, Spark Core
>    Affects Versions: 1.2.0, 1.3.0, 1.4.0, 1.5.0, 1.6.0, 2.0.0
>            Reporter: Zhang, Liye
>
> When Netty transfers data that does not come from a *FileRegion*, the data is sent as a *ByteBuf*. If the data is large, a significant performance problem occurs because of an underlying memory copy in *sun.nio.ch.IOUtil.write*: the CPU is 100% busy while network utilization stays very low. This can be verified in Spark 1.4 by comparing the *NIO* and *Netty* settings of *spark.shuffle.blockTransferService*; NIO achieves much higher network bandwidth than Netty.
> How to reproduce:
> {code}
> sc.parallelize(Array(1, 2, 3), 3)
>   .mapPartitions(_ => Iterator(new Array[Double](1024 * 1024 * 50)))
>   .reduce((a, b) => a)
>   .length
> {code}
> The root cause is explained [here|http://stackoverflow.com/questions/34493320/how-does-buffer-size-affect-nio-channel-performance].
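> As a side note, here is a minimal sketch of the mitigation the linked answer points to. This is a hypothetical illustration, not the change in the PR above, and the 256 KB chunk size is an arbitrary example value: capping the bytes exposed per *write()* call keeps the temporary direct-buffer copy inside *sun.nio.ch.IOUtil.write* bounded.
> {code}
> import java.nio.ByteBuffer
> import java.nio.channels.WritableByteChannel
>
> // Writing a large heap ByteBuffer in one call makes sun.nio.ch.IOUtil
> // copy the entire remaining region into a temporary direct buffer.
> // Shrinking the buffer's limit before each write() bounds that copy.
> def writeChunked(ch: WritableByteChannel, buf: ByteBuffer,
>                  chunkSize: Int = 256 * 1024): Unit = {
>   while (buf.hasRemaining) {
>     val end = buf.limit()                                 // remember the real end
>     buf.limit(math.min(buf.position() + chunkSize, end))  // expose at most one chunk
>     ch.write(buf)                                         // copies <= chunkSize bytes
>     buf.limit(end)                                        // restore for the next round
>   }
> }
> {code}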



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org