Posted to reviews@spark.apache.org by kiszk <gi...@git.apache.org> on 2018/08/14 18:26:42 UTC

[GitHub] spark pull request #22105: [SPARK-25115] [Core] Eliminate extra memory copy ...

Github user kiszk commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22105#discussion_r210056504
  
    --- Diff: common/network-common/src/main/java/org/apache/spark/network/protocol/MessageWithHeader.java ---
    @@ -140,8 +140,24 @@ private int copyByteBuf(ByteBuf buf, WritableByteChannel target) throws IOExcept
         // SPARK-24578: cap the sub-region's size of returned nio buffer to improve the performance
         // for the case that the passed-in buffer has too many components.
         int length = Math.min(buf.readableBytes(), NIO_BUFFER_LIMIT);
    -    ByteBuffer buffer = buf.nioBuffer(buf.readerIndex(), length);
    -    int written = target.write(buffer);
     +    // If the ByteBuf holds more than one ByteBuffer, it is better to call nioBuffers(...)
     +    // to eliminate extra memory copies.
    +    int written = 0;
    +    if (buf.nioBufferCount() == 1) {
    +      ByteBuffer buffer = buf.nioBuffer(buf.readerIndex(), length);
    +      written = target.write(buffer);
    +    } else {
    +      ByteBuffer[] buffers = buf.nioBuffers(buf.readerIndex(), length);
    +      for (ByteBuffer buffer: buffers) {
    +        int remaining = buffer.remaining();
    +        int w = target.write(buffer);
    +        written += w;
    --- End diff --
    
    Can we guarantee `written` does not overflow while we accumulate `int` values? 
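
    One way to reason about this question: `length` is capped at `NIO_BUFFER_LIMIT` before the buffers are sliced, so the writes accumulated in one call sum to at most `length` bytes, which is itself an `int`. A minimal sketch of that bound (the class name and the 256 KiB constant are illustrative here, not taken from the diff; the real constant lives in `MessageWithHeader`):

    ```java
    import java.nio.ByteBuffer;

    public class WrittenBound {
        // Hypothetical stand-in for the NIO_BUFFER_LIMIT discussed in the diff.
        static final int NIO_BUFFER_LIMIT = 256 * 1024;

        // Sums remaining() over the buffers, mirroring how the loop sums the
        // results of target.write(buffer). Each write() returns at most the
        // buffer's remaining bytes, and the buffers are carved out of a region
        // of `length` bytes, so the running total never exceeds `length`.
        static int totalRemaining(ByteBuffer[] buffers) {
            int written = 0;
            for (ByteBuffer b : buffers) {
                written += b.remaining();
            }
            return written;
        }

        public static void main(String[] args) {
            // length is already clamped to the limit before slicing.
            int length = Math.min(1_000_000, NIO_BUFFER_LIMIT);
            ByteBuffer[] parts = {
                ByteBuffer.allocate(length / 2),
                ByteBuffer.allocate(length - length / 2)
            };
            int written = totalRemaining(parts);
            System.out.println(written == length);          // the sum equals length
            System.out.println(written <= NIO_BUFFER_LIMIT); // and stays under the cap
        }
    }
    ```

    Under that reading, overflow within a single `copyByteBuf` call cannot occur, since `length <= NIO_BUFFER_LIMIT` is far below `Integer.MAX_VALUE`.
    
    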


---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org