Posted to issues@spark.apache.org by "DENG FEI (JIRA)" <ji...@apache.org> on 2018/08/08 12:37:00 UTC

[jira] [Commented] (SPARK-25055) MessageWithHeader transfer ByteBuffer from Netty's CompositeByteBuf many times

    [ https://issues.apache.org/jira/browse/SPARK-25055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16573154#comment-16573154 ] 

DENG FEI commented on SPARK-25055:
----------------------------------

{code:java}
private int copyByteBuf(ByteBuf buf, WritableByteChannel target) throws IOException {
  // Convert the (possibly composite) buffer to an NIO buffer once, instead of on
  // every transferTo() call, then keep writing until it is fully drained.
  ByteBuffer buffer = buf.nioBuffer();
  int totalWritten = 0;
  while (buffer.remaining() > 0) {
    int written = (buffer.remaining() <= NIO_BUFFER_LIMIT) ?
      target.write(buffer) : writeNioBuffer(target, buffer);
    // Advance the source ByteBuf's reader index by the amount actually written.
    buf.skipBytes(written);
    totalWritten += written;
  }
  return totalWritten;
}{code}

> MessageWithHeader transfer ByteBuffer from Netty's CompositeByteBuf many times
> ------------------------------------------------------------------------------
>
>                 Key: SPARK-25055
>                 URL: https://issues.apache.org/jira/browse/SPARK-25055
>             Project: Spark
>          Issue Type: Bug
>          Components: Block Manager
>    Affects Versions: 2.3.1
>            Reporter: DENG FEI
>            Priority: Major
>
> MessageWithHeader transfers the header and body when they are ByteBufs. When fetching a remote block larger than 'NIO_BUFFER_LIMIT', ChunkedByteBuffer#toNetty avoids consolidating the chunks, so CompositeByteBuf.nioBuffer() allocates a fresh heap buffer (and copies the components) on every call, which can lead to fetch timeouts.
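
For context, a minimal standalone sketch (class name and buffer sizes are illustrative, not taken from Spark) of why a multi-component CompositeByteBuf pays a fresh heap allocation and copy on each nioBuffer() call:

{code:java}
import io.netty.buffer.CompositeByteBuf;
import io.netty.buffer.Unpooled;

import java.nio.ByteBuffer;

public class CompositeNioBufferDemo {
  public static void main(String[] args) {
    // A composite buffer with more than one component has no single contiguous
    // backing region, so nioBuffer() must merge the components into a new
    // heap ByteBuffer on every call.
    CompositeByteBuf composite = Unpooled.compositeBuffer();
    composite.addComponent(true, Unpooled.wrappedBuffer(new byte[8 * 1024 * 1024]));
    composite.addComponent(true, Unpooled.wrappedBuffer(new byte[8 * 1024 * 1024]));

    ByteBuffer first = composite.nioBuffer();
    ByteBuffer second = composite.nioBuffer();

    // Distinct objects: each call allocated and filled a fresh 16 MB buffer,
    // which is what the report describes happening on every transferTo() round.
    System.out.println(first == second);  // false
  }
}
{code}

The proposed copyByteBuf above converts the buffer once and reuses the resulting NIO view across writes, so the merge happens at most once per message rather than once per write.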


