Posted to issues@spark.apache.org by "Josh Rosen (JIRA)" <ji...@apache.org> on 2018/05/15 16:26:00 UTC
[jira] [Comment Edited] (SPARK-24107) ChunkedByteBuffer.writeFully method has not reset the limit value
[ https://issues.apache.org/jira/browse/SPARK-24107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16475350#comment-16475350 ]
Josh Rosen edited comment on SPARK-24107 at 5/15/18 4:25 PM:
-------------------------------------------------------------
To work around this bug on unpatched / unhotfixed Spark 2.3.x releases, you can set the following Spark configuration at SparkContext creation time:
{{spark.buffer.write.chunkSize 2147483647}}
This effectively undoes the effects of SPARK-21527.
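As a sketch, the workaround above can be applied when building the SparkConf before the SparkContext is created. This is an illustrative Java driver fragment, not part of the original comment; the property name and value come from the comment, everything else (app name, variable names) is made up:

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class WorkaroundDriver {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
            .setAppName("writeFully-workaround") // hypothetical app name
            // Restore pre-SPARK-21527 behaviour: write each chunk in a single
            // slice of up to Integer.MAX_VALUE bytes, so the limit is never shrunk.
            .set("spark.buffer.write.chunkSize", "2147483647");
        JavaSparkContext sc = new JavaSparkContext(conf);
        // ... job code ...
        sc.stop();
    }
}
```

The setting must be in place at SparkContext creation time, as the comment notes; changing it afterwards has no effect on already-constructed contexts.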
was (Author: joshrosen):
To work around this bug on unpatched / unhotfixed Spark 2.3.x releases, you can set the following Spark configuration at SparkContext creation time:
{{spark.buffer.write.chunkSize 147483647}}
This effectively undoes the effects of SPARK-21527.
> ChunkedByteBuffer.writeFully method has not reset the limit value
> -----------------------------------------------------------------
>
> Key: SPARK-24107
> URL: https://issues.apache.org/jira/browse/SPARK-24107
> Project: Spark
> Issue Type: Bug
> Components: Block Manager, Input/Output
> Affects Versions: 2.3.0
> Reporter: wangjinhai
> Assignee: wangjinhai
> Priority: Blocker
> Labels: correctness
> Fix For: 2.3.1, 2.4.0
>
>
> ChunkedByteBuffer.writeFully shrinks each chunk's ByteBuffer limit to carve out a
> write slice but never resets it. When a chunk is larger than
> config.BUFFER_WRITE_CHUNK_SIZE (64 * 1024 * 1024), e.g. 80 * 1024 * 1024, the write
> loop therefore runs only once and the remaining 16 * 1024 * 1024 bytes are silently lost.
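The effect can be reproduced outside Spark with a plain java.nio ByteBuffer. The loops below mirror the shape of the buggy writeFully and of the fix shipped in 2.3.1/2.4.0 (restoring the limit after each partial write); the method names and the scaled-down 80-byte/64-byte sizes are illustrative, not Spark's actual code:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.Channels;
import java.nio.channels.WritableByteChannel;

public class WriteFullyDemo {
    // Mirrors the buggy loop: the limit is shrunk to the write slice but never
    // restored, so remaining() drops to 0 after the first write and the loop exits.
    static int buggyWrite(ByteBuffer buf, WritableByteChannel ch, int chunkSize)
            throws IOException {
        int written = 0;
        while (buf.hasRemaining()) {
            int ioSize = Math.min(buf.remaining(), chunkSize);
            buf.limit(buf.position() + ioSize); // limit never reset
            written += ch.write(buf);
        }
        return written;
    }

    // Mirrors the fix: remember the chunk's original limit and restore it after
    // each partial write, so the next pass still sees the unwritten bytes.
    static int fixedWrite(ByteBuffer buf, WritableByteChannel ch, int chunkSize)
            throws IOException {
        int written = 0;
        int originalLimit = buf.limit();
        while (buf.hasRemaining()) {
            int ioSize = Math.min(buf.remaining(), chunkSize);
            buf.limit(buf.position() + ioSize);
            written += ch.write(buf);
            buf.limit(originalLimit); // reset the limit for the next pass
        }
        return written;
    }

    // Scaled-down reproduction: an 80-byte chunk with a 64-byte write chunk size
    // stands in for the report's 80 MB chunk vs. the 64 MB BUFFER_WRITE_CHUNK_SIZE.
    static int run(boolean fixed) throws IOException {
        ByteBuffer buf = ByteBuffer.allocate(80);
        WritableByteChannel ch = Channels.newChannel(new ByteArrayOutputStream());
        return fixed ? fixedWrite(buf, ch, 64) : buggyWrite(buf, ch, 64);
    }

    public static void main(String[] args) throws IOException {
        System.out.println("buggy wrote " + run(false) + " bytes"); // 64: 16 bytes lost
        System.out.println("fixed wrote " + run(true) + " bytes");  // all 80
    }
}
```

The buggy variant writes only the first 64 bytes of the 80-byte buffer, matching the "will lose 16*1024*1024 bytes" observation at full scale.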
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org