Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2019/03/11 16:27:34 UTC

[GitHub] [spark] ankuriitg commented on a change in pull request #23453: [SPARK-26089][CORE] Handle corruption in large shuffle blocks

ankuriitg commented on a change in pull request #23453: [SPARK-26089][CORE] Handle corruption in large shuffle blocks
URL: https://github.com/apache/spark/pull/23453#discussion_r264317097
 
 

 ##########
 File path: core/src/main/scala/org/apache/spark/storage/ShuffleBlockFetcherIterator.scala
 ##########
 @@ -466,16 +469,19 @@ final class ShuffleBlockFetcherIterator(
           var isStreamCopied: Boolean = false
           try {
             input = streamWrapper(blockId, in)
-            // Only copy the stream if it's wrapped by compression or encryption, also the size of
-            // block is small (the decompressed block is smaller than maxBytesInFlight)
-            if (detectCorrupt && !input.eq(in) && size < maxBytesInFlight / 3) {
+            // Only copy the stream if it's wrapped by compression or encryption, up to a size of
+            // maxBytesInFlight/3. If the stream is longer, corruption will be caught while reading
+            // the stream.
+            streamCompressedOrEncrypted = !input.eq(in)
+            if (streamCompressedOrEncrypted && detectCorruptUseExtraMemory) {
               isStreamCopied = true
-              val out = new ChunkedByteBufferOutputStream(64 * 1024, ByteBuffer.allocate)
-              // Decompress the whole block at once to detect any corruption, which could increase
-              // the memory usage tne potential increase the chance of OOM.
+              // Decompress up to maxBytesInFlight/3 of the block at once to detect any corruption;
+              // this could increase the memory usage and potentially the chance of OOM.
               // TODO: manage the memory used here, and spill it into disk in case of OOM.
-              Utils.copyStream(input, out, closeStreams = true)
-              input = out.toChunkedByteBuffer.toInputStream(dispose = true)
+              val (fullyCopied: Boolean, mergedStream: InputStream) = Utils.copyStreamUpTo(
+                input, maxBytesInFlight / 3)
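
 The added comments above describe buffering a compressed or encrypted stream only up to maxBytesInFlight/3, so corruption in small blocks is caught eagerly while larger blocks fall back to lazy detection during reading. Below is a minimal sketch of that idea with assumed names (`copyUpToSketch` and `maxCopyBytes` are illustrative, not the PR's `Utils.copyStreamUpTo`, and a plain byte-array buffer stands in for Spark's chunked buffer):

```scala
import java.io.{ByteArrayInputStream, ByteArrayOutputStream, InputStream, SequenceInputStream}

// Buffer the stream only while it stays under `maxCopyBytes`; past that point,
// hand back the buffered prefix chained to the rest of the (still lazy) stream.
def copyUpToSketch(in: InputStream, maxCopyBytes: Long): (Boolean, InputStream) = {
  val buffer = new ByteArrayOutputStream()
  val chunk = new Array[Byte](64 * 1024)
  var copied = 0L
  var eof = false
  while (!eof && copied < maxCopyBytes) {
    val toRead = math.min(chunk.length.toLong, maxCopyBytes - copied).toInt
    val read = in.read(chunk, 0, toRead)
    if (read == -1) {
      eof = true
    } else {
      buffer.write(chunk, 0, read)
      copied += read
    }
  }
  val prefix = new ByteArrayInputStream(buffer.toByteArray)
  if (eof) {
    in.close()
    (true, prefix)  // the whole block fit under the cap and was read eagerly
  } else {
    // Block exceeds the cap: any corruption beyond the prefix surfaces while reading.
    (false, new SequenceInputStream(prefix, in))
  }
}
```

 Returning the buffered prefix chained to the remaining stream keeps the eager corruption check bounded in memory while still delivering the full block to the caller.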
 
 Review comment:
   I think `in.close()` is needed if there is an exception while creating the wrapped stream. The first time I saw `isStreamCopied` I was also confused; looking at it more closely now, I realize that it is not doing what it is supposed to do.
   
   I have removed `isStreamCopied` and instead used another condition to decide when to close the stream. Please check and let me know if it makes sense.
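
A minimal sketch of the close-on-exception pattern described in the comment, using a hypothetical helper with simplified types (not the actual change in the PR):

```scala
import java.io.InputStream

// If creating the decompressing/decrypting wrapper throws, the raw stream `in`
// was never handed off, so it must be closed here to avoid leaking it.
def wrapOrClose[B](blockId: B, in: InputStream,
                   streamWrapper: (B, InputStream) => InputStream): InputStream = {
  try {
    streamWrapper(blockId, in)
  } catch {
    case t: Throwable =>
      in.close()
      throw t
  }
}
```

In this sketch, once the wrapper is created successfully, closing it later also closes the underlying stream (as typical filter streams do), so the explicit `in.close()` is only needed on the failure path.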
