Posted to issues@flink.apache.org by GitBox <gi...@apache.org> on 2020/04/26 03:27:06 UTC

[GitHub] [flink] zhijiangW commented on a change in pull request #11814: [FLINK-17218][tests] Adding recoverable failures and correctness chec…

zhijiangW commented on a change in pull request #11814:
URL: https://github.com/apache/flink/pull/11814#discussion_r415202630



##########
File path: flink-runtime/src/main/java/org/apache/flink/runtime/io/network/api/serialization/SpillingAdaptiveSpanningRecordDeserializer.java
##########
@@ -597,12 +597,12 @@ private void addNextChunkFromMemorySegment(MemorySegment segment, int offset, in
 				throw new UnsupportedOperationException("Unaligned checkpoint currently do not support spilled " +
 					"records.");
 			} else if (recordLength != -1) {
-				int leftOverSize = leftOverLimit - leftOverStart;
+				int leftOverSize = leftOverData != null ? leftOverLimit - leftOverStart : 0;

Review comment:
       Thanks for finding this bug!
   
    I think the root cause was the state inconsistency between `{leftOverLimit, leftOverStart}` and `leftOverData`. During `#clear()` we only reset `leftOverData` to null, but we do not reset `{leftOverLimit, leftOverStart}`, which are derived from `leftOverData`. So we can check the condition by `leftOverData` alone. Maybe we should also reset `{leftOverLimit, leftOverStart}` during `#clear()` to keep all three fields consistent.
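    
    To illustrate the pattern (this is a hypothetical minimal sketch, not the actual Flink class): a holder that tracks a data reference plus derived start/limit offsets. If `clear()` nulls the reference but leaves the offsets stale, `leftOverLimit - leftOverStart` can report a nonzero size afterwards. Resetting all three fields together removes the inconsistency:
    
    ```java
    // Hypothetical sketch of the bug pattern discussed above; field names mirror
    // the patch, but the class itself is invented for illustration.
    class LeftOverBuffer {
        byte[] leftOverData;
        int leftOverStart;
        int leftOverLimit;

        void set(byte[] data, int start, int limit) {
            leftOverData = data;
            leftOverStart = start;
            leftOverLimit = limit;
        }

        void clear() {
            // Suggested fix: reset the derived offsets together with the data
            // reference so all three fields stay consistent after clear().
            leftOverData = null;
            leftOverStart = 0;
            leftOverLimit = 0;
        }

        int leftOverSize() {
            // Defensive guard mirroring the patch: treat a null reference as
            // "no leftover data" regardless of the offset fields.
            return leftOverData != null ? leftOverLimit - leftOverStart : 0;
        }
    }
    ```
    
    With the guard in `leftOverSize()` the size is correct even if `clear()` forgets the offsets, but resetting them in `clear()` as well keeps the invariant explicit rather than relying on every reader to check.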




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org