Posted to issues@flink.apache.org by GitBox <gi...@apache.org> on 2022/07/16 05:44:05 UTC

[GitHub] [flink] leozhangsr commented on a diff in pull request #20234: [FLINK-28475] [Connector/kafka] Stopping offset can be 0

leozhangsr commented on code in PR #20234:
URL: https://github.com/apache/flink/pull/20234#discussion_r922636336


##########
flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/connector/kafka/source/split/KafkaPartitionSplitSerializerTest.java:
##########
@@ -0,0 +1,54 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.kafka.source.split;
+
+import org.apache.kafka.common.TopicPartition;
+import org.assertj.core.util.Lists;
+import org.junit.jupiter.api.Test;
+
+import java.io.IOException;
+import java.util.List;
+
+import static org.assertj.core.api.Assertions.assertThat;
+
+/** Tests for {@link KafkaPartitionSplitSerializer}. */
+public class KafkaPartitionSplitSerializerTest {

Review Comment:
   yes, that's what we want: if the stopping offset of a split is set to 0, no message from that split will be consumed.
   I checked this code again; here is an explanation of my change and test case.
   As we know, a split is defined by the driver, then serialized and sent to the task manager, where it is handled by KafkaPartitionSplitReader. The split reader consumes messages and stops at the stopping offset if one is set.
   
   To achieve this, the following key steps have to be validated:
   1. The split is correctly serialized and sent to the split reader.
   2. The split reader parses the split correctly.
   3. The split reader consumes messages and stops at the stopping offset.
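   To illustrate step 1, here is a minimal sketch of a serialize/deserialize round trip in which a stopping offset of 0 must survive intact. The `Split` record and the wire format below are simplified stand-ins for illustration only, not Flink's actual KafkaPartitionSplit or its serializer:

   ```java
   import java.io.ByteArrayInputStream;
   import java.io.ByteArrayOutputStream;
   import java.io.DataInputStream;
   import java.io.DataOutputStream;
   import java.io.IOException;

   public class SplitRoundTripSketch {

       // Simplified stand-in for KafkaPartitionSplit; the real class wraps a
       // TopicPartition and more. Names here are illustrative assumptions.
       record Split(String topic, int partition, long startingOffset, long stoppingOffset) {}

       static byte[] serialize(Split split) throws IOException {
           ByteArrayOutputStream baos = new ByteArrayOutputStream();
           try (DataOutputStream out = new DataOutputStream(baos)) {
               out.writeUTF(split.topic());
               out.writeInt(split.partition());
               out.writeLong(split.startingOffset());
               out.writeLong(split.stoppingOffset());
           }
           return baos.toByteArray();
       }

       static Split deserialize(byte[] bytes) throws IOException {
           try (DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes))) {
               return new Split(in.readUTF(), in.readInt(), in.readLong(), in.readLong());
           }
       }

       public static void main(String[] args) throws IOException {
           // The regression case from FLINK-28475: a stopping offset of 0 must
           // survive the round trip instead of being dropped as "unset".
           Split original = new Split("topic", 0, 0L, 0L);
           Split restored = deserialize(serialize(original));
           if (restored.stoppingOffset() != 0L) {
               throw new AssertionError("stopping offset lost in round trip");
           }
       }
   }
   ```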
   
   Step 1 is validated by the test case I added.
   Steps 2 and 3 can be validated by KafkaPartitionSplitReaderTest.testHandleSplitChangesAndFetch (assignSplitsAndFetchUntilFinish), which makes sure the split reader stops at the stopping offset. That test sets the stopping offset to 10 (NUM_RECORDS_PER_PARTITION). Though the stopping offset there is not 0, the check still holds as long as the split is parsed correctly: KafkaPartitionSplitReader.parseStoppingOffsets requires the stopping offset to be >= 0, LATEST_OFFSET, or COMMITTED_OFFSET, and KafkaPartitionSplit.getStoppingOffset accepts the same conditions after my modification.
   Steps 2 and 3 are also validated by KafkaPartitionSplitReaderTest.testAssignEmptySplit for the empty-split situation. Generally, when the stopping offset is 0 the starting offset might be 0 too, which means it's an empty split: it should consume nothing and stop. In that test case, the empty split's starting offset is LATEST_OFFSET and its stopping offset is LATEST_OFFSET.
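   The condition in question can be sketched as follows. This is a hedged illustration of the fixed accessor, not Flink's actual source; the sentinel constants are modeled on KafkaPartitionSplit and may not match Flink's exact values:

   ```java
   import java.util.Optional;

   public class StoppingOffsetSketch {

       // Sentinel values modeled on KafkaPartitionSplit (illustrative assumptions).
       static final long LATEST_OFFSET = -1L;
       static final long COMMITTED_OFFSET = -3L;
       static final long NO_STOPPING_OFFSET = Long.MIN_VALUE;

       // After the fix, 0 is a valid stopping offset (consume nothing and stop)
       // rather than being treated as "no stopping offset set".
       static Optional<Long> getStoppingOffset(long stoppingOffset) {
           return stoppingOffset >= 0
                           || stoppingOffset == LATEST_OFFSET
                           || stoppingOffset == COMMITTED_OFFSET
                   ? Optional.of(stoppingOffset)
                   : Optional.empty();
       }

       public static void main(String[] args) {
           if (!getStoppingOffset(0L).isPresent()) {
               throw new AssertionError("stopping offset 0 must be accepted");
           }
       }
   }
   ```

   With a `> 0` comparison instead of `>= 0`, a stopping offset of 0 would be silently dropped and the reader would never stop there, which is the bug this PR fixes.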
   
   I only added a test case for step 1, since it was never tested before. I think the existing tests already cover the situation (if the stopping offset of a split is set to 0, no message from that split will be consumed). Do you agree? Should I add a test case just for stopping offset = 0? Would that be a little redundant?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org