Posted to issues@flink.apache.org by GitBox <gi...@apache.org> on 2022/01/05 03:03:11 UTC

[GitHub] [flink] ashulin opened a new pull request #18266: [FLINK-25510][Connectors / Kafka][tests] Add using committed offsets test cases for KafkaPartitionSplitReader

ashulin opened a new pull request #18266:
URL: https://github.com/apache/flink/pull/18266


   <!--
   *Thank you very much for contributing to Apache Flink - we are happy that you want to help us improve Flink. To help the community review your contribution in the best possible way, please go through the checklist below, which will get the contribution into a shape in which it can be best reviewed.*
   
   *Please understand that we do not do this to make contributions to Flink a hassle. In order to uphold a high standard of quality for code contributions, while at the same time managing a large number of contributions, we need contributors to prepare the contributions well, and give reviewers enough contextual information for the review. Please also understand that contributions that do not follow this guide will take longer to review and thus typically be picked up with lower priority by the community.*
   
   ## Contribution Checklist
   
     - Make sure that the pull request corresponds to a [JIRA issue](https://issues.apache.org/jira/projects/FLINK/issues). Exceptions are made for typos in JavaDoc or documentation files, which need no JIRA issue.
     
     - Name the pull request in the form "[FLINK-XXXX] [component] Title of the pull request", where *FLINK-XXXX* should be replaced by the actual issue number. Skip *component* if you are unsure about which is the best component.
      Typo fixes that have no associated JIRA issue should be named following this pattern: `[hotfix] [docs] Fix typo in event time introduction` or `[hotfix] [javadocs] Expand JavaDoc for PunctuatedWatermarkGenerator`.
   
     - Fill out the template below to describe the changes contributed by the pull request. That will give reviewers the context they need to do the review.
     
     - Make sure that the change passes the automated tests, i.e., `mvn clean verify` passes. You can set up Azure Pipelines CI to do that following [this guide](https://cwiki.apache.org/confluence/display/FLINK/Azure+Pipelines#AzurePipelines-Tutorial:SettingupAzurePipelinesforaforkoftheFlinkrepository).
   
     - Each pull request should address only one issue, not mix up code from multiple issues.
     
     - Each commit in the pull request has a meaningful commit message (including the JIRA id).
   
     - Once all items of the checklist are addressed, remove the above text and this checklist, leaving only the filled out template below.
   
   
   **(The sections below can be removed for hotfixes of typos)**
   -->
   
   ## What is the purpose of the change
   
   `KafkaPartitionSplitReaderTest` is missing test cases for splits whose `startingOffset` is `COMMITTED_OFFSET`.
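   
   For context, such a split carries the sentinel `KafkaPartitionSplit.COMMITTED_OFFSET` as its starting offset, which the reader resolves against the consumer group's committed offset. A minimal sketch of constructing one (the topic name here is illustrative):
   
   ```java
   import org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit;
   import org.apache.kafka.common.TopicPartition;
   
   // A split that starts from the group's committed offset; how the reader
   // resolves it depends on the configured auto.offset.reset strategy.
   KafkaPartitionSplit committedOffsetSplit =
           new KafkaPartitionSplit(
                   new TopicPartition("test-topic", 0), KafkaPartitionSplit.COMMITTED_OFFSET);
   ```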
   
   ## Brief change log
   
   Add test cases for `KafkaPartitionSplitReader` that start reading from committed offsets.
   
   ## Verifying this change
   
   
   ## Does this pull request potentially affect one of the following parts:
   
     - Dependencies (does it add or upgrade a dependency): (yes / **no**)
     - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (yes / **no**)
     - The serializers: (yes / **no** / don't know)
     - The runtime per-record code paths (performance sensitive): (yes / **no** / don't know)
     - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / **no** / don't know)
     - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
     - Does this pull request introduce a new feature? (yes / **no**)
     - If yes, how is the feature documented? (not applicable / docs / JavaDocs / not documented)
   





[GitHub] [flink] fapaul merged pull request #18266: [FLINK-25510][Connectors / Kafka][tests] Add using committed offsets test cases for KafkaPartitionSplitReader

Posted by GitBox <gi...@apache.org>.
fapaul merged pull request #18266:
URL: https://github.com/apache/flink/pull/18266


   





[GitHub] [flink] PatrickRen commented on a change in pull request #18266: [FLINK-25510][Connectors / Kafka][tests] Add using committed offsets test cases for KafkaPartitionSplitReader

Posted by GitBox <gi...@apache.org>.
PatrickRen commented on a change in pull request #18266:
URL: https://github.com/apache/flink/pull/18266#discussion_r779387799



##########
File path: flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/connector/kafka/source/reader/KafkaPartitionSplitReaderTest.java
##########
@@ -248,6 +253,100 @@ public void testAssignEmptySplit() throws Exception {
         assertTrue(recordsWithSplitIds.finishedSplits().isEmpty());
     }
 
+    @Test
+    public void testUsingCommittedOffsetsWithNoneOffsetResetStrategy() {
+        final Properties props = new Properties();
+        props.setProperty(
+                ConsumerConfig.GROUP_ID_CONFIG, "using-committed-offset-with-none-offset-reset");
+        KafkaPartitionSplitReader reader =
+                createReader(props, UnregisteredMetricsGroup.createSourceReaderMetricGroup());
+        // Add a committed offset split and catch the expected KafkaException
+        final KafkaException undefinedOffsetException =
+                Assertions.assertThrows(
+                        KafkaException.class,
+                        () ->
+                                reader.handleSplitsChanges(
+                                        new SplitsAddition<>(
+                                                Collections.singletonList(
+                                                        new KafkaPartitionSplit(
+                                                                new TopicPartition(TOPIC1, 0),
+                                                                KafkaPartitionSplit
+                                                                        .COMMITTED_OFFSET)))));
+        MatcherAssert.assertThat(
+                undefinedOffsetException.getMessage(),
+                CoreMatchers.containsString("Undefined offset with no reset policy for partition"));
+    }
+
+    @Test
+    public void testUsingCommittedOffsetsWithEarliestOffsetResetStrategy() throws Throwable {
+        MetricListener metricListener = new MetricListener();
+        final Properties props = new Properties();
+        props.setProperty(
+                ConsumerConfig.AUTO_OFFSET_RESET_CONFIG,
+                OffsetResetStrategy.EARLIEST.name().toLowerCase());
+        props.setProperty(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "1");
+        props.setProperty(
+                ConsumerConfig.GROUP_ID_CONFIG,
+                "using-committed-offset-with-earliest-offset-reset");
+        KafkaPartitionSplitReader reader =
+                createReader(
+                        props,
+                        InternalSourceReaderMetricGroup.mock(metricListener.getMetricGroup()));
+        // Add a committed offset split
+        reader.handleSplitsChanges(
+                new SplitsAddition<>(
+                        Collections.singletonList(
+                                new KafkaPartitionSplit(
+                                        new TopicPartition(TOPIC1, 0),
+                                        KafkaPartitionSplit.COMMITTED_OFFSET))));
+        // pendingRecords should not have been registered yet, because registration is lazy
+        assertFalse(metricListener.getGauge(MetricNames.PENDING_RECORDS).isPresent());
+        // Trigger first fetch
+        reader.fetch();
+        final Optional<Gauge<Long>> pendingRecords =
+                metricListener.getGauge(MetricNames.PENDING_RECORDS);
+        assertTrue(pendingRecords.isPresent());
+        // Validate pendingRecords
+        assertNotNull(pendingRecords);
+        assertEquals(NUM_RECORDS_PER_PARTITION - 1, (long) pendingRecords.get().getValue());
+        for (int i = 1; i < NUM_RECORDS_PER_PARTITION; i++) {
+            reader.fetch();
+            assertEquals(NUM_RECORDS_PER_PARTITION - i - 1, (long) pendingRecords.get().getValue());
+        }
+    }
+
+    @Test
+    public void testUsingCommittedOffsetsWithLatestOffsetResetStrategy() throws Throwable {
+        final Properties props = new Properties();
+        props.setProperty(
+                ConsumerConfig.AUTO_OFFSET_RESET_CONFIG,
+                OffsetResetStrategy.LATEST.name().toLowerCase());
+        props.setProperty(
+                ConsumerConfig.GROUP_ID_CONFIG, "using-committed-offset-with-latest-offset-reset");
+        KafkaPartitionSplitReader reader =
+                createReader(props, UnregisteredMetricsGroup.createSourceReaderMetricGroup());
+        // Add empty latest offset reset split
+        final KafkaPartitionSplit latestOffsetResetEmptySplit =
+                new KafkaPartitionSplit(
+                        new TopicPartition(TOPIC1, 0),
+                        KafkaPartitionSplit.COMMITTED_OFFSET,
+                        KafkaPartitionSplit.LATEST_OFFSET);
+        final KafkaPartitionSplit latestOffsetResetNormalSplit =
+                new KafkaPartitionSplit(
+                        new TopicPartition(TOPIC2, 0), KafkaPartitionSplit.COMMITTED_OFFSET);
+
+        reader.handleSplitsChanges(
+                new SplitsAddition<>(
+                        Arrays.asList(latestOffsetResetEmptySplit, latestOffsetResetNormalSplit)));
+
+        // Fetch and check latest offset reset split is added to finished splits
+        RecordsWithSplitIds<ConsumerRecord<byte[], byte[]>> recordsWithSplitIds = reader.fetch();
+        assertTrue(
+                recordsWithSplitIds
+                        .finishedSplits()
+                        .contains(latestOffsetResetEmptySplit.splitId()));

Review comment:
       What about making the split not bounded (not setting the ending offset)?
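    
    A sketch of the two variants, using only constructors that already appear in this diff; the two-argument form leaves the split unbounded (no stopping offset):
    
    ```java
    // Bounded: consumption stops at the latest offset captured for the split.
    KafkaPartitionSplit bounded =
            new KafkaPartitionSplit(
                    new TopicPartition(TOPIC1, 0),
                    KafkaPartitionSplit.COMMITTED_OFFSET,
                    KafkaPartitionSplit.LATEST_OFFSET);
    
    // Unbounded: only a starting offset is set; the reader keeps the split open.
    KafkaPartitionSplit unbounded =
            new KafkaPartitionSplit(
                    new TopicPartition(TOPIC1, 0), KafkaPartitionSplit.COMMITTED_OFFSET);
    ```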







[GitHub] [flink] flinkbot edited a comment on pull request #18266: [FLINK-25510][Connectors / Kafka][tests] Add using committed offsets test cases for KafkaPartitionSplitReader

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18266:
URL: https://github.com/apache/flink/pull/18266#issuecomment-1005342823


   ## CI report:
   
   * 6699bd3e79897191a2e275cf0d78cb6c320edd95 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28959) 
   * 3f0e6264bd1b5d4d447a9a06457c3929f7507253 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29053) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>





[GitHub] [flink] flinkbot edited a comment on pull request #18266: [FLINK-25510][Connectors / Kafka][tests] Add using committed offsets test cases for KafkaPartitionSplitReader

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18266:
URL: https://github.com/apache/flink/pull/18266#issuecomment-1005342823


   ## CI report:
   
   * 6699bd3e79897191a2e275cf0d78cb6c320edd95 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28959) 
   * 3f0e6264bd1b5d4d447a9a06457c3929f7507253 UNKNOWN
   





[GitHub] [flink] flinkbot edited a comment on pull request #18266: [FLINK-25510][Connectors / Kafka][tests] Add using committed offsets test cases for KafkaPartitionSplitReader

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18266:
URL: https://github.com/apache/flink/pull/18266#issuecomment-1005342823


   ## CI report:
   
   * 3f0e6264bd1b5d4d447a9a06457c3929f7507253 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29053) 
   





[GitHub] [flink] flinkbot edited a comment on pull request #18266: [FLINK-25510][Connectors / Kafka][tests] Add using committed offsets test cases for KafkaPartitionSplitReader

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18266:
URL: https://github.com/apache/flink/pull/18266#issuecomment-1005342823


   ## CI report:
   
   * 4a532eabe4e48f4e4e3a7920a75c6a731dac1a0a Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28952) 
   





[GitHub] [flink] flinkbot commented on pull request #18266: [FLINK-25510][Connectors / Kafka][tests] Add using committed offsets test cases for KafkaPartitionSplitReader

Posted by GitBox <gi...@apache.org>.
flinkbot commented on pull request #18266:
URL: https://github.com/apache/flink/pull/18266#issuecomment-1005342823


   ## CI report:
   
   * 4a532eabe4e48f4e4e3a7920a75c6a731dac1a0a UNKNOWN
   





[GitHub] [flink] flinkbot edited a comment on pull request #18266: [FLINK-25510][Connectors / Kafka][tests] Add using committed offsets test cases for KafkaPartitionSplitReader

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18266:
URL: https://github.com/apache/flink/pull/18266#issuecomment-1005342823


   ## CI report:
   
   * 4a532eabe4e48f4e4e3a7920a75c6a731dac1a0a Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28952) 
   * 6699bd3e79897191a2e275cf0d78cb6c320edd95 UNKNOWN
   





[GitHub] [flink] PatrickRen commented on a change in pull request #18266: [FLINK-25510][Connectors / Kafka][tests] Add using committed offsets test cases for KafkaPartitionSplitReader

Posted by GitBox <gi...@apache.org>.
PatrickRen commented on a change in pull request #18266:
URL: https://github.com/apache/flink/pull/18266#discussion_r779358249



##########
File path: flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/connector/kafka/source/reader/KafkaPartitionSplitReaderTest.java
##########
@@ -248,6 +253,100 @@ public void testAssignEmptySplit() throws Exception {
         assertTrue(recordsWithSplitIds.finishedSplits().isEmpty());
     }
 
+    @Test
+    public void testUsingCommittedOffsetsWithNoneOffsetResetStrategy() {
+        final Properties props = new Properties();
+        props.setProperty(
+                ConsumerConfig.GROUP_ID_CONFIG, "using-committed-offset-with-none-offset-reset");
+        KafkaPartitionSplitReader reader =
+                createReader(props, UnregisteredMetricsGroup.createSourceReaderMetricGroup());
+        // Add a committed offset split and catch the expected KafkaException
+        final KafkaException undefinedOffsetException =
+                Assertions.assertThrows(
+                        KafkaException.class,
+                        () ->
+                                reader.handleSplitsChanges(
+                                        new SplitsAddition<>(
+                                                Collections.singletonList(
+                                                        new KafkaPartitionSplit(
+                                                                new TopicPartition(TOPIC1, 0),
+                                                                KafkaPartitionSplit
+                                                                        .COMMITTED_OFFSET)))));
+        MatcherAssert.assertThat(
+                undefinedOffsetException.getMessage(),
+                CoreMatchers.containsString("Undefined offset with no reset policy for partition"));
+    }
+
+    @Test
+    public void testUsingCommittedOffsetsWithEarliestOffsetResetStrategy() throws Throwable {
+        MetricListener metricListener = new MetricListener();
+        final Properties props = new Properties();
+        props.setProperty(
+                ConsumerConfig.AUTO_OFFSET_RESET_CONFIG,
+                OffsetResetStrategy.EARLIEST.name().toLowerCase());
+        props.setProperty(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "1");
+        props.setProperty(
+                ConsumerConfig.GROUP_ID_CONFIG,
+                "using-committed-offset-with-earliest-offset-reset");
+        KafkaPartitionSplitReader reader =
+                createReader(
+                        props,
+                        InternalSourceReaderMetricGroup.mock(metricListener.getMetricGroup()));
+        // Add a committed offset split
+        reader.handleSplitsChanges(
+                new SplitsAddition<>(
+                        Collections.singletonList(
+                                new KafkaPartitionSplit(
+                                        new TopicPartition(TOPIC1, 0),
+                                        KafkaPartitionSplit.COMMITTED_OFFSET))));
+        // pendingRecords should not have been registered yet, because registration is lazy
+        assertFalse(metricListener.getGauge(MetricNames.PENDING_RECORDS).isPresent());
+        // Trigger first fetch
+        reader.fetch();
+        final Optional<Gauge<Long>> pendingRecords =
+                metricListener.getGauge(MetricNames.PENDING_RECORDS);
+        assertTrue(pendingRecords.isPresent());
+        // Validate pendingRecords
+        assertNotNull(pendingRecords);
+        assertEquals(NUM_RECORDS_PER_PARTITION - 1, (long) pendingRecords.get().getValue());
+        for (int i = 1; i < NUM_RECORDS_PER_PARTITION; i++) {
+            reader.fetch();
+            assertEquals(NUM_RECORDS_PER_PARTITION - i - 1, (long) pendingRecords.get().getValue());
+        }
+    }
+
+    @Test
+    public void testUsingCommittedOffsetsWithLatestOffsetResetStrategy() throws Throwable {
+        final Properties props = new Properties();
+        props.setProperty(
+                ConsumerConfig.AUTO_OFFSET_RESET_CONFIG,
+                OffsetResetStrategy.LATEST.name().toLowerCase());
+        props.setProperty(
+                ConsumerConfig.GROUP_ID_CONFIG, "using-committed-offset-with-latest-offset-reset");
+        KafkaPartitionSplitReader reader =
+                createReader(props, UnregisteredMetricsGroup.createSourceReaderMetricGroup());
+        // Add empty latest offset reset split
+        final KafkaPartitionSplit latestOffsetResetEmptySplit =
+                new KafkaPartitionSplit(
+                        new TopicPartition(TOPIC1, 0),
+                        KafkaPartitionSplit.COMMITTED_OFFSET,
+                        KafkaPartitionSplit.LATEST_OFFSET);
+        final KafkaPartitionSplit latestOffsetResetNormalSplit =
+                new KafkaPartitionSplit(
+                        new TopicPartition(TOPIC2, 0), KafkaPartitionSplit.COMMITTED_OFFSET);
+
+        reader.handleSplitsChanges(
+                new SplitsAddition<>(
+                        Arrays.asList(latestOffsetResetEmptySplit, latestOffsetResetNormalSplit)));
+
+        // Fetch and check latest offset reset split is added to finished splits
+        RecordsWithSplitIds<ConsumerRecord<byte[], byte[]>> recordsWithSplitIds = reader.fetch();
+        assertTrue(
+                recordsWithSplitIds
+                        .finishedSplits()
+                        .contains(latestOffsetResetEmptySplit.splitId()));

Review comment:
       One possible solution: 
   - Add a package-visible helper function to expose the consumer in `KafkaPartitionSplitReader`:
   ```java
   @VisibleForTesting
   KafkaConsumer<byte[], byte[]> consumer() {
       return consumer;
   }
   ```
   - Use `consumer.position()` in the test case to check the consuming offset before any fetch(), which is the actual starting offset of the reader.
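    
    A sketch of how that check could look in the test, before any `fetch()` is triggered (this assumes the `consumer()` accessor above is added; `expectedStartingOffset` is a hypothetical placeholder whose value depends on the reset strategy under test):
    
    ```java
    // The consumer's position after handleSplitsChanges() but before any fetch()
    // is the resolved starting offset of the split.
    TopicPartition tp = new TopicPartition(TOPIC1, 0);
    long startingOffset = reader.consumer().position(tp);
    assertEquals(expectedStartingOffset, startingOffset); // hypothetical expected value
    ```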







[GitHub] [flink] PatrickRen commented on a change in pull request #18266: [FLINK-25510][Connectors / Kafka][tests] Add using committed offsets test cases for KafkaPartitionSplitReader

Posted by GitBox <gi...@apache.org>.
PatrickRen commented on a change in pull request #18266:
URL: https://github.com/apache/flink/pull/18266#discussion_r779337802



##########
File path: flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/connector/kafka/source/reader/KafkaPartitionSplitReaderTest.java
##########
@@ -248,6 +253,100 @@ public void testAssignEmptySplit() throws Exception {
         assertTrue(recordsWithSplitIds.finishedSplits().isEmpty());
     }
 
+    @Test
+    public void testUsingCommittedOffsetsWithNoneOffsetResetStrategy() {
+        final Properties props = new Properties();
+        props.setProperty(
+                ConsumerConfig.GROUP_ID_CONFIG, "using-committed-offset-with-none-offset-reset");

Review comment:
    Maybe we can add a comment here describing that this uses a new group ID without any committed offsets, so an exception is expected.
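    
    For example, the comment might read as follows (a sketch, not necessarily the wording adopted):
    
    ```java
    // This group ID is fresh and has no committed offsets, so resolving
    // COMMITTED_OFFSET under the NONE reset strategy used here must fail.
    props.setProperty(
            ConsumerConfig.GROUP_ID_CONFIG, "using-committed-offset-with-none-offset-reset");
    ```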

##########
File path: flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/connector/kafka/source/reader/KafkaPartitionSplitReaderTest.java
##########
@@ -248,6 +253,100 @@ public void testAssignEmptySplit() throws Exception {
         assertTrue(recordsWithSplitIds.finishedSplits().isEmpty());
     }
 
+    @Test
+    public void testUsingCommittedOffsetsWithNoneOffsetResetStrategy() {
+        final Properties props = new Properties();
+        props.setProperty(
+                ConsumerConfig.GROUP_ID_CONFIG, "using-committed-offset-with-none-offset-reset");
+        KafkaPartitionSplitReader reader =
+                createReader(props, UnregisteredMetricsGroup.createSourceReaderMetricGroup());
+        // Add a committed offset split and catch the expected KafkaException
+        final KafkaException undefinedOffsetException =
+                Assertions.assertThrows(
+                        KafkaException.class,
+                        () ->
+                                reader.handleSplitsChanges(
+                                        new SplitsAddition<>(
+                                                Collections.singletonList(
+                                                        new KafkaPartitionSplit(
+                                                                new TopicPartition(TOPIC1, 0),
+                                                                KafkaPartitionSplit
+                                                                        .COMMITTED_OFFSET)))));
+        MatcherAssert.assertThat(
+                undefinedOffsetException.getMessage(),
+                CoreMatchers.containsString("Undefined offset with no reset policy for partition"));
+    }
+
+    @Test
+    public void testUsingCommittedOffsetsWithEarliestOffsetResetStrategy() throws Throwable {
+        MetricListener metricListener = new MetricListener();
+        final Properties props = new Properties();
+        props.setProperty(
+                ConsumerConfig.AUTO_OFFSET_RESET_CONFIG,
+                OffsetResetStrategy.EARLIEST.name().toLowerCase());
+        props.setProperty(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "1");
+        props.setProperty(
+                ConsumerConfig.GROUP_ID_CONFIG,
+                "using-committed-offset-with-earliest-offset-reset");
+        KafkaPartitionSplitReader reader =
+                createReader(
+                        props,
+                        InternalSourceReaderMetricGroup.mock(metricListener.getMetricGroup()));
+        // Add a committed offset split
+        reader.handleSplitsChanges(
+                new SplitsAddition<>(
+                        Collections.singletonList(
+                                new KafkaPartitionSplit(
+                                        new TopicPartition(TOPIC1, 0),
+                                        KafkaPartitionSplit.COMMITTED_OFFSET))));
+        // pendingRecords should not have been registered yet, because registration is lazy
+        assertFalse(metricListener.getGauge(MetricNames.PENDING_RECORDS).isPresent());
+        // Trigger first fetch
+        reader.fetch();
+        final Optional<Gauge<Long>> pendingRecords =
+                metricListener.getGauge(MetricNames.PENDING_RECORDS);
+        assertTrue(pendingRecords.isPresent());
+        // Validate pendingRecords
+        assertNotNull(pendingRecords);
+        assertEquals(NUM_RECORDS_PER_PARTITION - 1, (long) pendingRecords.get().getValue());
+        for (int i = 1; i < NUM_RECORDS_PER_PARTITION; i++) {
+            reader.fetch();
+            assertEquals(NUM_RECORDS_PER_PARTITION - i - 1, (long) pendingRecords.get().getValue());

Review comment:
    This reused snippet actually tests the functionality of the `pendingRecords` metric. I think a more straightforward way in this case is to check whether the first record the reader fetches is at the earliest offset, or whether the consuming position of the partition before any `fetch()` is the earliest offset.
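    
    A sketch of that position-based check (assuming the `consumer()` test accessor suggested elsewhere in this review, and that the test partitions start at offset 0):
    
    ```java
    // With EARLIEST reset and no committed offsets for this fresh group, the
    // resolved starting position should be the beginning of the partition.
    TopicPartition tp = new TopicPartition(TOPIC1, 0);
    assertEquals(0L, reader.consumer().position(tp));
    ```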

##########
File path: flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/connector/kafka/source/reader/KafkaPartitionSplitReaderTest.java
##########
@@ -248,6 +253,100 @@ public void testAssignEmptySplit() throws Exception {
         assertTrue(recordsWithSplitIds.finishedSplits().isEmpty());
     }
 
+    @Test
+    public void testUsingCommittedOffsetsWithNoneOffsetResetStrategy() {
+        final Properties props = new Properties();
+        props.setProperty(
+                ConsumerConfig.GROUP_ID_CONFIG, "using-committed-offset-with-none-offset-reset");
+        KafkaPartitionSplitReader reader =
+                createReader(props, UnregisteredMetricsGroup.createSourceReaderMetricGroup());
+        // Add a committed offset split and catch the expected KafkaException
+        final KafkaException undefinedOffsetException =
+                Assertions.assertThrows(
+                        KafkaException.class,
+                        () ->
+                                reader.handleSplitsChanges(
+                                        new SplitsAddition<>(
+                                                Collections.singletonList(
+                                                        new KafkaPartitionSplit(
+                                                                new TopicPartition(TOPIC1, 0),
+                                                                KafkaPartitionSplit
+                                                                        .COMMITTED_OFFSET)))));
+        MatcherAssert.assertThat(
+                undefinedOffsetException.getMessage(),
+                CoreMatchers.containsString("Undefined offset with no reset policy for partition"));
+    }
+
+    @Test
+    public void testUsingCommittedOffsetsWithEarliestOffsetResetStrategy() throws Throwable {
+        MetricListener metricListener = new MetricListener();
+        final Properties props = new Properties();
+        props.setProperty(
+                ConsumerConfig.AUTO_OFFSET_RESET_CONFIG,
+                OffsetResetStrategy.EARLIEST.name().toLowerCase());
+        props.setProperty(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "1");
+        props.setProperty(
+                ConsumerConfig.GROUP_ID_CONFIG,
+                "using-committed-offset-with-earliest-offset-reset");
+        KafkaPartitionSplitReader reader =
+                createReader(
+                        props,
+                        InternalSourceReaderMetricGroup.mock(metricListener.getMetricGroup()));
+        // Add a committed offset split
+        reader.handleSplitsChanges(
+                new SplitsAddition<>(
+                        Collections.singletonList(
+                                new KafkaPartitionSplit(
+                                        new TopicPartition(TOPIC1, 0),
+                                        KafkaPartitionSplit.COMMITTED_OFFSET))));
+        // pendingRecords should not have been registered yet, because registration is lazy
+        assertFalse(metricListener.getGauge(MetricNames.PENDING_RECORDS).isPresent());
+        // Trigger first fetch
+        reader.fetch();
+        final Optional<Gauge<Long>> pendingRecords =
+                metricListener.getGauge(MetricNames.PENDING_RECORDS);
+        assertTrue(pendingRecords.isPresent());
+        // Validate pendingRecords
+        assertNotNull(pendingRecords);
+        assertEquals(NUM_RECORDS_PER_PARTITION - 1, (long) pendingRecords.get().getValue());
+        for (int i = 1; i < NUM_RECORDS_PER_PARTITION; i++) {
+            reader.fetch();
+            assertEquals(NUM_RECORDS_PER_PARTITION - i - 1, (long) pendingRecords.get().getValue());
+        }
+    }
+
+    @Test
+    public void testUsingCommittedOffsetsWithLatestOffsetResetStrategy() throws Throwable {
+        final Properties props = new Properties();
+        props.setProperty(
+                ConsumerConfig.AUTO_OFFSET_RESET_CONFIG,
+                OffsetResetStrategy.LATEST.name().toLowerCase());
+        props.setProperty(
+                ConsumerConfig.GROUP_ID_CONFIG, "using-committed-offset-with-latest-offset-reset");
+        KafkaPartitionSplitReader reader =
+                createReader(props, UnregisteredMetricsGroup.createSourceReaderMetricGroup());
+        // Add empty latest offset reset split
+        final KafkaPartitionSplit latestOffsetResetEmptySplit =
+                new KafkaPartitionSplit(
+                        new TopicPartition(TOPIC1, 0),
+                        KafkaPartitionSplit.COMMITTED_OFFSET,
+                        KafkaPartitionSplit.LATEST_OFFSET);
+        final KafkaPartitionSplit latestOffsetResetNormalSplit =
+                new KafkaPartitionSplit(
+                        new TopicPartition(TOPIC2, 0), KafkaPartitionSplit.COMMITTED_OFFSET);
+
+        reader.handleSplitsChanges(
+                new SplitsAddition<>(
+                        Arrays.asList(latestOffsetResetEmptySplit, latestOffsetResetNormalSplit)));
+
+        // Fetch and check latest offset reset split is added to finished splits
+        RecordsWithSplitIds<ConsumerRecord<byte[], byte[]>> recordsWithSplitIds = reader.fetch();
+        assertTrue(
+                recordsWithSplitIds
+                        .finishedSplits()
+                        .contains(latestOffsetResetEmptySplit.splitId()));

Review comment:
    I doubt that this case is testing the expected behavior. If the reader doesn't act as expected, for example if the fetcher doesn't respect the `auto.offset.reset` config and starts from the earliest offset, the first split (latestOffsetResetEmptySplit) could still reach the end offset in `reader.fetch()`, be added to the finished splits, and pass this case. Also, I can't see any purpose for the second split (latestOffsetResetNormalSplit) here.
    
    Similar to the case above, I think the correct way is to check whether the position before any `fetch()` is at the latest offset. That is the actual expected behavior of the reader.
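    
    A sketch of that check (again assuming the `consumer()` test accessor; `endOffsets()` is the standard consumer API for the log-end offset):
    
    ```java
    // With LATEST reset, the resolved starting position should equal the current
    // end offset of the partition, so no existing records are replayed.
    TopicPartition tp = new TopicPartition(TOPIC1, 0);
    long endOffset = reader.consumer().endOffsets(Collections.singleton(tp)).get(tp);
    assertEquals(endOffset, reader.consumer().position(tp));
    ```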







[GitHub] [flink] flinkbot edited a comment on pull request #18266: [FLINK-25510][Connectors / Kafka][tests] Add using committed offsets test cases for KafkaPartitionSplitReader

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18266:
URL: https://github.com/apache/flink/pull/18266#issuecomment-1005342823


   ## CI report:
   
   * 4a532eabe4e48f4e4e3a7920a75c6a731dac1a0a Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28952) 
   





[GitHub] [flink] ashulin commented on a change in pull request #18266: [FLINK-25510][Connectors / Kafka][tests] Add using committed offsets test cases for KafkaPartitionSplitReader

Posted by GitBox <gi...@apache.org>.
ashulin commented on a change in pull request #18266:
URL: https://github.com/apache/flink/pull/18266#discussion_r779398718



##########
File path: flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/connector/kafka/source/reader/KafkaPartitionSplitReaderTest.java
##########
@@ -248,6 +253,100 @@ public void testAssignEmptySplit() throws Exception {
         assertTrue(recordsWithSplitIds.finishedSplits().isEmpty());
     }
 
+    @Test
+    public void testUsingCommittedOffsetsWithNoneOffsetResetStrategy() {
+        final Properties props = new Properties();
+        props.setProperty(
+                ConsumerConfig.GROUP_ID_CONFIG, "using-committed-offset-with-none-offset-reset");
+        KafkaPartitionSplitReader reader =
+                createReader(props, UnregisteredMetricsGroup.createSourceReaderMetricGroup());
+        // Add a committed offset split and catch the expected KafkaException
+        final KafkaException undefinedOffsetException =
+                Assertions.assertThrows(
+                        KafkaException.class,
+                        () ->
+                                reader.handleSplitsChanges(
+                                        new SplitsAddition<>(
+                                                Collections.singletonList(
+                                                        new KafkaPartitionSplit(
+                                                                new TopicPartition(TOPIC1, 0),
+                                                                KafkaPartitionSplit
+                                                                        .COMMITTED_OFFSET)))));
+        MatcherAssert.assertThat(
+                undefinedOffsetException.getMessage(),
+                CoreMatchers.containsString("Undefined offset with no reset policy for partition"));
+    }
+
+    @Test
+    public void testUsingCommittedOffsetsWithEarliestOffsetResetStrategy() throws Throwable {
+        MetricListener metricListener = new MetricListener();
+        final Properties props = new Properties();
+        props.setProperty(
+                ConsumerConfig.AUTO_OFFSET_RESET_CONFIG,
+                OffsetResetStrategy.EARLIEST.name().toLowerCase());
+        props.setProperty(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "1");
+        props.setProperty(
+                ConsumerConfig.GROUP_ID_CONFIG,
+                "using-committed-offset-with-earliest-offset-reset");
+        KafkaPartitionSplitReader reader =
+                createReader(
+                        props,
+                        InternalSourceReaderMetricGroup.mock(metricListener.getMetricGroup()));
+        // Add a committed offset split
+        reader.handleSplitsChanges(
+                new SplitsAddition<>(
+                        Collections.singletonList(
+                                new KafkaPartitionSplit(
+                                        new TopicPartition(TOPIC1, 0),
+                                        KafkaPartitionSplit.COMMITTED_OFFSET))));
+        // pendingRecords should not have been registered yet, because registration is lazy
+        assertFalse(metricListener.getGauge(MetricNames.PENDING_RECORDS).isPresent());
+        // Trigger first fetch
+        reader.fetch();
+        final Optional<Gauge<Long>> pendingRecords =
+                metricListener.getGauge(MetricNames.PENDING_RECORDS);
+        assertTrue(pendingRecords.isPresent());
+        // Validate pendingRecords
+        assertNotNull(pendingRecords);
+        assertEquals(NUM_RECORDS_PER_PARTITION - 1, (long) pendingRecords.get().getValue());
+        for (int i = 1; i < NUM_RECORDS_PER_PARTITION; i++) {
+            reader.fetch();
+            assertEquals(NUM_RECORDS_PER_PARTITION - i - 1, (long) pendingRecords.get().getValue());
+        }
+    }
+
+    @Test
+    public void testUsingCommittedOffsetsWithLatestOffsetResetStrategy() throws Throwable {
+        final Properties props = new Properties();
+        props.setProperty(
+                ConsumerConfig.AUTO_OFFSET_RESET_CONFIG,
+                OffsetResetStrategy.LATEST.name().toLowerCase());
+        props.setProperty(
+                ConsumerConfig.GROUP_ID_CONFIG, "using-committed-offset-with-latest-offset-reset");
+        KafkaPartitionSplitReader reader =
+                createReader(props, UnregisteredMetricsGroup.createSourceReaderMetricGroup());
+        // Add empty latest offset reset split
+        final KafkaPartitionSplit latestOffsetResetEmptySplit =
+                new KafkaPartitionSplit(
+                        new TopicPartition(TOPIC1, 0),
+                        KafkaPartitionSplit.COMMITTED_OFFSET,
+                        KafkaPartitionSplit.LATEST_OFFSET);
+        final KafkaPartitionSplit latestOffsetResetNormalSplit =
+                new KafkaPartitionSplit(
+                        new TopicPartition(TOPIC2, 0), KafkaPartitionSplit.COMMITTED_OFFSET);
+
+        reader.handleSplitsChanges(
+                new SplitsAddition<>(
+                        Arrays.asList(latestOffsetResetEmptySplit, latestOffsetResetNormalSplit)));
+
+        // Fetch and check latest offset reset split is added to finished splits
+        RecordsWithSplitIds<ConsumerRecord<byte[], byte[]>> recordsWithSplitIds = reader.fetch();
+        assertTrue(
+                recordsWithSplitIds
+                        .finishedSplits()
+                        .contains(latestOffsetResetEmptySplit.splitId()));

Review comment:
    Thank you for the code review; I will fix this soon.







[GitHub] [flink] flinkbot edited a comment on pull request #18266: [FLINK-25510][Connectors / Kafka][tests] Add using committed offsets test cases for KafkaPartitionSplitReader

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18266:
URL: https://github.com/apache/flink/pull/18266#issuecomment-1005342823


   ## CI report:
   
   * 6699bd3e79897191a2e275cf0d78cb6c320edd95 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28959) 
   





[GitHub] [flink] ashulin commented on a change in pull request #18266: [FLINK-25510][Connectors / Kafka][tests] Add using committed offsets test cases for KafkaPartitionSplitReader

Posted by GitBox <gi...@apache.org>.
ashulin commented on a change in pull request #18266:
URL: https://github.com/apache/flink/pull/18266#discussion_r779354166



##########
File path: flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/connector/kafka/source/reader/KafkaPartitionSplitReaderTest.java
##########
@@ -248,6 +253,100 @@ public void testAssignEmptySplit() throws Exception {
         assertTrue(recordsWithSplitIds.finishedSplits().isEmpty());
     }
 
+    @Test
+    public void testUsingCommittedOffsetsWithNoneOffsetResetStrategy() {
+        final Properties props = new Properties();
+        props.setProperty(
+                ConsumerConfig.GROUP_ID_CONFIG, "using-committed-offset-with-none-offset-reset");
+        KafkaPartitionSplitReader reader =
+                createReader(props, UnregisteredMetricsGroup.createSourceReaderMetricGroup());
+        // Add a committed offset split and catch the expected KafkaException
+        final KafkaException undefinedOffsetException =
+                Assertions.assertThrows(
+                        KafkaException.class,
+                        () ->
+                                reader.handleSplitsChanges(
+                                        new SplitsAddition<>(
+                                                Collections.singletonList(
+                                                        new KafkaPartitionSplit(
+                                                                new TopicPartition(TOPIC1, 0),
+                                                                KafkaPartitionSplit
+                                                                        .COMMITTED_OFFSET)))));
+        MatcherAssert.assertThat(
+                undefinedOffsetException.getMessage(),
+                CoreMatchers.containsString("Undefined offset with no reset policy for partition"));
+    }
+
+    @Test
+    public void testUsingCommittedOffsetsWithEarliestOffsetResetStrategy() throws Throwable {
+        MetricListener metricListener = new MetricListener();
+        final Properties props = new Properties();
+        props.setProperty(
+                ConsumerConfig.AUTO_OFFSET_RESET_CONFIG,
+                OffsetResetStrategy.EARLIEST.name().toLowerCase());
+        props.setProperty(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "1");
+        props.setProperty(
+                ConsumerConfig.GROUP_ID_CONFIG,
+                "using-committed-offset-with-earliest-offset-reset");
+        KafkaPartitionSplitReader reader =
+                createReader(
+                        props,
+                        InternalSourceReaderMetricGroup.mock(metricListener.getMetricGroup()));
+        // Add a committed offset split
+        reader.handleSplitsChanges(
+                new SplitsAddition<>(
+                        Collections.singletonList(
+                                new KafkaPartitionSplit(
+                                        new TopicPartition(TOPIC1, 0),
+                                        KafkaPartitionSplit.COMMITTED_OFFSET))));
+        // pendingRecords should not have been registered yet, because registration is lazy
+        assertFalse(metricListener.getGauge(MetricNames.PENDING_RECORDS).isPresent());
+        // Trigger first fetch
+        reader.fetch();
+        final Optional<Gauge<Long>> pendingRecords =
+                metricListener.getGauge(MetricNames.PENDING_RECORDS);
+        assertTrue(pendingRecords.isPresent());
+        // Validate pendingRecords
+        assertNotNull(pendingRecords);
+        assertEquals(NUM_RECORDS_PER_PARTITION - 1, (long) pendingRecords.get().getValue());
+        for (int i = 1; i < NUM_RECORDS_PER_PARTITION; i++) {
+            reader.fetch();
+            assertEquals(NUM_RECORDS_PER_PARTITION - i - 1, (long) pendingRecords.get().getValue());
+        }
+    }
+
+    @Test
+    public void testUsingCommittedOffsetsWithLatestOffsetResetStrategy() throws Throwable {
+        final Properties props = new Properties();
+        props.setProperty(
+                ConsumerConfig.AUTO_OFFSET_RESET_CONFIG,
+                OffsetResetStrategy.LATEST.name().toLowerCase());
+        props.setProperty(
+                ConsumerConfig.GROUP_ID_CONFIG, "using-committed-offset-with-latest-offset-reset");
+        KafkaPartitionSplitReader reader =
+                createReader(props, UnregisteredMetricsGroup.createSourceReaderMetricGroup());
+        // Add empty latest offset reset split
+        final KafkaPartitionSplit latestOffsetResetEmptySplit =
+                new KafkaPartitionSplit(
+                        new TopicPartition(TOPIC1, 0),
+                        KafkaPartitionSplit.COMMITTED_OFFSET,
+                        KafkaPartitionSplit.LATEST_OFFSET);
+        final KafkaPartitionSplit latestOffsetResetNormalSplit =
+                new KafkaPartitionSplit(
+                        new TopicPartition(TOPIC2, 0), KafkaPartitionSplit.COMMITTED_OFFSET);
+
+        reader.handleSplitsChanges(
+                new SplitsAddition<>(
+                        Arrays.asList(latestOffsetResetEmptySplit, latestOffsetResetNormalSplit)));
+
+        // Fetch and check latest offset reset split is added to finished splits
+        RecordsWithSplitIds<ConsumerRecord<byte[], byte[]>> recordsWithSplitIds = reader.fetch();
+        assertTrue(
+                recordsWithSplitIds
+                        .finishedSplits()
+                        .contains(latestOffsetResetEmptySplit.splitId()));

Review comment:
    `latestOffsetResetEmptySplit` will be added to the finished splits, but if there is no unfinished split, the `consumer.poll()` inside `reader.fetch()` will throw an exception.
    I also couldn't find a way to obtain the finished splits without calling `reader.fetch()`, so the test is written this way.







[GitHub] [flink] flinkbot edited a comment on pull request #18266: [FLINK-25510][Connectors / Kafka][tests] Add using committed offsets test cases for KafkaPartitionSplitReader

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18266:
URL: https://github.com/apache/flink/pull/18266#issuecomment-1005342823


   ## CI report:
   
   * 4a532eabe4e48f4e4e3a7920a75c6a731dac1a0a Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28952) 
   * 6699bd3e79897191a2e275cf0d78cb6c320edd95 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28959) 
   





[GitHub] [flink] ashulin commented on a change in pull request #18266: [FLINK-25510][Connectors / Kafka][tests] Add using committed offsets test cases for KafkaPartitionSplitReader

Posted by GitBox <gi...@apache.org>.
ashulin commented on a change in pull request #18266:
URL: https://github.com/apache/flink/pull/18266#discussion_r779366307



##########
File path: flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/connector/kafka/source/reader/KafkaPartitionSplitReaderTest.java
##########
@@ -248,6 +253,100 @@ public void testAssignEmptySplit() throws Exception {
         assertTrue(recordsWithSplitIds.finishedSplits().isEmpty());
     }
 
+    @Test
+    public void testUsingCommittedOffsetsWithNoneOffsetResetStrategy() {
+        final Properties props = new Properties();
+        props.setProperty(
+                ConsumerConfig.GROUP_ID_CONFIG, "using-committed-offset-with-none-offset-reset");
+        KafkaPartitionSplitReader reader =
+                createReader(props, UnregisteredMetricsGroup.createSourceReaderMetricGroup());
+        // Add a committed offset split and expect a KafkaException
+        final KafkaException undefinedOffsetException =
+                Assertions.assertThrows(
+                        KafkaException.class,
+                        () ->
+                                reader.handleSplitsChanges(
+                                        new SplitsAddition<>(
+                                                Collections.singletonList(
+                                                        new KafkaPartitionSplit(
+                                                                new TopicPartition(TOPIC1, 0),
+                                                                KafkaPartitionSplit
+                                                                        .COMMITTED_OFFSET)))));
+        MatcherAssert.assertThat(
+                undefinedOffsetException.getMessage(),
+                CoreMatchers.containsString("Undefined offset with no reset policy for partition"));
+    }
+
+    @Test
+    public void testUsingCommittedOffsetsWithEarliestOffsetResetStrategy() throws Throwable {
+        MetricListener metricListener = new MetricListener();
+        final Properties props = new Properties();
+        props.setProperty(
+                ConsumerConfig.AUTO_OFFSET_RESET_CONFIG,
+                OffsetResetStrategy.EARLIEST.name().toLowerCase());
+        props.setProperty(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "1");
+        props.setProperty(
+                ConsumerConfig.GROUP_ID_CONFIG,
+                "using-committed-offset-with-earliest-offset-reset");
+        KafkaPartitionSplitReader reader =
+                createReader(
+                        props,
+                        InternalSourceReaderMetricGroup.mock(metricListener.getMetricGroup()));
+        // Add a committed offset split
+        reader.handleSplitsChanges(
+                new SplitsAddition<>(
+                        Collections.singletonList(
+                                new KafkaPartitionSplit(
+                                        new TopicPartition(TOPIC1, 0),
+                                        KafkaPartitionSplit.COMMITTED_OFFSET))));
+        // pendingRecords should not have been registered yet because of lazy registration
+        assertFalse(metricListener.getGauge(MetricNames.PENDING_RECORDS).isPresent());
+        // Trigger first fetch
+        reader.fetch();
+        final Optional<Gauge<Long>> pendingRecords =
+                metricListener.getGauge(MetricNames.PENDING_RECORDS);
+        assertTrue(pendingRecords.isPresent());
+        // Validate pendingRecords
+        assertNotNull(pendingRecords);
+        assertEquals(NUM_RECORDS_PER_PARTITION - 1, (long) pendingRecords.get().getValue());
+        for (int i = 1; i < NUM_RECORDS_PER_PARTITION; i++) {
+            reader.fetch();
+            assertEquals(NUM_RECORDS_PER_PARTITION - i - 1, (long) pendingRecords.get().getValue());
+        }
+    }
+
+    @Test
+    public void testUsingCommittedOffsetsWithLatestOffsetResetStrategy() throws Throwable {
+        final Properties props = new Properties();
+        props.setProperty(
+                ConsumerConfig.AUTO_OFFSET_RESET_CONFIG,
+                OffsetResetStrategy.LATEST.name().toLowerCase());
+        props.setProperty(
+                ConsumerConfig.GROUP_ID_CONFIG, "using-committed-offset-with-latest-offset-reset");
+        KafkaPartitionSplitReader reader =
+                createReader(props, UnregisteredMetricsGroup.createSourceReaderMetricGroup());
+        // Add empty latest offset reset split
+        final KafkaPartitionSplit latestOffsetResetEmptySplit =
+                new KafkaPartitionSplit(
+                        new TopicPartition(TOPIC1, 0),
+                        KafkaPartitionSplit.COMMITTED_OFFSET,
+                        KafkaPartitionSplit.LATEST_OFFSET);
+        final KafkaPartitionSplit latestOffsetResetNormalSplit =
+                new KafkaPartitionSplit(
+                        new TopicPartition(TOPIC2, 0), KafkaPartitionSplit.COMMITTED_OFFSET);
+
+        reader.handleSplitsChanges(
+                new SplitsAddition<>(
+                        Arrays.asList(latestOffsetResetEmptySplit, latestOffsetResetNormalSplit)));
+
+        // Fetch and check latest offset reset split is added to finished splits
+        RecordsWithSplitIds<ConsumerRecord<byte[], byte[]>> recordsWithSplitIds = reader.fetch();
+        assertTrue(
+                recordsWithSplitIds
+                        .finishedSplits()
+                        .contains(latestOffsetResetEmptySplit.splitId()));

Review comment:
       For a finished split, the reader will unassign the partition, and the Kafka consumer does not allow querying the offset of an unassigned partition.
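
   As a minimal illustration (not part of the PR; the class name, topic name, and broker address are placeholder assumptions), this sketch shows the consumer-side rule the comment refers to: once a partition is no longer assigned, `position()` fails with an `IllegalStateException` instead of returning an offset.

       import java.util.Collections;
       import java.util.Properties;

       import org.apache.kafka.clients.consumer.ConsumerConfig;
       import org.apache.kafka.clients.consumer.KafkaConsumer;
       import org.apache.kafka.common.TopicPartition;
       import org.apache.kafka.common.serialization.ByteArrayDeserializer;

       public class UnassignedPositionSketch {
           public static void main(String[] args) {
               Properties props = new Properties();
               // Placeholder broker address; the position() check below fails
               // locally, so no broker round trip is actually required.
               props.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
               props.setProperty(
                       ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                       ByteArrayDeserializer.class.getName());
               props.setProperty(
                       ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                       ByteArrayDeserializer.class.getName());
               TopicPartition tp = new TopicPartition("some-topic", 0);
               try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
                   consumer.assign(Collections.singleton(tp));
                   // Drop the assignment, mirroring how the reader unassigns a finished split.
                   consumer.assign(Collections.emptySet());
                   // Querying the offset of the now-unassigned partition is rejected.
                   consumer.position(tp);
               } catch (IllegalStateException e) {
                   System.out.println("position() on unassigned partition failed: " + e.getMessage());
               }
           }
       }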







[GitHub] [flink] flinkbot commented on pull request #18266: [FLINK-25510][Connectors / Kafka][tests] Add using committed offsets test cases for KafkaPartitionSplitReader

Posted by GitBox <gi...@apache.org>.
flinkbot commented on pull request #18266:
URL: https://github.com/apache/flink/pull/18266#issuecomment-1005344350


   Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress of the review.
   
   
   ## Automated Checks
   Last check on commit 4a532eabe4e48f4e4e3a7920a75c6a731dac1a0a (Wed Jan 05 03:09:39 UTC 2022)
   
   **Warnings:**
    * No documentation files were touched! Remember to keep the Flink docs up to date!
   
   
   <sub>Mention the bot in a comment to re-run the automated checks.</sub>
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process.
    The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.
   Bot commands:
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`)
    - `@flinkbot approve all` to approve all aspects
    - `@flinkbot approve-until architecture` to approve everything until `architecture`
    - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention
    - `@flinkbot disapprove architecture` to remove an approval you gave earlier





[GitHub] [flink] fapaul merged pull request #18266: [FLINK-25510][Connectors / Kafka][tests] Add using committed offsets test cases for KafkaPartitionSplitReader

Posted by GitBox <gi...@apache.org>.
fapaul merged pull request #18266:
URL: https://github.com/apache/flink/pull/18266


   

