Posted to issues@flink.apache.org by GitBox <gi...@apache.org> on 2022/01/25 06:52:58 UTC

[GitHub] [flink] ruanhang1993 opened a new pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

ruanhang1993 opened a new pull request #18496:
URL: https://github.com/apache/flink/pull/18496


   ## What is the purpose of the change
   
   This pull request adds a data stream sink test suite to the connector testframe.
   
   ## Brief change log
   
     - Add a data stream sink test suite to the connector testframe
     - Add sink tests for the Kafka connector
    
   ## Verifying this change
   
   This change adds tests via the connector testframe; a sketch of how a connector module can hook into the new suite is shown below.
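   A hedged sketch of how a connector module can hook into the new suite (the class name below is illustrative; the extension and parameter types are the ones added by this PR):

```java
import org.apache.flink.connector.testframe.testsuites.SinkTestSuiteBase;

// Illustrative sketch only: a connector module subclasses the suite and lets the
// testframe extensions (ConnectorTestingExtension, TestCaseInvocationContextProvider)
// inject a TestEnvironment, a DataStreamSinkExternalContext<String> and a
// CheckpointingMode into each inherited @TestTemplate case. The concrete environment
// and external-system wiring is connector-specific and omitted here.
class MyKafkaSinkTestCase extends SinkTestSuiteBase<String> {
    // Declare the Flink test environment, the external system (e.g. a Kafka
    // container) and the supported semantics here; the inherited cases such as
    // testBasicSink and testStartFromSavepoint then run once per declared semantic.
}
```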
   
   ## Does this pull request potentially affect one of the following parts:
   
     - Dependencies (does it add or upgrade a dependency): no
     - The public API, i.e., is any changed class annotated with `@Public(Evolving)`:  no
     - The serializers: no
     - The runtime per-record code paths (performance sensitive): no 
     - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
     - The S3 file system connector: no 
   
   ## Documentation
   
     - Does this pull request introduce a new feature? no
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   ## CI report:
   
   * 0b52c13271c485e5a6776a1aca81c753d0d4bbc4 Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31466) 
   * da588603a577a2b26bcf90fcd38653f7ec8a3a74 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31477) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>





[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   ## CI report:
   
   * 0b52c13271c485e5a6776a1aca81c753d0d4bbc4 Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31466) 
   * da588603a577a2b26bcf90fcd38653f7ec8a3a74 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31477) 
   * 64e30250dc0ad0d011b8d3d2fe2f15ce8e30906e Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31505) 
   * 48dd16592a335d4298e0aa08b9bb89d4cc72994d UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>





[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632









[GitHub] [flink] ruanhang1993 commented on a change in pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
ruanhang1993 commented on a change in pull request #18496:
URL: https://github.com/apache/flink/pull/18496#discussion_r794258656



##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/testsuites/SinkTestSuiteBase.java
##########
@@ -0,0 +1,527 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.testsuites;
+
+import org.apache.flink.annotation.Experimental;
+import org.apache.flink.api.common.JobID;
+import org.apache.flink.api.common.JobStatus;
+import org.apache.flink.api.common.eventtime.WatermarkStrategy;
+import org.apache.flink.api.common.restartstrategy.RestartStrategies;
+import org.apache.flink.api.common.time.Deadline;
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.api.connector.sink.Sink;
+import org.apache.flink.api.connector.source.Boundedness;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.connector.testframe.environment.TestEnvironment;
+import org.apache.flink.connector.testframe.environment.TestEnvironmentSettings;
+import org.apache.flink.connector.testframe.external.ExternalSystemDataReader;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkExternalContext;
+import org.apache.flink.connector.testframe.external.sink.TestingSinkSettings;
+import org.apache.flink.connector.testframe.junit.extensions.ConnectorTestingExtension;
+import org.apache.flink.connector.testframe.junit.extensions.TestCaseInvocationContextProvider;
+import org.apache.flink.connector.testframe.source.ListSource;
+import org.apache.flink.connector.testframe.utils.MetricQueryer;
+import org.apache.flink.connector.testframe.utils.TestDataMatchers;
+import org.apache.flink.core.execution.JobClient;
+import org.apache.flink.runtime.metrics.MetricNames;
+import org.apache.flink.streaming.api.CheckpointingMode;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.datastream.DataStreamSource;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.operators.collect.CollectResultIterator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperatorFactory;
+import org.apache.flink.streaming.api.operators.collect.CollectStreamSink;
+import org.apache.flink.util.TestLoggerExtension;
+
+import org.junit.jupiter.api.DisplayName;
+import org.junit.jupiter.api.TestInstance;
+import org.junit.jupiter.api.TestTemplate;
+import org.junit.jupiter.api.extension.ExtendWith;
+import org.opentest4j.TestAbortedException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.time.Duration;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.List;
+import java.util.UUID;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.ThreadLocalRandom;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+import static org.apache.flink.connector.testframe.utils.TestDataMatchers.matchesMultipleSplitTestData;
+import static org.apache.flink.connector.testframe.utils.TestUtils.appendResultData;
+import static org.apache.flink.connector.testframe.utils.TestUtils.doubleEquals;
+import static org.apache.flink.connector.testframe.utils.TestUtils.timeoutAssert;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.terminateJob;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForAllTaskRunning;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForJobStatus;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition;
+
+/**
+ * Base class for sink test suite.
+ *
+ * <p>All cases should have well-descriptive JavaDoc, including:
+ *
+ * <ul>
+ *   <li>What's the purpose of this case
+ *   <li>Simple description of how this case works
+ *   <li>Condition to fulfill in order to pass this case
+ *   <li>Requirement of running this case
+ * </ul>
+ */
+@ExtendWith({
+    ConnectorTestingExtension.class,
+    TestLoggerExtension.class,
+    TestCaseInvocationContextProvider.class
+})
+@TestInstance(TestInstance.Lifecycle.PER_CLASS)
+@Experimental
+public abstract class SinkTestSuiteBase<T extends Comparable<T>> {
+    private static final Logger LOG = LoggerFactory.getLogger(SinkTestSuiteBase.class);
+    static ExecutorService executorService =
+            Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors() * 2);
+
+    private final long jobExecuteTimeMs = 20000;
+
+    // ----------------------------- Basic test cases ---------------------------------
+
+    /**
+     * Test connector data stream sink.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write them to this sink with a Flink job.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink needs to equal the
+     * number of generated test records, and the records in the sink are compared to the test data
+     * according to the configured delivery semantic. There is no requirement on record order.
+     */
+    @TestTemplate
+    @DisplayName("Test data stream sink")
+    public void testSink(

Review comment:
       I don't think we should disable checkpointing. Some sinks rely on checkpoints to commit data.
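       For instance, a minimal sketch of keeping checkpointing enabled in the test job (the 50 ms interval mirrors a later revision of this suite and is only illustrative):

```java
// Inside the suite's test body, where execEnv is the StreamExecutionEnvironment
// created from the injected TestEnvironment: keep checkpointing on so that sinks
// which commit on checkpoint (e.g. two-phase-commit sinks) can flush their data.
execEnv.enableCheckpointing(50); // a short interval keeps the test fast
```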







[GitHub] [flink] zentol commented on a change in pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
zentol commented on a change in pull request #18496:
URL: https://github.com/apache/flink/pull/18496#discussion_r808770920



##########
File path: flink-test-utils-parent/flink-connector-test-utils/pom.xml
##########
@@ -95,4 +95,30 @@
 			<scope>compile</scope>
 		</dependency>
 	</dependencies>
+
+	<build>
+		<plugins>
+			<plugin>
+				<groupId>org.apache.maven.plugins</groupId>
+				<artifactId>maven-shade-plugin</artifactId>
+				<executions>
+					<execution>
+						<phase>package</phase>
+						<goals>
+							<goal>shade</goal>
+						</goals>
+						<configuration>
+							<shadedArtifactAttached>true</shadedArtifactAttached>
+							<shadedClassifierName>source</shadedClassifierName>
+							<artifactSet>
+								<includes>
+									<include>**/connector/testframe/source/**</include>

Review comment:
       Thank you!







[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   ## CI report:
   
   * bc9871b19a43fd0b99e1b53336534d59612a119e Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31405) 
   * 35d869286d16c6d306c9059cf5d3af339934c229 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31463) 
   * 0b52c13271c485e5a6776a1aca81c753d0d4bbc4 UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>





[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   ## CI report:
   
   * bc9871b19a43fd0b99e1b53336534d59612a119e Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31405) 
   * 35d869286d16c6d306c9059cf5d3af339934c229 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31463) 
   * 0b52c13271c485e5a6776a1aca81c753d0d4bbc4 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31466) 
   * da588603a577a2b26bcf90fcd38653f7ec8a3a74 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31477) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>





[GitHub] [flink] PatrickRen commented on a change in pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
PatrickRen commented on a change in pull request #18496:
URL: https://github.com/apache/flink/pull/18496#discussion_r806652616



##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/source/FromElementsSource.java
##########
@@ -0,0 +1,104 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.source;
+
+import org.apache.flink.api.connector.source.Boundedness;
+import org.apache.flink.api.connector.source.Source;
+import org.apache.flink.api.connector.source.SourceReader;
+import org.apache.flink.api.connector.source.SourceReaderContext;
+import org.apache.flink.api.connector.source.SplitEnumerator;
+import org.apache.flink.api.connector.source.SplitEnumeratorContext;
+import org.apache.flink.connector.testframe.source.enumerator.NoOpEnumState;
+import org.apache.flink.connector.testframe.source.enumerator.NoOpEnumStateSerializer;
+import org.apache.flink.connector.testframe.source.enumerator.NoOpEnumerator;
+import org.apache.flink.connector.testframe.source.split.FromElementsSplit;
+import org.apache.flink.connector.testframe.source.split.FromElementsSplitSerializer;
+import org.apache.flink.core.io.SimpleVersionedSerializer;
+import org.apache.flink.util.Preconditions;
+
+import java.util.List;
+
+/**
+ * A {@link Source} implementation that reads data from a list and stops reading at a fixed
+ * position. The source then waits until a checkpoint or savepoint is triggered, which makes it
+ * useful for connector tests.
+ *
+ * <p>Note: The parallelism of this source must be 1.
+ */
+public class FromElementsSource<OUT> implements Source<OUT, FromElementsSplit, NoOpEnumState> {

Review comment:
       This source could be moved to `flink-streaming-java`, the same module where `FromElementsFunction` exists. By moving it we can also get rid of the `flink-connector-testing.jar` created by the pom of `flink-end-to-end-tests-common-kafka`.

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/testsuites/SinkTestSuiteBase.java
##########
@@ -0,0 +1,629 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.testsuites;
+
+import org.apache.flink.annotation.Experimental;
+import org.apache.flink.api.common.JobID;
+import org.apache.flink.api.common.JobStatus;
+import org.apache.flink.api.common.eventtime.WatermarkStrategy;
+import org.apache.flink.api.common.restartstrategy.RestartStrategies;
+import org.apache.flink.api.common.time.Deadline;
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.api.connector.sink2.Sink;
+import org.apache.flink.api.connector.source.Boundedness;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.connector.testframe.environment.TestEnvironment;
+import org.apache.flink.connector.testframe.environment.TestEnvironmentSettings;
+import org.apache.flink.connector.testframe.external.ExternalSystemDataReader;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkExternalContext;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkV1ExternalContext;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkV2ExternalContext;
+import org.apache.flink.connector.testframe.external.sink.TestingSinkSettings;
+import org.apache.flink.connector.testframe.junit.extensions.ConnectorTestingExtension;
+import org.apache.flink.connector.testframe.junit.extensions.TestCaseInvocationContextProvider;
+import org.apache.flink.connector.testframe.source.FromElementsSource;
+import org.apache.flink.connector.testframe.utils.CollectIteratorAssertions;
+import org.apache.flink.connector.testframe.utils.MetricQuerier;
+import org.apache.flink.core.execution.JobClient;
+import org.apache.flink.core.execution.SavepointFormatType;
+import org.apache.flink.runtime.metrics.MetricNames;
+import org.apache.flink.streaming.api.CheckpointingMode;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.datastream.DataStreamSink;
+import org.apache.flink.streaming.api.datastream.DataStreamSource;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.operators.collect.CollectResultIterator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperatorFactory;
+import org.apache.flink.streaming.api.operators.collect.CollectStreamSink;
+import org.apache.flink.util.TestLoggerExtension;
+
+import org.apache.commons.math3.util.Precision;
+import org.junit.jupiter.api.DisplayName;
+import org.junit.jupiter.api.TestInstance;
+import org.junit.jupiter.api.TestTemplate;
+import org.junit.jupiter.api.extension.ExtendWith;
+import org.opentest4j.TestAbortedException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.time.Duration;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+import java.util.UUID;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.ThreadLocalRandom;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+import static org.apache.flink.connector.testframe.utils.ConnectorTestConstants.DEFAULT_COLLECT_DATA_TIMEOUT;
+import static org.apache.flink.connector.testframe.utils.ConnectorTestConstants.DEFAULT_JOB_STATUS_CHANGE_TIMEOUT;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.terminateJob;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForAllTaskRunning;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForJobStatus;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition;
+import static org.apache.flink.streaming.api.CheckpointingMode.AT_LEAST_ONCE;
+import static org.apache.flink.streaming.api.CheckpointingMode.EXACTLY_ONCE;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+import static org.assertj.core.api.AssertionsForClassTypes.assertThat;
+
+/**
+ * Base class for sink test suite.
+ *
+ * <p>All cases should have well-descriptive JavaDoc, including:
+ *
+ * <ul>
+ *   <li>What's the purpose of this case
+ *   <li>Simple description of how this case works
+ *   <li>Condition to fulfill in order to pass this case
+ *   <li>Requirement of running this case
+ * </ul>
+ */
+@ExtendWith({
+    ConnectorTestingExtension.class,
+    TestLoggerExtension.class,
+    TestCaseInvocationContextProvider.class
+})
+@TestInstance(TestInstance.Lifecycle.PER_CLASS)
+@Experimental
+public abstract class SinkTestSuiteBase<T extends Comparable<T>> {
+    private static final Logger LOG = LoggerFactory.getLogger(SinkTestSuiteBase.class);
+
+    // ----------------------------- Basic test cases ---------------------------------
+
+    /**
+     * Test connector data stream sink.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write them to this sink with a Flink job.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink needs to equal the
+     * number of generated test records, and the records in the sink are compared to the test data
+     * according to the configured delivery semantic. There is no requirement on record order.
+     */
+    @TestTemplate
+    @DisplayName("Test data stream sink")
+    public void testBasicSink(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        TestingSinkSettings sinkSettings = getTestingSinkSettings(semantic);
+        final List<T> testRecords = generateTestData(sinkSettings, externalContext);
+
+        // Build and execute Flink job
+        StreamExecutionEnvironment execEnv =
+                testEnv.createExecutionEnvironment(
+                        TestEnvironmentSettings.builder()
+                                .setConnectorJarPaths(externalContext.getConnectorJarPaths())
+                                .build());
+        execEnv.enableCheckpointing(50);
+        DataStream<T> dataStream =
+                execEnv.fromCollection(testRecords)
+                        .name("sourceInSinkTest")
+                        .setParallelism(1)
+                        .returns(externalContext.getProducedType());
+        tryCreateSink(dataStream, externalContext, sinkSettings)
+                .setParallelism(1)
+                .name("sinkInSinkTest");
+        final JobClient jobClient = execEnv.executeAsync("DataStream Sink Test");
+
+        waitForJobStatus(
+                jobClient,
+                Collections.singletonList(JobStatus.FINISHED),
+                Deadline.fromNow(DEFAULT_JOB_STATUS_CHANGE_TIMEOUT));
+
+        // Check test result
+        checkResultWithSemantic(
+                externalContext.createSinkDataReader(sinkSettings), testRecords, semantic);
+    }
+
+    /**
+     * Test connector sink restart from a completed savepoint with the same parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write half of them to this sink with a Flink job at parallelism 2. It then stops the job
+     * with a savepoint and restarts the same job from that savepoint. Once the restarted job is
+     * running, the other half is written to the sink and the result is compared.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink needs to equal the
+     * number of generated test records, and the records in the sink are compared to the test data
+     * according to the configured delivery semantic. There is no requirement on record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting from a savepoint")
+    public void testStartFromSavepoint(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 2, 2);
+    }
+
+    /**
+     * Test connector sink restart from a completed savepoint with a higher parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write half of them to this sink with a Flink job at parallelism 2. It then stops the job
+     * with a savepoint and restarts the same job from that savepoint with a higher parallelism of 4.
+     * Once the restarted job is running, the other half is written to the sink and the result is
+     * compared.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink needs to equal the
+     * number of generated test records, and the records in the sink are compared to the test data
+     * according to the configured delivery semantic. There is no requirement on record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting with a higher parallelism")
+    public void testScaleUp(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 2, 4);
+    }
+
+    /**
+     * Test connector sink restart from a completed savepoint with a lower parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write half of them to this sink with a Flink job at parallelism 4. It then stops the job
+     * with a savepoint and restarts the same job from that savepoint with a lower parallelism of 2.
+     * Once the restarted job is running, the other half is written to the sink and the result is
+     * compared.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink needs to equal the
+     * number of generated test records, and the records in the sink are compared to the test data
+     * according to the configured delivery semantic. There is no requirement on record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting with a lower parallelism")
+    public void testScaleDown(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 4, 2);
+    }
+
+    private void restartFromSavepoint(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic,
+            final int beforeParallelism,
+            final int afterParallelism)
+            throws Exception {
+        // Step 1: Preparation
+        TestingSinkSettings sinkSettings = getTestingSinkSettings(semantic);
+        final StreamExecutionEnvironment execEnv =
+                testEnv.createExecutionEnvironment(
+                        TestEnvironmentSettings.builder()
+                                .setConnectorJarPaths(externalContext.getConnectorJarPaths())
+                                .build());
+        execEnv.setRestartStrategy(RestartStrategies.noRestart());
+
+        // Step 2: Generate test data
+        final List<T> testRecords = generateTestData(sinkSettings, externalContext);
+
+        // Step 3: Build and execute Flink job
+        int numBeforeSuccess = testRecords.size() / 2;
+        DataStreamSource<T> source =
+                execEnv.fromSource(
+                                new FromElementsSource<>(
+                                        Boundedness.CONTINUOUS_UNBOUNDED,
+                                        testRecords,
+                                        numBeforeSuccess),
+                                WatermarkStrategy.noWatermarks(),
+                                "beforeRestartSource")
+                        .setParallelism(1);
+
+        DataStream<T> dataStream = source.returns(externalContext.getProducedType());
+        tryCreateSink(dataStream, externalContext, sinkSettings)
+                .name("Sink restart test")
+                .setParallelism(beforeParallelism);
+
+        /**
+         * The job should stop after consuming a specified number of records. In order to know when
+         * that number of records has been consumed, a collect sink needs to be watched.
+         */
+        CollectResultIterator<T> iterator = addCollectSink(source);
+        final JobClient jobClient = execEnv.executeAsync("Restart Test");
+        iterator.setJobClient(jobClient);
+
+        // Step 4: Wait for the expected result and stop Flink job with a savepoint
+        String savepointDir;
+        try {
+            final MetricQuerier queryRestClient = new MetricQuerier(new Configuration());

Review comment:
       I still think it's weird to use MetricQuerier to get job details. Maybe a better approach is to create a new REST client here for getting job details instead of reusing the MetricQuerier. MetricQuerier should only be responsible for fetching metrics from the cluster.
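       A rough sketch of that idea, assuming the standard `RestClusterClient` from `flink-clients` would be acceptable here (names and signatures should be double-checked against the Flink version in use; the cluster id string is illustrative):

```java
import org.apache.flink.api.common.JobID;
import org.apache.flink.client.program.rest.RestClusterClient;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.runtime.rest.messages.job.JobDetailsInfo;

// Hedged sketch, not the PR's code: a dedicated REST client fetches job details so
// that MetricQuerier stays responsible for metrics only.
final class JobDetailsQuerier {
    static JobDetailsInfo fetchJobDetails(Configuration restConf, JobID jobId) throws Exception {
        // Assumption: restConf already carries the REST address/port of the test cluster.
        try (RestClusterClient<String> restClient =
                new RestClusterClient<>(restConf, "connector-testframe-cluster")) {
            return restClient.getJobDetails(jobId).get();
        }
    }
}
```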

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/source/FromLimitedElementsSourceReader.java
##########
@@ -0,0 +1,96 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.source;
+
+import org.apache.flink.api.connector.source.Boundedness;
+import org.apache.flink.api.connector.source.ReaderOutput;
+import org.apache.flink.api.connector.source.SourceReader;
+import org.apache.flink.api.connector.source.SourceReaderContext;
+import org.apache.flink.connector.testframe.source.split.FromElementsSplit;
+import org.apache.flink.core.io.InputStatus;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.Arrays;
+import java.util.List;
+
+import static org.apache.flink.core.io.InputStatus.MORE_AVAILABLE;
+
+/**
+ * A {@link SourceReader} implementation that reads data from a list. This source reader will stop
+ * reading at the given position and wait until a checkpoint or savepoint is triggered.
+ *
+ * <p>This source reader is used when {@link FromElementsSource} creates readers with a fixed
+ * position.
+ */
+public class FromLimitedElementsSourceReader<T> extends FromElementsSourceReader<T> {

Review comment:
       This class could be merged into `FromElementsSourceReader` by taking `limitedNum` as an extra parameter.
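       A hedged sketch of that idea (the real constructor of `FromElementsSourceReader` is not shown in this PR, so the class below only illustrates the pattern of folding the limit into a single reader):

```java
// Illustrative pattern only, not Flink API: one reader class with an optional limit
// instead of a separate "limited" subclass.
class ElementsReader<T> {
    private final java.util.List<T> elements;
    private final int limit; // elements.size() when no limit was requested
    private int position;

    ElementsReader(java.util.List<T> elements, Integer limitedNum) {
        this.elements = elements;
        this.limit = (limitedNum == null) ? elements.size() : limitedNum;
    }

    /** Returns the next element, or null once the (optional) limit is reached. */
    T poll() {
        return position < limit ? elements.get(position++) : null;
    }
}
```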







[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   ## CI report:
   
   * 0b52c13271c485e5a6776a1aca81c753d0d4bbc4 Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31466) 
   * da588603a577a2b26bcf90fcd38653f7ec8a3a74 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31477) 
   * 64e30250dc0ad0d011b8d3d2fe2f15ce8e30906e Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31505) 
   * c1619577228a3fde9684f2c85965d6d1f76addbf UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>





[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   ## CI report:
   
   * da588603a577a2b26bcf90fcd38653f7ec8a3a74 Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31477) 
   * 64e30250dc0ad0d011b8d3d2fe2f15ce8e30906e Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31505) 
   * c1619577228a3fde9684f2c85965d6d1f76addbf Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31518) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "909c155557a856976df8b5be1729553873ecbd4b",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30114",
       "triggerID" : "909c155557a856976df8b5be1729553873ecbd4b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30369",
       "triggerID" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30469",
       "triggerID" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30474",
       "triggerID" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31339",
       "triggerID" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31375",
       "triggerID" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31405",
       "triggerID" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "triggerType" : "PUSH"
     }, {
       "hash" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31463",
       "triggerID" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0b52c13271c485e5a6776a1aca81c753d0d4bbc4",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31466",
       "triggerID" : "0b52c13271c485e5a6776a1aca81c753d0d4bbc4",
       "triggerType" : "PUSH"
     }, {
       "hash" : "da588603a577a2b26bcf90fcd38653f7ec8a3a74",
       "status" : "CANCELED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31477",
       "triggerID" : "da588603a577a2b26bcf90fcd38653f7ec8a3a74",
       "triggerType" : "PUSH"
     }, {
       "hash" : "64e30250dc0ad0d011b8d3d2fe2f15ce8e30906e",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31505",
       "triggerID" : "64e30250dc0ad0d011b8d3d2fe2f15ce8e30906e",
       "triggerType" : "PUSH"
     }, {
       "hash" : "c1619577228a3fde9684f2c85965d6d1f76addbf",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31518",
       "triggerID" : "c1619577228a3fde9684f2c85965d6d1f76addbf",
       "triggerType" : "PUSH"
     }, {
       "hash" : "1076a64c9f916fe9d8a23d38aafbd1f359b038d9",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31529",
       "triggerID" : "1076a64c9f916fe9d8a23d38aafbd1f359b038d9",
       "triggerType" : "PUSH"
     }, {
       "hash" : "ebca9a1e955205c53ea919b863c9550642bc73db",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "ebca9a1e955205c53ea919b863c9550642bc73db",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * da588603a577a2b26bcf90fcd38653f7ec8a3a74 Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31477) 
   * 64e30250dc0ad0d011b8d3d2fe2f15ce8e30906e Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31505) 
   * c1619577228a3fde9684f2c85965d6d1f76addbf Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31518) 
   * 1076a64c9f916fe9d8a23d38aafbd1f359b038d9 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31529) 
   * ebca9a1e955205c53ea919b863c9550642bc73db UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "909c155557a856976df8b5be1729553873ecbd4b",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30114",
       "triggerID" : "909c155557a856976df8b5be1729553873ecbd4b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30369",
       "triggerID" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30469",
       "triggerID" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30474",
       "triggerID" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31339",
       "triggerID" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31375",
       "triggerID" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31405",
       "triggerID" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "triggerType" : "PUSH"
     }, {
       "hash" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31463",
       "triggerID" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0b52c13271c485e5a6776a1aca81c753d0d4bbc4",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31466",
       "triggerID" : "0b52c13271c485e5a6776a1aca81c753d0d4bbc4",
       "triggerType" : "PUSH"
     }, {
       "hash" : "da588603a577a2b26bcf90fcd38653f7ec8a3a74",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31477",
       "triggerID" : "da588603a577a2b26bcf90fcd38653f7ec8a3a74",
       "triggerType" : "PUSH"
     }, {
       "hash" : "64e30250dc0ad0d011b8d3d2fe2f15ce8e30906e",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31505",
       "triggerID" : "64e30250dc0ad0d011b8d3d2fe2f15ce8e30906e",
       "triggerType" : "PUSH"
     }, {
       "hash" : "c1619577228a3fde9684f2c85965d6d1f76addbf",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31518",
       "triggerID" : "c1619577228a3fde9684f2c85965d6d1f76addbf",
       "triggerType" : "PUSH"
     }, {
       "hash" : "1076a64c9f916fe9d8a23d38aafbd1f359b038d9",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31529",
       "triggerID" : "1076a64c9f916fe9d8a23d38aafbd1f359b038d9",
       "triggerType" : "PUSH"
     }, {
       "hash" : "ebca9a1e955205c53ea919b863c9550642bc73db",
       "status" : "CANCELED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31536",
       "triggerID" : "ebca9a1e955205c53ea919b863c9550642bc73db",
       "triggerType" : "PUSH"
     }, {
       "hash" : "cc23b8d007ad7df80d90db437789470502b78f53",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31576",
       "triggerID" : "cc23b8d007ad7df80d90db437789470502b78f53",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * ebca9a1e955205c53ea919b863c9550642bc73db Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31536) 
   * cc23b8d007ad7df80d90db437789470502b78f53 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31576) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "909c155557a856976df8b5be1729553873ecbd4b",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30114",
       "triggerID" : "909c155557a856976df8b5be1729553873ecbd4b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30369",
       "triggerID" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30469",
       "triggerID" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30474",
       "triggerID" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31339",
       "triggerID" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31375",
       "triggerID" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31405",
       "triggerID" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "triggerType" : "PUSH"
     }, {
       "hash" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31463",
       "triggerID" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0b52c13271c485e5a6776a1aca81c753d0d4bbc4",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31466",
       "triggerID" : "0b52c13271c485e5a6776a1aca81c753d0d4bbc4",
       "triggerType" : "PUSH"
     }, {
       "hash" : "da588603a577a2b26bcf90fcd38653f7ec8a3a74",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31477",
       "triggerID" : "da588603a577a2b26bcf90fcd38653f7ec8a3a74",
       "triggerType" : "PUSH"
     }, {
       "hash" : "64e30250dc0ad0d011b8d3d2fe2f15ce8e30906e",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31505",
       "triggerID" : "64e30250dc0ad0d011b8d3d2fe2f15ce8e30906e",
       "triggerType" : "PUSH"
     }, {
       "hash" : "c1619577228a3fde9684f2c85965d6d1f76addbf",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31518",
       "triggerID" : "c1619577228a3fde9684f2c85965d6d1f76addbf",
       "triggerType" : "PUSH"
     }, {
       "hash" : "1076a64c9f916fe9d8a23d38aafbd1f359b038d9",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31529",
       "triggerID" : "1076a64c9f916fe9d8a23d38aafbd1f359b038d9",
       "triggerType" : "PUSH"
     }, {
       "hash" : "ebca9a1e955205c53ea919b863c9550642bc73db",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31536",
       "triggerID" : "ebca9a1e955205c53ea919b863c9550642bc73db",
       "triggerType" : "PUSH"
     }, {
       "hash" : "cc23b8d007ad7df80d90db437789470502b78f53",
       "status" : "SUCCESS",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31576",
       "triggerID" : "cc23b8d007ad7df80d90db437789470502b78f53",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * cc23b8d007ad7df80d90db437789470502b78f53 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31576) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "909c155557a856976df8b5be1729553873ecbd4b",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30114",
       "triggerID" : "909c155557a856976df8b5be1729553873ecbd4b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30369",
       "triggerID" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30469",
       "triggerID" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30474",
       "triggerID" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31339",
       "triggerID" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31375",
       "triggerID" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31405",
       "triggerID" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "triggerType" : "PUSH"
     }, {
       "hash" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31463",
       "triggerID" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0b52c13271c485e5a6776a1aca81c753d0d4bbc4",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31466",
       "triggerID" : "0b52c13271c485e5a6776a1aca81c753d0d4bbc4",
       "triggerType" : "PUSH"
     }, {
       "hash" : "da588603a577a2b26bcf90fcd38653f7ec8a3a74",
       "status" : "CANCELED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31477",
       "triggerID" : "da588603a577a2b26bcf90fcd38653f7ec8a3a74",
       "triggerType" : "PUSH"
     }, {
       "hash" : "64e30250dc0ad0d011b8d3d2fe2f15ce8e30906e",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31505",
       "triggerID" : "64e30250dc0ad0d011b8d3d2fe2f15ce8e30906e",
       "triggerType" : "PUSH"
     }, {
       "hash" : "c1619577228a3fde9684f2c85965d6d1f76addbf",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31518",
       "triggerID" : "c1619577228a3fde9684f2c85965d6d1f76addbf",
       "triggerType" : "PUSH"
     }, {
       "hash" : "1076a64c9f916fe9d8a23d38aafbd1f359b038d9",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "1076a64c9f916fe9d8a23d38aafbd1f359b038d9",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * da588603a577a2b26bcf90fcd38653f7ec8a3a74 Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31477) 
   * 64e30250dc0ad0d011b8d3d2fe2f15ce8e30906e Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31505) 
   * c1619577228a3fde9684f2c85965d6d1f76addbf Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31518) 
   * 1076a64c9f916fe9d8a23d38aafbd1f359b038d9 UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] imaffe commented on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
imaffe commented on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1041169942


   Curious, is this merged? I saw the status is closed but not merged; did it get merged somewhere else @leonardBang? Thanks~


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] leonardBang commented on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
leonardBang commented on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1041386797


   > Curious, is this merged? I saw the status is closed but not merged; did it get merged somewhere else @leonardBang? Thanks~
   
   Tip: the PR will be merged if you include 'This closes #PR_ID' in the commit message when you push a commit to the ASF project.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "909c155557a856976df8b5be1729553873ecbd4b",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30114",
       "triggerID" : "909c155557a856976df8b5be1729553873ecbd4b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30369",
       "triggerID" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30469",
       "triggerID" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30474",
       "triggerID" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31339",
       "triggerID" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31375",
       "triggerID" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31405",
       "triggerID" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "triggerType" : "PUSH"
     }, {
       "hash" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31463",
       "triggerID" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * bc9871b19a43fd0b99e1b53336534d59612a119e Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31405) 
   * 35d869286d16c6d306c9059cf5d3af339934c229 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31463) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "909c155557a856976df8b5be1729553873ecbd4b",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30114",
       "triggerID" : "909c155557a856976df8b5be1729553873ecbd4b",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 909c155557a856976df8b5be1729553873ecbd4b Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30114) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "909c155557a856976df8b5be1729553873ecbd4b",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30114",
       "triggerID" : "909c155557a856976df8b5be1729553873ecbd4b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30369",
       "triggerID" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * e3a0766cb731672fd5be68b79bf380c8577ea068 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30369) 
   * d12c135ebf7dcc56e9c26695ecc2a2c3f4853176 UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] PatrickRen commented on a change in pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
PatrickRen commented on a change in pull request #18496:
URL: https://github.com/apache/flink/pull/18496#discussion_r793233942



##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/utils/TestDataMatchers.java
##########
@@ -33,180 +36,194 @@
 
     // ----------------------------  Matcher Builders ----------------------------------
     public static <T> MultipleSplitDataMatcher<T> matchesMultipleSplitTestData(
-            List<List<T>> testRecordsLists) {
-        return new MultipleSplitDataMatcher<>(testRecordsLists);
+            List<List<T>> testRecordsLists, CheckpointingMode semantic) {
+        return new MultipleSplitDataMatcher<>(testRecordsLists, semantic);
     }
 
-    public static <T> SingleSplitDataMatcher<T> matchesSplitTestData(List<T> testData) {
-        return new SingleSplitDataMatcher<>(testData);
+    public static <T> MultipleSplitDataMatcher<T> matchesMultipleSplitTestData(
+            List<List<T>> testRecordsLists,
+            CheckpointingMode semantic,
+            boolean testDataAllInResult) {
+        return new MultipleSplitDataMatcher<>(
+                testRecordsLists, MultipleSplitDataMatcher.UNSET, semantic, testDataAllInResult);
     }
 
-    public static <T> SingleSplitDataMatcher<T> matchesSplitTestData(List<T> testData, int limit) {
-        return new SingleSplitDataMatcher<>(testData, limit);
+    public static <T> MultipleSplitDataMatcher<T> matchesMultipleSplitTestData(
+            List<List<T>> testRecordsLists, Integer limit, CheckpointingMode semantic) {
+        if (limit == null) {
+            return new MultipleSplitDataMatcher<>(testRecordsLists, semantic);
+        }
+        return new MultipleSplitDataMatcher<>(testRecordsLists, limit, semantic);
     }
 
     // ---------------------------- Matcher Definitions --------------------------------
-
     /**
-     * Matcher for validating test data in a single split.
+     * Matcher for validating test data from multiple splits.
+     *
+     * <p>Each list has a pointer (iterator) to the record currently being checked. When a record is
+     * received from the stream, it is compared against the currently pointed records of all lists,
+     * and the pointer of the matching record moves forward.
+     *
+     * <p>If the stream preserves the content and order of records in all splits, every pointer
+     * should eventually reach the end of its list.
      *
      * @param <T> Type of validating record
      */
-    public static class SingleSplitDataMatcher<T> extends TypeSafeDiagnosingMatcher<Iterator<T>> {
+    public static class MultipleSplitDataMatcher<T> extends Condition<Iterator<T>> {
+        private static final Logger LOG = LoggerFactory.getLogger(MultipleSplitDataMatcher.class);
+
         private static final int UNSET = -1;
 
-        private final List<T> testData;
-        private final int limit;
+        List<TestRecords<T>> testRecordsLists = new ArrayList<>();
 
+        private List<List<T>> testData;
         private String mismatchDescription = null;
+        private final int limit;
+        private final int testDataSize;
+        private final CheckpointingMode semantic;
+        private final boolean testDataAllInResult;
 
-        public SingleSplitDataMatcher(List<T> testData) {
-            this.testData = testData;
-            this.limit = UNSET;
+        public MultipleSplitDataMatcher(List<List<T>> testData, CheckpointingMode semantic) {
+            this(testData, UNSET, semantic);
         }
 
-        public SingleSplitDataMatcher(List<T> testData, int limit) {
-            if (limit > testData.size()) {
+        public MultipleSplitDataMatcher(
+                List<List<T>> testData, int limit, CheckpointingMode semantic) {
+            this(testData, limit, semantic, true);
+        }
+
+        public MultipleSplitDataMatcher(
+                List<List<T>> testData,
+                int limit,
+                CheckpointingMode semantic,
+                boolean testDataAllInResult) {
+            super();
+            int allSize = 0;
+            for (List<T> testRecordsList : testData) {
+                this.testRecordsLists.add(new TestRecords<>(testRecordsList));
+                allSize += testRecordsList.size();
+            }
+
+            if (limit > allSize) {
                 throw new IllegalArgumentException(
                         "Limit validation size should be less than number of test records");
             }
+            this.testDataAllInResult = testDataAllInResult;
             this.testData = testData;
+            this.semantic = semantic;
+            this.testDataSize = allSize;
             this.limit = limit;
         }
 
         @Override
-        protected boolean matchesSafely(Iterator<T> resultIterator, Description description) {
-            if (mismatchDescription != null) {
-                description.appendText(mismatchDescription);
-                return false;
+        public boolean matches(Iterator<T> resultIterator) {
+            if (CheckpointingMode.AT_LEAST_ONCE.equals(semantic)) {
+                return matchAtLeastOnce(resultIterator);
             }
+            return matchExactlyOnce(resultIterator);
+        }
 
-            boolean dataMismatch = false;
-            boolean sizeMismatch = false;
-            String sizeMismatchDescription = "";
-            String dataMismatchDescription = "";
+        protected boolean matchExactlyOnce(Iterator<T> resultIterator) {
             int recordCounter = 0;
-            for (T testRecord : testData) {
-                if (!resultIterator.hasNext()) {
-                    sizeMismatchDescription =
-                            String.format(
-                                    "Expected to have %d records in result, but only received %d records",
-                                    limit == UNSET ? testData.size() : limit, recordCounter);
-                    sizeMismatch = true;
-                    break;
-                }
-                T resultRecord = resultIterator.next();
-                if (!testRecord.equals(resultRecord)) {
-                    dataMismatchDescription =
-                            String.format(
-                                    "Mismatched record at position %d: Expected '%s' but was '%s'",
-                                    recordCounter, testRecord, resultRecord);
-                    dataMismatch = true;
+            while (resultIterator.hasNext()) {
+                final T record = resultIterator.next();
+                if (!matchThenNext(record)) {
+                    if (recordCounter >= testDataSize) {
+                        this.mismatchDescription =
+                                generateMismatchDescription(
+                                        String.format(
+                                                "Expected to have exactly %d records in result, but received more records",
+                                                testRecordsLists.stream()
+                                                        .mapToInt(list -> list.records.size())
+                                                        .sum()),
+                                        resultIterator);
+                    } else {
+                        this.mismatchDescription =
+                                generateMismatchDescription(
+                                        String.format(
+                                                "Unexpected record '%s' at position %d",
+                                                record, recordCounter),
+                                        resultIterator);
+                    }
+                    logError();

Review comment:
       Do we need to write the error message to the log? I think the error will already be reflected in `mismatchDescription` and in the exception thrown
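
   The Javadoc added in the hunk above describes a multi-pointer matching scheme: one cursor per expected split list, advanced whenever an incoming record equals the record it currently points to, with every cursor expected to reach the end of its list. A minimal, self-contained sketch of that idea (illustrative only; the class and method names below are made up, and this is not the PR's `MultipleSplitDataMatcher` implementation):

       import java.util.Arrays;
       import java.util.Iterator;
       import java.util.List;

       class MultiSplitMatchSketch {
           /** Returns true if the interleaved result can be split back into the expected per-split lists, in order. */
           static <T> boolean matchesAllSplits(List<List<T>> splits, Iterator<T> result) {
               int[] cursors = new int[splits.size()]; // one pointer per expected split
               while (result.hasNext()) {
                   T record = result.next();
                   boolean advanced = false;
                   for (int i = 0; i < splits.size(); i++) {
                       List<T> split = splits.get(i);
                       // compare the incoming record with the record each cursor currently points to
                       if (cursors[i] < split.size() && split.get(cursors[i]).equals(record)) {
                           cursors[i]++; // move the matching pointer forward
                           advanced = true;
                           break;
                       }
                   }
                   if (!advanced) {
                       return false; // record matched no split: wrong content or broken per-split order
                   }
               }
               for (int i = 0; i < splits.size(); i++) {
                   if (cursors[i] != splits.get(i).size()) {
                       return false; // some expected records were never seen
                   }
               }
               return true;
           }

           public static void main(String[] args) {
               List<List<Integer>> splits =
                       Arrays.asList(Arrays.asList(1, 2, 3), Arrays.asList(10, 20));
               Iterator<Integer> interleaved = Arrays.asList(1, 10, 2, 20, 3).iterator();
               System.out.println(matchesAllSplits(splits, interleaved)); // prints: true
           }
       }

   Under exactly-once semantics every cursor must reach the end of its list and no record may be left unmatched; the `matchAtLeastOnce` path presumably relaxes this to tolerate duplicates of already-matched records.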

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/testsuites/SourceTestSuiteBase.java
##########
@@ -86,6 +93,8 @@
 public abstract class SourceTestSuiteBase<T> {
 
     private static final Logger LOG = LoggerFactory.getLogger(SourceTestSuiteBase.class);
+    static ExecutorService executorService =

Review comment:
       Could we move this `ExecutorService` to a util class? Having this executor as a static member of the test suite is kinda weird 🤔 
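
   One possible shape for the utility the reviewer is suggesting, as a rough sketch (the class name `TestExecutorUtils` is hypothetical and not part of this PR); it simply owns the shared executor that is currently a static field of the test suites:

       import java.util.concurrent.ExecutorService;
       import java.util.concurrent.Executors;

       public final class TestExecutorUtils {
           // sized like the original static field: two threads per available processor
           private static final ExecutorService SHARED_EXECUTOR =
                   Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors() * 2);

           private TestExecutorUtils() {}

           public static ExecutorService sharedExecutor() {
               return SHARED_EXECUTOR;
           }
       }

   Test suites would then call `TestExecutorUtils.sharedExecutor()` instead of each declaring their own static executor.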

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/source/enumerator/MockEnumerator.java
##########
@@ -0,0 +1,50 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.source.enumerator;
+
+import org.apache.flink.api.connector.source.SplitEnumerator;
+import org.apache.flink.connector.testframe.source.split.ListSplit;
+
+import javax.annotation.Nullable;
+
+import java.io.IOException;
+import java.util.List;
+
+/** Mock enumerator. */
+public class MockEnumerator implements SplitEnumerator<ListSplit, MockEnumState> {

Review comment:
       What about `NoOpEnumerator`?

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/testsuites/SinkTestSuiteBase.java
##########
@@ -0,0 +1,527 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.testsuites;
+
+import org.apache.flink.annotation.Experimental;
+import org.apache.flink.api.common.JobID;
+import org.apache.flink.api.common.JobStatus;
+import org.apache.flink.api.common.eventtime.WatermarkStrategy;
+import org.apache.flink.api.common.restartstrategy.RestartStrategies;
+import org.apache.flink.api.common.time.Deadline;
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.api.connector.sink.Sink;
+import org.apache.flink.api.connector.source.Boundedness;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.connector.testframe.environment.TestEnvironment;
+import org.apache.flink.connector.testframe.environment.TestEnvironmentSettings;
+import org.apache.flink.connector.testframe.external.ExternalSystemDataReader;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkExternalContext;
+import org.apache.flink.connector.testframe.external.sink.TestingSinkSettings;
+import org.apache.flink.connector.testframe.junit.extensions.ConnectorTestingExtension;
+import org.apache.flink.connector.testframe.junit.extensions.TestCaseInvocationContextProvider;
+import org.apache.flink.connector.testframe.source.ListSource;
+import org.apache.flink.connector.testframe.utils.MetricQueryer;
+import org.apache.flink.connector.testframe.utils.TestDataMatchers;
+import org.apache.flink.core.execution.JobClient;
+import org.apache.flink.runtime.metrics.MetricNames;
+import org.apache.flink.streaming.api.CheckpointingMode;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.datastream.DataStreamSource;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.operators.collect.CollectResultIterator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperatorFactory;
+import org.apache.flink.streaming.api.operators.collect.CollectStreamSink;
+import org.apache.flink.util.TestLoggerExtension;
+
+import org.junit.jupiter.api.DisplayName;
+import org.junit.jupiter.api.TestInstance;
+import org.junit.jupiter.api.TestTemplate;
+import org.junit.jupiter.api.extension.ExtendWith;
+import org.opentest4j.TestAbortedException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.time.Duration;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.List;
+import java.util.UUID;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.ThreadLocalRandom;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+import static org.apache.flink.connector.testframe.utils.TestDataMatchers.matchesMultipleSplitTestData;
+import static org.apache.flink.connector.testframe.utils.TestUtils.appendResultData;
+import static org.apache.flink.connector.testframe.utils.TestUtils.doubleEquals;
+import static org.apache.flink.connector.testframe.utils.TestUtils.timeoutAssert;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.terminateJob;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForAllTaskRunning;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForJobStatus;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition;
+
+/**
+ * Base class for sink test suite.
+ *
+ * <p>All cases should have well-descriptive JavaDoc, including:
+ *
+ * <ul>
+ *   <li>What's the purpose of this case
+ *   <li>Simple description of how this case works
+ *   <li>Condition to fulfill in order to pass this case
+ *   <li>Requirement of running this case
+ * </ul>
+ */
+@ExtendWith({
+    ConnectorTestingExtension.class,
+    TestLoggerExtension.class,
+    TestCaseInvocationContextProvider.class
+})
+@TestInstance(TestInstance.Lifecycle.PER_CLASS)
+@Experimental
+public abstract class SinkTestSuiteBase<T extends Comparable<T>> {
+    private static final Logger LOG = LoggerFactory.getLogger(SinkTestSuiteBase.class);
+    static ExecutorService executorService =
+            Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors() * 2);
+
+    private final long jobExecuteTimeMs = 20000;
+
+    // ----------------------------- Basic test cases ---------------------------------
+
+    /**
+     * Test connector data stream sink.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write it to this sink with a Flink job.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink needs to equal the
+     * number of generated test records, and the records in the sink are compared to the test data
+     * under the configured delivery semantic. There is no requirement on record order.
+     */
+    @TestTemplate
+    @DisplayName("Test data stream sink")
+    public void testSink(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        TestingSinkSettings sinkSettings = getTestingSinkSettings(semantic);
+        final List<T> testRecords = generateTestData(sinkSettings, externalContext);
+
+        // Build and execute Flink job
+        StreamExecutionEnvironment execEnv =
+                testEnv.createExecutionEnvironment(
+                        TestEnvironmentSettings.builder()
+                                .setConnectorJarPaths(externalContext.getConnectorJarPaths())
+                                .build());
+        execEnv.enableCheckpointing(50);
+        execEnv.fromCollection(testRecords)
+                .name("sourceInSinkTest")
+                .setParallelism(1)
+                .returns(externalContext.getProducedType())
+                .sinkTo(tryCreateSink(externalContext, sinkSettings))
+                .name("sinkInSinkTest");
+        final JobClient jobClient = execEnv.executeAsync("DataStream Sink Test");
+
+        waitForJobStatus(
+                jobClient,
+                Collections.singletonList(JobStatus.FINISHED),
+                Deadline.fromNow(Duration.ofSeconds(30)));
+
+        // Check test result
+        List<T> target = sort(testRecords);

Review comment:
       Could we wrap this `sort()` into the helper function `checkResult()`?

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/testsuites/SinkTestSuiteBase.java
##########
@@ -0,0 +1,527 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.testsuites;
+
+import org.apache.flink.annotation.Experimental;
+import org.apache.flink.api.common.JobID;
+import org.apache.flink.api.common.JobStatus;
+import org.apache.flink.api.common.eventtime.WatermarkStrategy;
+import org.apache.flink.api.common.restartstrategy.RestartStrategies;
+import org.apache.flink.api.common.time.Deadline;
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.api.connector.sink.Sink;
+import org.apache.flink.api.connector.source.Boundedness;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.connector.testframe.environment.TestEnvironment;
+import org.apache.flink.connector.testframe.environment.TestEnvironmentSettings;
+import org.apache.flink.connector.testframe.external.ExternalSystemDataReader;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkExternalContext;
+import org.apache.flink.connector.testframe.external.sink.TestingSinkSettings;
+import org.apache.flink.connector.testframe.junit.extensions.ConnectorTestingExtension;
+import org.apache.flink.connector.testframe.junit.extensions.TestCaseInvocationContextProvider;
+import org.apache.flink.connector.testframe.source.ListSource;
+import org.apache.flink.connector.testframe.utils.MetricQueryer;
+import org.apache.flink.connector.testframe.utils.TestDataMatchers;
+import org.apache.flink.core.execution.JobClient;
+import org.apache.flink.runtime.metrics.MetricNames;
+import org.apache.flink.streaming.api.CheckpointingMode;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.datastream.DataStreamSource;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.operators.collect.CollectResultIterator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperatorFactory;
+import org.apache.flink.streaming.api.operators.collect.CollectStreamSink;
+import org.apache.flink.util.TestLoggerExtension;
+
+import org.junit.jupiter.api.DisplayName;
+import org.junit.jupiter.api.TestInstance;
+import org.junit.jupiter.api.TestTemplate;
+import org.junit.jupiter.api.extension.ExtendWith;
+import org.opentest4j.TestAbortedException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.time.Duration;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.List;
+import java.util.UUID;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.ThreadLocalRandom;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+import static org.apache.flink.connector.testframe.utils.TestDataMatchers.matchesMultipleSplitTestData;
+import static org.apache.flink.connector.testframe.utils.TestUtils.appendResultData;
+import static org.apache.flink.connector.testframe.utils.TestUtils.doubleEquals;
+import static org.apache.flink.connector.testframe.utils.TestUtils.timeoutAssert;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.terminateJob;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForAllTaskRunning;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForJobStatus;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition;
+
+/**
+ * Base class for sink test suite.
+ *
+ * <p>All cases should have well-descriptive JavaDoc, including:
+ *
+ * <ul>
+ *   <li>What's the purpose of this case
+ *   <li>Simple description of how this case works
+ *   <li>Condition to fulfill in order to pass this case
+ *   <li>Requirement of running this case
+ * </ul>
+ */
+@ExtendWith({
+    ConnectorTestingExtension.class,
+    TestLoggerExtension.class,
+    TestCaseInvocationContextProvider.class
+})
+@TestInstance(TestInstance.Lifecycle.PER_CLASS)
+@Experimental
+public abstract class SinkTestSuiteBase<T extends Comparable<T>> {
+    private static final Logger LOG = LoggerFactory.getLogger(SinkTestSuiteBase.class);
+    static ExecutorService executorService =
+            Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors() * 2);
+
+    private final long jobExecuteTimeMs = 20000;
+
+    // ----------------------------- Basic test cases ---------------------------------
+
+    /**
+     * Test connector data stream sink.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write it to this sink with a Flink job.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink needs to equal the
+     * number of generated test records, and the records in the sink are compared to the test data
+     * under the configured delivery semantic. There is no requirement on record order.
+     */
+    @TestTemplate
+    @DisplayName("Test data stream sink")
+    public void testSink(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        TestingSinkSettings sinkSettings = getTestingSinkSettings(semantic);
+        final List<T> testRecords = generateTestData(sinkSettings, externalContext);
+
+        // Build and execute Flink job
+        StreamExecutionEnvironment execEnv =
+                testEnv.createExecutionEnvironment(
+                        TestEnvironmentSettings.builder()
+                                .setConnectorJarPaths(externalContext.getConnectorJarPaths())
+                                .build());
+        execEnv.enableCheckpointing(50);
+        execEnv.fromCollection(testRecords)
+                .name("sourceInSinkTest")
+                .setParallelism(1)
+                .returns(externalContext.getProducedType())
+                .sinkTo(tryCreateSink(externalContext, sinkSettings))
+                .name("sinkInSinkTest");
+        final JobClient jobClient = execEnv.executeAsync("DataStream Sink Test");
+
+        waitForJobStatus(
+                jobClient,
+                Collections.singletonList(JobStatus.FINISHED),
+                Deadline.fromNow(Duration.ofSeconds(30)));
+
+        // Check test result
+        List<T> target = sort(testRecords);
+        checkResult(externalContext.createSinkDataReader(sinkSettings), target, semantic);
+    }
+
+    /**
+     * Test connector source restart from a completed savepoint with the same parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write a half part of them to this sink by the Flink Job with parallelism 2 at first. Then
+     * stop the job, restart the same job from the completed savepoint. After the job has been
+     * running, write the other part to the sink and compare the result.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink need to be equals to
+     * the generated test data. And the records in the sink will be compared to the test data by the
+     * different semantic. There's no requirement for record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting from a savepoint")
+    public void testStartFromSavepoint(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 2, 2);
+    }
+
+    /**
+     * Test connector source restart from a completed savepoint with a higher parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write a half part of them to this sink by the Flink Job with parallelism 2 at first. Then
+     * stop the job, restart the same job from the completed savepoint with a higher parallelism 4.
+     * After the job has been running, write the other part to the sink and compare the result.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink need to be equals to
+     * the generated test data. And the records in the sink will be compared to the test data by the
+     * different semantic. There's no requirement for record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting with a higher parallelism")
+    public void testScaleUp(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 2, 4);
+    }
+
+    /**
+     * Test connector source restart from a completed savepoint with a lower parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write a half part of them to this sink by the Flink Job with parallelism 4 at first. Then
+     * stop the job, restart the same job from the completed savepoint with a lower parallelism 2.
+     * After the job has been running, write the other part to the sink and compare the result.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink need to be equals to
+     * the generated test data. And the records in the sink will be compared to the test data by the
+     * different semantic. There's no requirement for record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting with a lower parallelism")
+    public void testScaleDown(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 4, 2);
+    }
+
+    private void restartFromSavepoint(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic,
+            final int beforeParallelism,
+            final int afterParallelism)
+            throws Exception {
+        // Step 1: Preparation
+        TestingSinkSettings sinkSettings = getTestingSinkSettings(semantic);
+        final StreamExecutionEnvironment execEnv =
+                testEnv.createExecutionEnvironment(
+                        TestEnvironmentSettings.builder()
+                                .setConnectorJarPaths(externalContext.getConnectorJarPaths())
+                                .build());
+        execEnv.setRestartStrategy(RestartStrategies.noRestart());
+
+        // Step 2: Generate test data
+        final List<T> testRecords = generateTestData(sinkSettings, externalContext);
+
+        // Step 3: Build and execute Flink job
+        int numBeforeSuccess = testRecords.size() / 2;
+        DataStreamSource<T> source =
+                execEnv.fromSource(
+                                new ListSource<>(
+                                        Boundedness.CONTINUOUS_UNBOUNDED,
+                                        testRecords,
+                                        numBeforeSuccess),
+                                WatermarkStrategy.noWatermarks(),
+                                "beforeRestartSource")
+                        .setParallelism(1);
+
+        source.returns(externalContext.getProducedType())
+                .sinkTo(tryCreateSink(externalContext, sinkSettings))
+                .name("Sink restart test")
+                .setParallelism(beforeParallelism);
+        CollectResultIterator<T> iterator = addCollectSink(source);

Review comment:
       It'd be more descriptive to have a comment here explaining why we need the collect sink.
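
   One possible shape for such a comment, inferred from how the iterator is used below (the exact reasoning should of course come from the author):

   ```java
   // The collect sink is attached to the source (not the external sink) so the test
   // can observe which records have already been emitted before triggering the
   // stop-with-savepoint, without polling the external system for that information.
   CollectResultIterator<T> iterator = addCollectSink(source);
   ```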

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/testsuites/SinkTestSuiteBase.java
##########
@@ -0,0 +1,527 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.testsuites;
+
+import org.apache.flink.annotation.Experimental;
+import org.apache.flink.api.common.JobID;
+import org.apache.flink.api.common.JobStatus;
+import org.apache.flink.api.common.eventtime.WatermarkStrategy;
+import org.apache.flink.api.common.restartstrategy.RestartStrategies;
+import org.apache.flink.api.common.time.Deadline;
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.api.connector.sink.Sink;
+import org.apache.flink.api.connector.source.Boundedness;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.connector.testframe.environment.TestEnvironment;
+import org.apache.flink.connector.testframe.environment.TestEnvironmentSettings;
+import org.apache.flink.connector.testframe.external.ExternalSystemDataReader;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkExternalContext;
+import org.apache.flink.connector.testframe.external.sink.TestingSinkSettings;
+import org.apache.flink.connector.testframe.junit.extensions.ConnectorTestingExtension;
+import org.apache.flink.connector.testframe.junit.extensions.TestCaseInvocationContextProvider;
+import org.apache.flink.connector.testframe.source.ListSource;
+import org.apache.flink.connector.testframe.utils.MetricQueryer;
+import org.apache.flink.connector.testframe.utils.TestDataMatchers;
+import org.apache.flink.core.execution.JobClient;
+import org.apache.flink.runtime.metrics.MetricNames;
+import org.apache.flink.streaming.api.CheckpointingMode;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.datastream.DataStreamSource;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.operators.collect.CollectResultIterator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperatorFactory;
+import org.apache.flink.streaming.api.operators.collect.CollectStreamSink;
+import org.apache.flink.util.TestLoggerExtension;
+
+import org.junit.jupiter.api.DisplayName;
+import org.junit.jupiter.api.TestInstance;
+import org.junit.jupiter.api.TestTemplate;
+import org.junit.jupiter.api.extension.ExtendWith;
+import org.opentest4j.TestAbortedException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.time.Duration;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.List;
+import java.util.UUID;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.ThreadLocalRandom;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+import static org.apache.flink.connector.testframe.utils.TestDataMatchers.matchesMultipleSplitTestData;
+import static org.apache.flink.connector.testframe.utils.TestUtils.appendResultData;
+import static org.apache.flink.connector.testframe.utils.TestUtils.doubleEquals;
+import static org.apache.flink.connector.testframe.utils.TestUtils.timeoutAssert;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.terminateJob;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForAllTaskRunning;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForJobStatus;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition;
+
+/**
+ * Base class for sink test suite.
+ *
+ * <p>All cases should have well-descriptive JavaDoc, including:
+ *
+ * <ul>
+ *   <li>What's the purpose of this case
+ *   <li>Simple description of how this case works
+ *   <li>Condition to fulfill in order to pass this case
+ *   <li>Requirement of running this case
+ * </ul>
+ */
+@ExtendWith({
+    ConnectorTestingExtension.class,
+    TestLoggerExtension.class,
+    TestCaseInvocationContextProvider.class
+})
+@TestInstance(TestInstance.Lifecycle.PER_CLASS)
+@Experimental
+public abstract class SinkTestSuiteBase<T extends Comparable<T>> {
+    private static final Logger LOG = LoggerFactory.getLogger(SinkTestSuiteBase.class);
+    static ExecutorService executorService =
+            Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors() * 2);
+
+    private final long jobExecuteTimeMs = 20000;
+
+    // ----------------------------- Basic test cases ---------------------------------
+
+    /**
+     * Test connector data stream sink.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write them to this sink by the Flink Job.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink need to be equals to
+     * the generated test data. And the records in the sink will be compared to the test data by the
+     * different semantic. There's no requirement for record order.
+     */
+    @TestTemplate
+    @DisplayName("Test data stream sink")
+    public void testSink(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        TestingSinkSettings sinkSettings = getTestingSinkSettings(semantic);
+        final List<T> testRecords = generateTestData(sinkSettings, externalContext);
+
+        // Build and execute Flink job
+        StreamExecutionEnvironment execEnv =
+                testEnv.createExecutionEnvironment(
+                        TestEnvironmentSettings.builder()
+                                .setConnectorJarPaths(externalContext.getConnectorJarPaths())
+                                .build());
+        execEnv.enableCheckpointing(50);
+        execEnv.fromCollection(testRecords)
+                .name("sourceInSinkTest")
+                .setParallelism(1)
+                .returns(externalContext.getProducedType())
+                .sinkTo(tryCreateSink(externalContext, sinkSettings))
+                .name("sinkInSinkTest");
+        final JobClient jobClient = execEnv.executeAsync("DataStream Sink Test");
+
+        waitForJobStatus(
+                jobClient,
+                Collections.singletonList(JobStatus.FINISHED),
+                Deadline.fromNow(Duration.ofSeconds(30)));
+
+        // Check test result
+        List<T> target = sort(testRecords);
+        checkResult(externalContext.createSinkDataReader(sinkSettings), target, semantic);
+    }
+
+    /**
+     * Test connector source restart from a completed savepoint with the same parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write a half part of them to this sink by the Flink Job with parallelism 2 at first. Then
+     * stop the job, restart the same job from the completed savepoint. After the job has been
+     * running, write the other part to the sink and compare the result.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink need to be equals to
+     * the generated test data. And the records in the sink will be compared to the test data by the
+     * different semantic. There's no requirement for record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting from a savepoint")
+    public void testStartFromSavepoint(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 2, 2);
+    }
+
+    /**
+     * Test connector source restart from a completed savepoint with a higher parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write a half part of them to this sink by the Flink Job with parallelism 2 at first. Then
+     * stop the job, restart the same job from the completed savepoint with a higher parallelism 4.
+     * After the job has been running, write the other part to the sink and compare the result.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink need to be equals to
+     * the generated test data. And the records in the sink will be compared to the test data by the
+     * different semantic. There's no requirement for record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting with a higher parallelism")
+    public void testScaleUp(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 2, 4);
+    }
+
+    /**
+     * Test connector source restart from a completed savepoint with a lower parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write a half part of them to this sink by the Flink Job with parallelism 4 at first. Then
+     * stop the job, restart the same job from the completed savepoint with a lower parallelism 2.
+     * After the job has been running, write the other part to the sink and compare the result.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink need to be equals to
+     * the generated test data. And the records in the sink will be compared to the test data by the
+     * different semantic. There's no requirement for record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting with a lower parallelism")
+    public void testScaleDown(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 4, 2);
+    }
+
+    private void restartFromSavepoint(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic,
+            final int beforeParallelism,
+            final int afterParallelism)
+            throws Exception {
+        // Step 1: Preparation
+        TestingSinkSettings sinkSettings = getTestingSinkSettings(semantic);
+        final StreamExecutionEnvironment execEnv =
+                testEnv.createExecutionEnvironment(
+                        TestEnvironmentSettings.builder()
+                                .setConnectorJarPaths(externalContext.getConnectorJarPaths())
+                                .build());
+        execEnv.setRestartStrategy(RestartStrategies.noRestart());
+
+        // Step 2: Generate test data
+        final List<T> testRecords = generateTestData(sinkSettings, externalContext);
+
+        // Step 3: Build and execute Flink job
+        int numBeforeSuccess = testRecords.size() / 2;
+        DataStreamSource<T> source =
+                execEnv.fromSource(
+                                new ListSource<>(
+                                        Boundedness.CONTINUOUS_UNBOUNDED,
+                                        testRecords,
+                                        numBeforeSuccess),
+                                WatermarkStrategy.noWatermarks(),
+                                "beforeRestartSource")
+                        .setParallelism(1);
+
+        source.returns(externalContext.getProducedType())
+                .sinkTo(tryCreateSink(externalContext, sinkSettings))
+                .name("Sink restart test")
+                .setParallelism(beforeParallelism);
+        CollectResultIterator<T> iterator = addCollectSink(source);
+        final JobClient jobClient = execEnv.executeAsync("Restart Test");
+        iterator.setJobClient(jobClient);
+
+        // Step 4: Wait for the expected result and stop Flink job with a savepoint
+        String savepointDir;
+        try {
+            final MetricQueryer queryRestClient =

Review comment:
       Is there any specific reason to use a metric-related util class in a savepoint case?

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/utils/TestUtils.java
##########
@@ -0,0 +1,140 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.utils;
+
+import org.apache.flink.connector.testframe.external.ExternalSystemDataReader;
+import org.apache.flink.streaming.api.CheckpointingMode;
+
+import java.io.File;
+import java.io.IOException;
+import java.math.BigDecimal;
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.nio.file.attribute.FileAttribute;
+import java.time.Duration;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Future;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.TimeoutException;
+import java.util.stream.Collectors;
+
+import static org.apache.flink.streaming.api.CheckpointingMode.AT_LEAST_ONCE;
+import static org.apache.flink.streaming.api.CheckpointingMode.EXACTLY_ONCE;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+/** Test utils. */
+public class TestUtils {

Review comment:
       It would be better to have JavaDoc on these util methods.
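
   For example, a sketch of what the doc comment on `timeoutAssert` could say (wording is just a suggestion):

   ```java
   /**
    * Runs the given assertion task on the provided executor and fails if it does not
    * complete within the given timeout.
    *
    * @param executorService executor used to run the task
    * @param task assertion logic to execute
    * @param time maximum time to wait for the task to finish
    * @param timeUnit unit of {@code time}
    */
   public static void timeoutAssert(
           ExecutorService executorService, Runnable task, long time, TimeUnit timeUnit) {
       // ... existing body unchanged
   }
   ```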

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/utils/MetricQueryer.java
##########
@@ -0,0 +1,163 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.utils;
+
+import org.apache.flink.api.common.JobID;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.connector.testframe.environment.TestEnvironment;
+import org.apache.flink.runtime.jobgraph.JobVertexID;
+import org.apache.flink.runtime.rest.RestClient;
+import org.apache.flink.runtime.rest.messages.EmptyRequestBody;
+import org.apache.flink.runtime.rest.messages.JobIDPathParameter;
+import org.apache.flink.runtime.rest.messages.JobMessageParameters;
+import org.apache.flink.runtime.rest.messages.JobVertexIdPathParameter;
+import org.apache.flink.runtime.rest.messages.MessagePathParameter;
+import org.apache.flink.runtime.rest.messages.job.JobDetailsHeaders;
+import org.apache.flink.runtime.rest.messages.job.JobDetailsInfo;
+import org.apache.flink.runtime.rest.messages.job.metrics.AggregatedMetricsResponseBody;
+import org.apache.flink.runtime.rest.messages.job.metrics.AggregatedSubtaskMetricsHeaders;
+import org.apache.flink.runtime.rest.messages.job.metrics.AggregatedSubtaskMetricsParameters;
+import org.apache.flink.runtime.rest.messages.job.metrics.MetricsFilterParameter;
+import org.apache.flink.util.ConfigurationException;
+import org.apache.flink.util.StringUtils;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.Iterator;
+import java.util.concurrent.Executor;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+import static org.assertj.core.api.AssertionsForClassTypes.assertThat;
+
+/** The queryer is used to get job metrics by rest API. */
+public class MetricQueryer {
+    private static final Logger LOG = LoggerFactory.getLogger(MetricQueryer.class);
+    private RestClient restClient;
+
+    public MetricQueryer(Configuration configuration, Executor executor)
+            throws ConfigurationException {
+        restClient = new RestClient(configuration, executor);
+    }
+
+    public JobDetailsInfo getJobDetails(TestEnvironment.Endpoint endpoint, JobID jobId)

Review comment:
       Why does the **metric** queryer have job-detail-related logic?

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/testsuites/SinkTestSuiteBase.java
##########
@@ -0,0 +1,527 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.testsuites;
+
+import org.apache.flink.annotation.Experimental;
+import org.apache.flink.api.common.JobID;
+import org.apache.flink.api.common.JobStatus;
+import org.apache.flink.api.common.eventtime.WatermarkStrategy;
+import org.apache.flink.api.common.restartstrategy.RestartStrategies;
+import org.apache.flink.api.common.time.Deadline;
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.api.connector.sink.Sink;
+import org.apache.flink.api.connector.source.Boundedness;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.connector.testframe.environment.TestEnvironment;
+import org.apache.flink.connector.testframe.environment.TestEnvironmentSettings;
+import org.apache.flink.connector.testframe.external.ExternalSystemDataReader;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkExternalContext;
+import org.apache.flink.connector.testframe.external.sink.TestingSinkSettings;
+import org.apache.flink.connector.testframe.junit.extensions.ConnectorTestingExtension;
+import org.apache.flink.connector.testframe.junit.extensions.TestCaseInvocationContextProvider;
+import org.apache.flink.connector.testframe.source.ListSource;
+import org.apache.flink.connector.testframe.utils.MetricQueryer;
+import org.apache.flink.connector.testframe.utils.TestDataMatchers;
+import org.apache.flink.core.execution.JobClient;
+import org.apache.flink.runtime.metrics.MetricNames;
+import org.apache.flink.streaming.api.CheckpointingMode;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.datastream.DataStreamSource;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.operators.collect.CollectResultIterator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperatorFactory;
+import org.apache.flink.streaming.api.operators.collect.CollectStreamSink;
+import org.apache.flink.util.TestLoggerExtension;
+
+import org.junit.jupiter.api.DisplayName;
+import org.junit.jupiter.api.TestInstance;
+import org.junit.jupiter.api.TestTemplate;
+import org.junit.jupiter.api.extension.ExtendWith;
+import org.opentest4j.TestAbortedException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.time.Duration;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.List;
+import java.util.UUID;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.ThreadLocalRandom;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+import static org.apache.flink.connector.testframe.utils.TestDataMatchers.matchesMultipleSplitTestData;
+import static org.apache.flink.connector.testframe.utils.TestUtils.appendResultData;
+import static org.apache.flink.connector.testframe.utils.TestUtils.doubleEquals;
+import static org.apache.flink.connector.testframe.utils.TestUtils.timeoutAssert;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.terminateJob;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForAllTaskRunning;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForJobStatus;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition;
+
+/**
+ * Base class for sink test suite.
+ *
+ * <p>All cases should have well-descriptive JavaDoc, including:
+ *
+ * <ul>
+ *   <li>What's the purpose of this case
+ *   <li>Simple description of how this case works
+ *   <li>Condition to fulfill in order to pass this case
+ *   <li>Requirement of running this case
+ * </ul>
+ */
+@ExtendWith({
+    ConnectorTestingExtension.class,
+    TestLoggerExtension.class,
+    TestCaseInvocationContextProvider.class
+})
+@TestInstance(TestInstance.Lifecycle.PER_CLASS)
+@Experimental
+public abstract class SinkTestSuiteBase<T extends Comparable<T>> {
+    private static final Logger LOG = LoggerFactory.getLogger(SinkTestSuiteBase.class);
+    static ExecutorService executorService =
+            Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors() * 2);
+
+    private final long jobExecuteTimeMs = 20000;
+
+    // ----------------------------- Basic test cases ---------------------------------
+
+    /**
+     * Test connector data stream sink.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write them to this sink by the Flink Job.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink need to be equals to
+     * the generated test data. And the records in the sink will be compared to the test data by the
+     * different semantic. There's no requirement for record order.
+     */
+    @TestTemplate
+    @DisplayName("Test data stream sink")
+    public void testSink(

Review comment:
       The name of the test is quite ambiguous. What about `testBasicSink`?
   
   Also, I think we can make this case as simple as possible. What about disabling checkpointing and setting the sink's parallelism to 1? A rough sketch is below.
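
   ```java
   // Hedged sketch of the simplified case only; whether checkpointing can really be
   // disabled for the EXACTLY_ONCE semantic still needs to be confirmed.
   execEnv.fromCollection(testRecords)
           .name("sourceInSinkTest")
           .setParallelism(1)
           .returns(externalContext.getProducedType())
           .sinkTo(tryCreateSink(externalContext, sinkSettings))
           .setParallelism(1)
           .name("sinkInSinkTest");
   final JobClient jobClient = execEnv.executeAsync("DataStream Sink Test");
   ```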

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/testsuites/SourceTestSuiteBase.java
##########
@@ -400,6 +414,36 @@ protected JobClient submitJob(StreamExecutionEnvironment env, String jobName) th
                 stream.getExecutionEnvironment().getCheckpointConfig());
     }
 
+    /**
+     * Compare the test data with the result.
+     *
+     * <p>If the source is bounded, limit should be null.
+     *
+     * @param resultIterator the data read from the job
+     * @param testData the test data
+     * @param semantic the supported semantic, see {@link CheckpointingMode}
+     * @param limit expected number of the data to read from the job
+     */
+    private void checkResultBySemantic(
+            CloseableIterator<T> resultIterator,
+            List<List<T>> testData,
+            CheckpointingMode semantic,
+            Integer limit) {
+        if (limit != null) {
+            timeoutAssert(

Review comment:
       What about using `Assertions.assertThat(CompletableFuture).succeedsWithin(Duration)`? But this requires changing the utils in `TestDataMatchers` to a future-based style.
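
   Roughly like this (a sketch only; it assumes a reasonably recent AssertJ and that the matching logic is wrapped into a future — `matcherTask` is a hypothetical `Runnable` holding the current assertion):

   ```java
   CompletableFuture<Void> matching = CompletableFuture.runAsync(matcherTask, executorService);
   Assertions.assertThat(matching).succeedsWithin(Duration.ofSeconds(30));
   ```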

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/source/ListSource.java
##########
@@ -0,0 +1,92 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.source;
+
+import org.apache.flink.api.connector.source.Boundedness;
+import org.apache.flink.api.connector.source.Source;
+import org.apache.flink.api.connector.source.SourceReader;
+import org.apache.flink.api.connector.source.SourceReaderContext;
+import org.apache.flink.api.connector.source.SplitEnumerator;
+import org.apache.flink.api.connector.source.SplitEnumeratorContext;
+import org.apache.flink.connector.testframe.source.enumerator.MockEnumState;
+import org.apache.flink.connector.testframe.source.enumerator.MockEnumStateSerializer;
+import org.apache.flink.connector.testframe.source.enumerator.MockEnumerator;
+import org.apache.flink.connector.testframe.source.split.ListSplit;
+import org.apache.flink.connector.testframe.source.split.ListSplitSerializer;
+import org.apache.flink.core.io.SimpleVersionedSerializer;
+
+import java.util.List;
+
+/**
+ * The source reads data from a list and stops reading at the fixed position. The source will wait
+ * until the checkpoint or savepoint triggers.
+ *
+ * <p>Note that this source must be of parallelism 1.
+ */
+public class ListSource<OUT> implements Source<OUT, ListSplit, MockEnumState> {

Review comment:
       What about using `FromElementsSource` to align with `FromElementsFunction`? And we can create another PR to replace `FromElementsFunction` with the Source API in the future.
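
   With the rename, the call site in `SinkTestSuiteBase` would then read roughly as follows (same constructor arguments as the current `ListSource`):

   ```java
   DataStreamSource<T> source =
           execEnv.fromSource(
                           new FromElementsSource<>(
                                   Boundedness.CONTINUOUS_UNBOUNDED, testRecords, numBeforeSuccess),
                           WatermarkStrategy.noWatermarks(),
                           "beforeRestartSource")
                   .setParallelism(1);
   ```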

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/testsuites/SinkTestSuiteBase.java
##########
@@ -0,0 +1,527 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.testsuites;
+
+import org.apache.flink.annotation.Experimental;
+import org.apache.flink.api.common.JobID;
+import org.apache.flink.api.common.JobStatus;
+import org.apache.flink.api.common.eventtime.WatermarkStrategy;
+import org.apache.flink.api.common.restartstrategy.RestartStrategies;
+import org.apache.flink.api.common.time.Deadline;
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.api.connector.sink.Sink;
+import org.apache.flink.api.connector.source.Boundedness;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.connector.testframe.environment.TestEnvironment;
+import org.apache.flink.connector.testframe.environment.TestEnvironmentSettings;
+import org.apache.flink.connector.testframe.external.ExternalSystemDataReader;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkExternalContext;
+import org.apache.flink.connector.testframe.external.sink.TestingSinkSettings;
+import org.apache.flink.connector.testframe.junit.extensions.ConnectorTestingExtension;
+import org.apache.flink.connector.testframe.junit.extensions.TestCaseInvocationContextProvider;
+import org.apache.flink.connector.testframe.source.ListSource;
+import org.apache.flink.connector.testframe.utils.MetricQueryer;
+import org.apache.flink.connector.testframe.utils.TestDataMatchers;
+import org.apache.flink.core.execution.JobClient;
+import org.apache.flink.runtime.metrics.MetricNames;
+import org.apache.flink.streaming.api.CheckpointingMode;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.datastream.DataStreamSource;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.operators.collect.CollectResultIterator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperatorFactory;
+import org.apache.flink.streaming.api.operators.collect.CollectStreamSink;
+import org.apache.flink.util.TestLoggerExtension;
+
+import org.junit.jupiter.api.DisplayName;
+import org.junit.jupiter.api.TestInstance;
+import org.junit.jupiter.api.TestTemplate;
+import org.junit.jupiter.api.extension.ExtendWith;
+import org.opentest4j.TestAbortedException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.time.Duration;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.List;
+import java.util.UUID;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.ThreadLocalRandom;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+import static org.apache.flink.connector.testframe.utils.TestDataMatchers.matchesMultipleSplitTestData;
+import static org.apache.flink.connector.testframe.utils.TestUtils.appendResultData;
+import static org.apache.flink.connector.testframe.utils.TestUtils.doubleEquals;
+import static org.apache.flink.connector.testframe.utils.TestUtils.timeoutAssert;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.terminateJob;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForAllTaskRunning;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForJobStatus;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition;
+
+/**
+ * Base class for sink test suite.
+ *
+ * <p>All cases should have well-descriptive JavaDoc, including:
+ *
+ * <ul>
+ *   <li>What's the purpose of this case
+ *   <li>Simple description of how this case works
+ *   <li>Condition to fulfill in order to pass this case
+ *   <li>Requirement of running this case
+ * </ul>
+ */
+@ExtendWith({
+    ConnectorTestingExtension.class,
+    TestLoggerExtension.class,
+    TestCaseInvocationContextProvider.class
+})
+@TestInstance(TestInstance.Lifecycle.PER_CLASS)
+@Experimental
+public abstract class SinkTestSuiteBase<T extends Comparable<T>> {

Review comment:
       Is there any reason that T should be `Comparable`? 
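
   If the bound is only there so the expected records can be sorted before comparison, one hedged alternative is to drop it and let the concrete suite supply the ordering (names below are illustrative, not part of the PR):

   ```java
   import java.util.ArrayList;
   import java.util.Comparator;
   import java.util.List;

   public abstract class SinkTestSuiteBase<T> {

       /** Hypothetical hook: concrete suites define how expected records are ordered. */
       protected abstract Comparator<T> recordComparator();

       private List<T> sort(List<T> records) {
           List<T> sorted = new ArrayList<>(records);
           sorted.sort(recordComparator());
           return sorted;
       }
   }
   ```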

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/testsuites/SinkTestSuiteBase.java
##########
@@ -0,0 +1,527 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.testsuites;
+
+import org.apache.flink.annotation.Experimental;
+import org.apache.flink.api.common.JobID;
+import org.apache.flink.api.common.JobStatus;
+import org.apache.flink.api.common.eventtime.WatermarkStrategy;
+import org.apache.flink.api.common.restartstrategy.RestartStrategies;
+import org.apache.flink.api.common.time.Deadline;
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.api.connector.sink.Sink;
+import org.apache.flink.api.connector.source.Boundedness;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.connector.testframe.environment.TestEnvironment;
+import org.apache.flink.connector.testframe.environment.TestEnvironmentSettings;
+import org.apache.flink.connector.testframe.external.ExternalSystemDataReader;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkExternalContext;
+import org.apache.flink.connector.testframe.external.sink.TestingSinkSettings;
+import org.apache.flink.connector.testframe.junit.extensions.ConnectorTestingExtension;
+import org.apache.flink.connector.testframe.junit.extensions.TestCaseInvocationContextProvider;
+import org.apache.flink.connector.testframe.source.ListSource;
+import org.apache.flink.connector.testframe.utils.MetricQueryer;
+import org.apache.flink.connector.testframe.utils.TestDataMatchers;
+import org.apache.flink.core.execution.JobClient;
+import org.apache.flink.runtime.metrics.MetricNames;
+import org.apache.flink.streaming.api.CheckpointingMode;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.datastream.DataStreamSource;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.operators.collect.CollectResultIterator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperatorFactory;
+import org.apache.flink.streaming.api.operators.collect.CollectStreamSink;
+import org.apache.flink.util.TestLoggerExtension;
+
+import org.junit.jupiter.api.DisplayName;
+import org.junit.jupiter.api.TestInstance;
+import org.junit.jupiter.api.TestTemplate;
+import org.junit.jupiter.api.extension.ExtendWith;
+import org.opentest4j.TestAbortedException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.time.Duration;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.List;
+import java.util.UUID;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.ThreadLocalRandom;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+import static org.apache.flink.connector.testframe.utils.TestDataMatchers.matchesMultipleSplitTestData;
+import static org.apache.flink.connector.testframe.utils.TestUtils.appendResultData;
+import static org.apache.flink.connector.testframe.utils.TestUtils.doubleEquals;
+import static org.apache.flink.connector.testframe.utils.TestUtils.timeoutAssert;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.terminateJob;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForAllTaskRunning;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForJobStatus;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition;
+
+/**
+ * Base class for sink test suite.
+ *
+ * <p>All cases should have well-descriptive JavaDoc, including:
+ *
+ * <ul>
+ *   <li>What's the purpose of this case
+ *   <li>Simple description of how this case works
+ *   <li>Condition to fulfill in order to pass this case
+ *   <li>Requirement of running this case
+ * </ul>
+ */
+@ExtendWith({
+    ConnectorTestingExtension.class,
+    TestLoggerExtension.class,
+    TestCaseInvocationContextProvider.class
+})
+@TestInstance(TestInstance.Lifecycle.PER_CLASS)
+@Experimental
+public abstract class SinkTestSuiteBase<T extends Comparable<T>> {
+    private static final Logger LOG = LoggerFactory.getLogger(SinkTestSuiteBase.class);
+    static ExecutorService executorService =
+            Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors() * 2);
+
+    private final long jobExecuteTimeMs = 20000;

Review comment:
       Use `Duration` and define it as `static`.
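
   A sketch (the constant name is just a suggestion; the original 20000 ms equals 20 seconds):

   ```java
   private static final Duration JOB_EXECUTE_TIME = Duration.ofSeconds(20);
   ```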

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/utils/TestUtils.java
##########
@@ -0,0 +1,140 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.utils;
+
+import org.apache.flink.connector.testframe.external.ExternalSystemDataReader;
+import org.apache.flink.streaming.api.CheckpointingMode;
+
+import java.io.File;
+import java.io.IOException;
+import java.math.BigDecimal;
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.nio.file.attribute.FileAttribute;
+import java.time.Duration;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Future;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.TimeoutException;
+import java.util.stream.Collectors;
+
+import static org.apache.flink.streaming.api.CheckpointingMode.AT_LEAST_ONCE;
+import static org.apache.flink.streaming.api.CheckpointingMode.EXACTLY_ONCE;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+/** Test utils. */
+public class TestUtils {
+    public static File newFolder(Path path) throws IOException {
+        Path tempPath = Files.createTempDirectory(path, "testing-framework", new FileAttribute[0]);
+        return tempPath.toFile();
+    }
+
+    public static <T> List<T> appendResultData(
+            List<T> result,
+            ExternalSystemDataReader<T> reader,
+            List<T> expected,
+            int retryTimes,
+            CheckpointingMode semantic) {
+        long timeoutMs = 1000L;
+        int retryIndex = 0;
+        if (EXACTLY_ONCE.equals(semantic)) {
+            while (retryIndex++ < retryTimes && result.size() < expected.size()) {
+                result.addAll(reader.poll(Duration.ofMillis(timeoutMs)));
+            }
+            return result;
+        } else if (AT_LEAST_ONCE.equals(semantic)) {
+            while (retryIndex++ < retryTimes && !containSameVal(expected, result, semantic)) {
+                result.addAll(reader.poll(Duration.ofMillis(timeoutMs)));
+            }
+            return result;
+        }
+        throw new IllegalStateException(
+                String.format("%s delivery guarantee doesn't support test.", semantic.name()));
+    }
+
+    public static <T> boolean containSameVal(
+            List<T> expected, List<T> result, CheckpointingMode semantic) {
+        checkNotNull(expected);
+        checkNotNull(result);
+
+        Set<Integer> matchedIndex = new HashSet<>();
+        if (EXACTLY_ONCE.equals(semantic) && expected.size() != result.size()) {
+            return false;
+        }
+        for (T rowData0 : expected) {
+            int before = matchedIndex.size();
+            for (int i = 0; i < result.size(); i++) {
+                if (matchedIndex.contains(i)) {
+                    continue;
+                }
+                if (rowData0.equals(result.get(i))) {
+                    matchedIndex.add(i);
+                    break;
+                }
+            }
+            if (before == matchedIndex.size()) {
+                return false;
+            }
+        }
+        return true;
+    }
+
+    public static void timeoutAssert(
+            ExecutorService executorService, Runnable task, long time, TimeUnit timeUnit) {
+        Future future = executorService.submit(task);
+        try {
+            future.get(time, timeUnit);
+        } catch (InterruptedException e) {
+            throw new RuntimeException("Test failed to get the result.", e);
+        } catch (ExecutionException e) {
+            throw new RuntimeException("Test failed with some exception.", e);
+        } catch (TimeoutException e) {
+            throw new RuntimeException(
+                    String.format("Test timeout after %d %s.", time, timeUnit.name()), e);
+        } finally {
+            future.cancel(true);
+        }
+    }
+
+    public static void deletePath(Path path) throws IOException {

Review comment:
       Apache Commons has a helper function:
   
   ```java
   import org.apache.commons.io.FileUtils;
   FileUtils.deleteDirectory(File);
   ```

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/testsuites/SinkTestSuiteBase.java
##########
@@ -0,0 +1,527 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.testsuites;
+
+import org.apache.flink.annotation.Experimental;
+import org.apache.flink.api.common.JobID;
+import org.apache.flink.api.common.JobStatus;
+import org.apache.flink.api.common.eventtime.WatermarkStrategy;
+import org.apache.flink.api.common.restartstrategy.RestartStrategies;
+import org.apache.flink.api.common.time.Deadline;
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.api.connector.sink.Sink;
+import org.apache.flink.api.connector.source.Boundedness;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.connector.testframe.environment.TestEnvironment;
+import org.apache.flink.connector.testframe.environment.TestEnvironmentSettings;
+import org.apache.flink.connector.testframe.external.ExternalSystemDataReader;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkExternalContext;
+import org.apache.flink.connector.testframe.external.sink.TestingSinkSettings;
+import org.apache.flink.connector.testframe.junit.extensions.ConnectorTestingExtension;
+import org.apache.flink.connector.testframe.junit.extensions.TestCaseInvocationContextProvider;
+import org.apache.flink.connector.testframe.source.ListSource;
+import org.apache.flink.connector.testframe.utils.MetricQueryer;
+import org.apache.flink.connector.testframe.utils.TestDataMatchers;
+import org.apache.flink.core.execution.JobClient;
+import org.apache.flink.runtime.metrics.MetricNames;
+import org.apache.flink.streaming.api.CheckpointingMode;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.datastream.DataStreamSource;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.operators.collect.CollectResultIterator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperatorFactory;
+import org.apache.flink.streaming.api.operators.collect.CollectStreamSink;
+import org.apache.flink.util.TestLoggerExtension;
+
+import org.junit.jupiter.api.DisplayName;
+import org.junit.jupiter.api.TestInstance;
+import org.junit.jupiter.api.TestTemplate;
+import org.junit.jupiter.api.extension.ExtendWith;
+import org.opentest4j.TestAbortedException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.time.Duration;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.List;
+import java.util.UUID;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.ThreadLocalRandom;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+import static org.apache.flink.connector.testframe.utils.TestDataMatchers.matchesMultipleSplitTestData;
+import static org.apache.flink.connector.testframe.utils.TestUtils.appendResultData;
+import static org.apache.flink.connector.testframe.utils.TestUtils.doubleEquals;
+import static org.apache.flink.connector.testframe.utils.TestUtils.timeoutAssert;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.terminateJob;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForAllTaskRunning;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForJobStatus;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition;
+
+/**
+ * Base class for sink test suite.
+ *
+ * <p>All cases should have well-descriptive JavaDoc, including:
+ *
+ * <ul>
+ *   <li>What's the purpose of this case
+ *   <li>Simple description of how this case works
+ *   <li>Condition to fulfill in order to pass this case
+ *   <li>Requirement of running this case
+ * </ul>
+ */
+@ExtendWith({
+    ConnectorTestingExtension.class,
+    TestLoggerExtension.class,
+    TestCaseInvocationContextProvider.class
+})
+@TestInstance(TestInstance.Lifecycle.PER_CLASS)
+@Experimental
+public abstract class SinkTestSuiteBase<T extends Comparable<T>> {
+    private static final Logger LOG = LoggerFactory.getLogger(SinkTestSuiteBase.class);
+    static ExecutorService executorService =
+            Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors() * 2);
+
+    private final long jobExecuteTimeMs = 20000;
+
+    // ----------------------------- Basic test cases ---------------------------------
+
+    /**
+     * Test connector data stream sink.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write them to this sink by the Flink Job.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink need to be equals to
+     * the generated test data. And the records in the sink will be compared to the test data by the
+     * different semantic. There's no requirement for record order.
+     */
+    @TestTemplate
+    @DisplayName("Test data stream sink")
+    public void testSink(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        TestingSinkSettings sinkSettings = getTestingSinkSettings(semantic);
+        final List<T> testRecords = generateTestData(sinkSettings, externalContext);
+
+        // Build and execute Flink job
+        StreamExecutionEnvironment execEnv =
+                testEnv.createExecutionEnvironment(
+                        TestEnvironmentSettings.builder()
+                                .setConnectorJarPaths(externalContext.getConnectorJarPaths())
+                                .build());
+        execEnv.enableCheckpointing(50);
+        execEnv.fromCollection(testRecords)
+                .name("sourceInSinkTest")
+                .setParallelism(1)
+                .returns(externalContext.getProducedType())
+                .sinkTo(tryCreateSink(externalContext, sinkSettings))
+                .name("sinkInSinkTest");
+        final JobClient jobClient = execEnv.executeAsync("DataStream Sink Test");
+
+        waitForJobStatus(
+                jobClient,
+                Collections.singletonList(JobStatus.FINISHED),
+                Deadline.fromNow(Duration.ofSeconds(30)));
+
+        // Check test result
+        List<T> target = sort(testRecords);
+        checkResult(externalContext.createSinkDataReader(sinkSettings), target, semantic);
+    }
+
+    /**
+     * Test connector sink restart from a completed savepoint with the same parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and first write half of it to this sink via a Flink job with parallelism 2. Then stop the
+     * job with a savepoint and restart the same job from that savepoint. Once the restarted job
+     * is running, write the other half to the sink and compare the result.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink needs to equal the
+     * number of generated test records, and the records in the sink are compared to the test data
+     * according to the configured delivery semantic. There is no requirement on record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting from a savepoint")
+    public void testStartFromSavepoint(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 2, 2);
+    }
+
+    /**
+     * Test connector sink restart from a completed savepoint with a higher parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and first write half of it to this sink via a Flink job with parallelism 2. Then stop the
+     * job with a savepoint and restart the same job from that savepoint with a higher parallelism
+     * of 4. Once the restarted job is running, write the other half to the sink and compare the
+     * result.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink needs to equal the
+     * number of generated test records, and the records in the sink are compared to the test data
+     * according to the configured delivery semantic. There is no requirement on record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting with a higher parallelism")
+    public void testScaleUp(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 2, 4);
+    }
+
+    /**
+     * Test connector sink restart from a completed savepoint with a lower parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and first write half of it to this sink via a Flink job with parallelism 4. Then stop the
+     * job with a savepoint and restart the same job from that savepoint with a lower parallelism
+     * of 2. Once the restarted job is running, write the other half to the sink and compare the
+     * result.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink needs to equal the
+     * number of generated test records, and the records in the sink are compared to the test data
+     * according to the configured delivery semantic. There is no requirement on record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting with a lower parallelism")
+    public void testScaleDown(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 4, 2);
+    }
+
+    private void restartFromSavepoint(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic,
+            final int beforeParallelism,
+            final int afterParallelism)
+            throws Exception {
+        // Step 1: Preparation
+        TestingSinkSettings sinkSettings = getTestingSinkSettings(semantic);
+        final StreamExecutionEnvironment execEnv =
+                testEnv.createExecutionEnvironment(
+                        TestEnvironmentSettings.builder()
+                                .setConnectorJarPaths(externalContext.getConnectorJarPaths())
+                                .build());
+        execEnv.setRestartStrategy(RestartStrategies.noRestart());
+
+        // Step 2: Generate test data
+        final List<T> testRecords = generateTestData(sinkSettings, externalContext);
+
+        // Step 3: Build and execute Flink job
+        int numBeforeSuccess = testRecords.size() / 2;
+        DataStreamSource<T> source =
+                execEnv.fromSource(
+                                new ListSource<>(
+                                        Boundedness.CONTINUOUS_UNBOUNDED,
+                                        testRecords,
+                                        numBeforeSuccess),
+                                WatermarkStrategy.noWatermarks(),
+                                "beforeRestartSource")
+                        .setParallelism(1);
+
+        source.returns(externalContext.getProducedType())
+                .sinkTo(tryCreateSink(externalContext, sinkSettings))
+                .name("Sink restart test")
+                .setParallelism(beforeParallelism);
+        CollectResultIterator<T> iterator = addCollectSink(source);
+        final JobClient jobClient = execEnv.executeAsync("Restart Test");
+        iterator.setJobClient(jobClient);
+
+        // Step 4: Wait for the expected result and stop Flink job with a savepoint
+        String savepointDir;
+        try {
+            final MetricQueryer queryRestClient =
+                    new MetricQueryer(new Configuration(), executorService);
+            waitForAllTaskRunning(
+                    () ->
+                            queryRestClient.getJobDetails(
+                                    testEnv.getRestEndpoint(), jobClient.getJobID()),
+                    Deadline.fromNow(Duration.ofSeconds(30)));
+
+            timeoutAssert(
+                    executorService,
+                    () -> {
+                        int count = 0;
+                        while (count < numBeforeSuccess && iterator.hasNext()) {
+                            iterator.next();
+                            count++;
+                        }
+                        if (count < numBeforeSuccess) {
+                            throw new IllegalStateException(
+                                    String.format("Fail to get %d records.", numBeforeSuccess));
+                        }
+                    },
+                    30,
+                    TimeUnit.SECONDS);
+            savepointDir =
+                    jobClient
+                            .stopWithSavepoint(true, testEnv.getCheckpointUri())
+                            .get(30, TimeUnit.SECONDS);
+            waitForJobStatus(
+                    jobClient,
+                    Collections.singletonList(JobStatus.FINISHED),
+                    Deadline.fromNow(Duration.ofSeconds(30)));
+        } catch (Exception e) {
+            killJob(jobClient);
+            throw e;
+        }
+
+        List<T> target = sort(testRecords.subList(0, numBeforeSuccess));

Review comment:
       We can wrap `sort` into `checkResult`, so call sites don't have to sort the expected data themselves.
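   A minimal sketch of what that could look like, assuming `checkResult` keeps its current
   parameters and simply sorts the expected records itself (reusing the helpers already in the
   quoted diff), so callers can pass the raw `testRecords` directly:

   ```java
   private void checkResult(
           ExternalSystemDataReader<T> reader,
           List<T> testData,
           CheckpointingMode semantic,
           boolean testDataAllInResult)
           throws Exception {
       // Sort the expected records here instead of at every call site.
       final List<T> expected = testData.stream().sorted().collect(Collectors.toList());
       final ArrayList<T> result = new ArrayList<>();
       final TestDataMatchers.MultipleSplitDataMatcher<T> matcher =
               matchesMultipleSplitTestData(
                       Arrays.asList(expected), semantic, testDataAllInResult);
       waitUntilCondition(
               () -> {
                   appendResultData(result, reader, expected, 30, semantic);
                   return matcher.matches(sort(result).iterator());
               },
               Deadline.fromNow(Duration.ofMillis(jobExecuteTimeMs)));
   }
   ```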

##########
File path: flink-end-to-end-tests/flink-end-to-end-tests-common-kafka/pom.xml
##########
@@ -224,6 +223,15 @@ under the License.
 							<type>jar</type>
 							<outputDirectory>${project.build.directory}/dependencies</outputDirectory>
 						</artifactItem>
+						<artifactItem>

Review comment:
       We can remove this snippet if we move `ListSource` (`FromElementsSource`) to `flink-streaming-java` 

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/utils/TestUtils.java
##########
@@ -0,0 +1,140 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.utils;
+
+import org.apache.flink.connector.testframe.external.ExternalSystemDataReader;
+import org.apache.flink.streaming.api.CheckpointingMode;
+
+import java.io.File;
+import java.io.IOException;
+import java.math.BigDecimal;
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.nio.file.attribute.FileAttribute;
+import java.time.Duration;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Future;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.TimeoutException;
+import java.util.stream.Collectors;
+
+import static org.apache.flink.streaming.api.CheckpointingMode.AT_LEAST_ONCE;
+import static org.apache.flink.streaming.api.CheckpointingMode.EXACTLY_ONCE;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+/** Test utils. */
+public class TestUtils {
+    public static File newFolder(Path path) throws IOException {
+        Path tempPath = Files.createTempDirectory(path, "testing-framework", new FileAttribute[0]);
+        return tempPath.toFile();
+    }
+
+    public static <T> List<T> appendResultData(
+            List<T> result,
+            ExternalSystemDataReader<T> reader,
+            List<T> expected,
+            int retryTimes,
+            CheckpointingMode semantic) {
+        long timeoutMs = 1000L;
+        int retryIndex = 0;
+        if (EXACTLY_ONCE.equals(semantic)) {
+            while (retryIndex++ < retryTimes && result.size() < expected.size()) {
+                result.addAll(reader.poll(Duration.ofMillis(timeoutMs)));
+            }
+            return result;
+        } else if (AT_LEAST_ONCE.equals(semantic)) {
+            while (retryIndex++ < retryTimes && !containSameVal(expected, result, semantic)) {
+                result.addAll(reader.poll(Duration.ofMillis(timeoutMs)));
+            }
+            return result;
+        }
+        throw new IllegalStateException(
+                String.format("%s delivery guarantee doesn't support test.", semantic.name()));
+    }
+
+    public static <T> boolean containSameVal(
+            List<T> expected, List<T> result, CheckpointingMode semantic) {
+        checkNotNull(expected);
+        checkNotNull(result);
+
+        Set<Integer> matchedIndex = new HashSet<>();
+        if (EXACTLY_ONCE.equals(semantic) && expected.size() != result.size()) {
+            return false;
+        }
+        for (T rowData0 : expected) {
+            int before = matchedIndex.size();
+            for (int i = 0; i < result.size(); i++) {
+                if (matchedIndex.contains(i)) {
+                    continue;
+                }
+                if (rowData0.equals(result.get(i))) {
+                    matchedIndex.add(i);
+                    break;
+                }
+            }
+            if (before == matchedIndex.size()) {
+                return false;
+            }
+        }
+        return true;
+    }
+
+    public static void timeoutAssert(
+            ExecutorService executorService, Runnable task, long time, TimeUnit timeUnit) {
+        Future future = executorService.submit(task);
+        try {
+            future.get(time, timeUnit);
+        } catch (InterruptedException e) {
+            throw new RuntimeException("Test failed to get the result.", e);
+        } catch (ExecutionException e) {
+            throw new RuntimeException("Test failed with some exception.", e);
+        } catch (TimeoutException e) {
+            throw new RuntimeException(
+                    String.format("Test timeout after %d %s.", time, timeUnit.name()), e);
+        } finally {
+            future.cancel(true);
+        }
+    }
+
+    public static void deletePath(Path path) throws IOException {
+        List<File> files =
+                Files.walk(path)
+                        .filter(p -> p != path)
+                        .map(Path::toFile)
+                        .collect(Collectors.toList());
+        for (File file : files) {
+            if (file.isDirectory()) {
+                deletePath(file.toPath());
+            } else {
+                file.delete();
+            }
+        }
+        Files.deleteIfExists(path);
+    }
+
+    public static boolean doubleEquals(double d0, double d1) {

Review comment:
       Also in Apache Commons:
   
   ```java
   import org.apache.commons.math3.util.Precision;
   Precision.equals(d0, d1);
   ```
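   If an explicit tolerance reads clearer for this record-count comparison, commons-math3 also
   offers an epsilon overload (assuming commons-math3 is on the test classpath):

   ```java
   Precision.equals(allRecordSize, sumNumRecordsOut, 1e-9);
   ```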

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/testsuites/SinkTestSuiteBase.java
##########
@@ -0,0 +1,527 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.testsuites;
+
+import org.apache.flink.annotation.Experimental;
+import org.apache.flink.api.common.JobID;
+import org.apache.flink.api.common.JobStatus;
+import org.apache.flink.api.common.eventtime.WatermarkStrategy;
+import org.apache.flink.api.common.restartstrategy.RestartStrategies;
+import org.apache.flink.api.common.time.Deadline;
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.api.connector.sink.Sink;
+import org.apache.flink.api.connector.source.Boundedness;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.connector.testframe.environment.TestEnvironment;
+import org.apache.flink.connector.testframe.environment.TestEnvironmentSettings;
+import org.apache.flink.connector.testframe.external.ExternalSystemDataReader;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkExternalContext;
+import org.apache.flink.connector.testframe.external.sink.TestingSinkSettings;
+import org.apache.flink.connector.testframe.junit.extensions.ConnectorTestingExtension;
+import org.apache.flink.connector.testframe.junit.extensions.TestCaseInvocationContextProvider;
+import org.apache.flink.connector.testframe.source.ListSource;
+import org.apache.flink.connector.testframe.utils.MetricQueryer;
+import org.apache.flink.connector.testframe.utils.TestDataMatchers;
+import org.apache.flink.core.execution.JobClient;
+import org.apache.flink.runtime.metrics.MetricNames;
+import org.apache.flink.streaming.api.CheckpointingMode;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.datastream.DataStreamSource;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.operators.collect.CollectResultIterator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperatorFactory;
+import org.apache.flink.streaming.api.operators.collect.CollectStreamSink;
+import org.apache.flink.util.TestLoggerExtension;
+
+import org.junit.jupiter.api.DisplayName;
+import org.junit.jupiter.api.TestInstance;
+import org.junit.jupiter.api.TestTemplate;
+import org.junit.jupiter.api.extension.ExtendWith;
+import org.opentest4j.TestAbortedException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.time.Duration;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.List;
+import java.util.UUID;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.ThreadLocalRandom;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+import static org.apache.flink.connector.testframe.utils.TestDataMatchers.matchesMultipleSplitTestData;
+import static org.apache.flink.connector.testframe.utils.TestUtils.appendResultData;
+import static org.apache.flink.connector.testframe.utils.TestUtils.doubleEquals;
+import static org.apache.flink.connector.testframe.utils.TestUtils.timeoutAssert;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.terminateJob;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForAllTaskRunning;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForJobStatus;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition;
+
+/**
+ * Base class for sink test suite.
+ *
+ * <p>All cases should have descriptive JavaDoc, including:
+ *
+ * <ul>
+ *   <li>What's the purpose of this case
+ *   <li>Simple description of how this case works
+ *   <li>Condition to fulfill in order to pass this case
+ *   <li>Requirement of running this case
+ * </ul>
+ */
+@ExtendWith({
+    ConnectorTestingExtension.class,
+    TestLoggerExtension.class,
+    TestCaseInvocationContextProvider.class
+})
+@TestInstance(TestInstance.Lifecycle.PER_CLASS)
+@Experimental
+public abstract class SinkTestSuiteBase<T extends Comparable<T>> {
+    private static final Logger LOG = LoggerFactory.getLogger(SinkTestSuiteBase.class);
+    static ExecutorService executorService =
+            Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors() * 2);
+
+    private final long jobExecuteTimeMs = 20000;
+
+    // ----------------------------- Basic test cases ---------------------------------
+
+    /**
+     * Test connector data stream sink.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write it to this sink via a Flink job.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink needs to equal the
+     * number of generated test records, and the records in the sink are compared to the test data
+     * according to the configured delivery semantic. There is no requirement on record order.
+     */
+    @TestTemplate
+    @DisplayName("Test data stream sink")
+    public void testSink(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        TestingSinkSettings sinkSettings = getTestingSinkSettings(semantic);
+        final List<T> testRecords = generateTestData(sinkSettings, externalContext);
+
+        // Build and execute Flink job
+        StreamExecutionEnvironment execEnv =
+                testEnv.createExecutionEnvironment(
+                        TestEnvironmentSettings.builder()
+                                .setConnectorJarPaths(externalContext.getConnectorJarPaths())
+                                .build());
+        execEnv.enableCheckpointing(50);
+        execEnv.fromCollection(testRecords)
+                .name("sourceInSinkTest")
+                .setParallelism(1)
+                .returns(externalContext.getProducedType())
+                .sinkTo(tryCreateSink(externalContext, sinkSettings))
+                .name("sinkInSinkTest");
+        final JobClient jobClient = execEnv.executeAsync("DataStream Sink Test");
+
+        waitForJobStatus(
+                jobClient,
+                Collections.singletonList(JobStatus.FINISHED),
+                Deadline.fromNow(Duration.ofSeconds(30)));
+
+        // Check test result
+        List<T> target = sort(testRecords);
+        checkResult(externalContext.createSinkDataReader(sinkSettings), target, semantic);
+    }
+
+    /**
+     * Test connector sink restart from a completed savepoint with the same parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and first write half of it to this sink via a Flink job with parallelism 2. Then stop the
+     * job with a savepoint and restart the same job from that savepoint. Once the restarted job
+     * is running, write the other half to the sink and compare the result.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink needs to equal the
+     * number of generated test records, and the records in the sink are compared to the test data
+     * according to the configured delivery semantic. There is no requirement on record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting from a savepoint")
+    public void testStartFromSavepoint(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 2, 2);
+    }
+
+    /**
+     * Test connector sink restart from a completed savepoint with a higher parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and first write half of it to this sink via a Flink job with parallelism 2. Then stop the
+     * job with a savepoint and restart the same job from that savepoint with a higher parallelism
+     * of 4. Once the restarted job is running, write the other half to the sink and compare the
+     * result.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink needs to equal the
+     * number of generated test records, and the records in the sink are compared to the test data
+     * according to the configured delivery semantic. There is no requirement on record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting with a higher parallelism")
+    public void testScaleUp(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 2, 4);
+    }
+
+    /**
+     * Test connector sink restart from a completed savepoint with a lower parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and first write half of it to this sink via a Flink job with parallelism 4. Then stop the
+     * job with a savepoint and restart the same job from that savepoint with a lower parallelism
+     * of 2. Once the restarted job is running, write the other half to the sink and compare the
+     * result.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink needs to equal the
+     * number of generated test records, and the records in the sink are compared to the test data
+     * according to the configured delivery semantic. There is no requirement on record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting with a lower parallelism")
+    public void testScaleDown(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 4, 2);
+    }
+
+    private void restartFromSavepoint(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic,
+            final int beforeParallelism,
+            final int afterParallelism)
+            throws Exception {
+        // Step 1: Preparation
+        TestingSinkSettings sinkSettings = getTestingSinkSettings(semantic);
+        final StreamExecutionEnvironment execEnv =
+                testEnv.createExecutionEnvironment(
+                        TestEnvironmentSettings.builder()
+                                .setConnectorJarPaths(externalContext.getConnectorJarPaths())
+                                .build());
+        execEnv.setRestartStrategy(RestartStrategies.noRestart());
+
+        // Step 2: Generate test data
+        final List<T> testRecords = generateTestData(sinkSettings, externalContext);
+
+        // Step 3: Build and execute Flink job
+        int numBeforeSuccess = testRecords.size() / 2;
+        DataStreamSource<T> source =
+                execEnv.fromSource(
+                                new ListSource<>(
+                                        Boundedness.CONTINUOUS_UNBOUNDED,
+                                        testRecords,
+                                        numBeforeSuccess),
+                                WatermarkStrategy.noWatermarks(),
+                                "beforeRestartSource")
+                        .setParallelism(1);
+
+        source.returns(externalContext.getProducedType())
+                .sinkTo(tryCreateSink(externalContext, sinkSettings))
+                .name("Sink restart test")
+                .setParallelism(beforeParallelism);
+        CollectResultIterator<T> iterator = addCollectSink(source);
+        final JobClient jobClient = execEnv.executeAsync("Restart Test");
+        iterator.setJobClient(jobClient);
+
+        // Step 4: Wait for the expected result and stop Flink job with a savepoint
+        String savepointDir;
+        try {
+            final MetricQueryer queryRestClient =
+                    new MetricQueryer(new Configuration(), executorService);
+            waitForAllTaskRunning(
+                    () ->
+                            queryRestClient.getJobDetails(
+                                    testEnv.getRestEndpoint(), jobClient.getJobID()),
+                    Deadline.fromNow(Duration.ofSeconds(30)));
+
+            timeoutAssert(
+                    executorService,
+                    () -> {
+                        int count = 0;
+                        while (count < numBeforeSuccess && iterator.hasNext()) {

Review comment:
       What about using a helper function for readability?
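   For example (a sketch only; the helper name `pollNextRecords` is made up here), the body of the
   `timeoutAssert` lambda could be extracted into something like:

   ```java
   /** Drains {@code expectedNum} records from the collect iterator, failing if fewer arrive. */
   private void pollNextRecords(CollectResultIterator<T> iterator, int expectedNum) {
       int count = 0;
       while (count < expectedNum && iterator.hasNext()) {
           iterator.next();
           count++;
       }
       if (count < expectedNum) {
           throw new IllegalStateException(
                   String.format("Failed to get %d records.", expectedNum));
       }
   }
   ```

   so the call site becomes `timeoutAssert(executorService, () -> pollNextRecords(iterator, numBeforeSuccess), 30, TimeUnit.SECONDS);`.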

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/testsuites/SinkTestSuiteBase.java
##########
@@ -0,0 +1,527 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.testsuites;
+
+import org.apache.flink.annotation.Experimental;
+import org.apache.flink.api.common.JobID;
+import org.apache.flink.api.common.JobStatus;
+import org.apache.flink.api.common.eventtime.WatermarkStrategy;
+import org.apache.flink.api.common.restartstrategy.RestartStrategies;
+import org.apache.flink.api.common.time.Deadline;
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.api.connector.sink.Sink;
+import org.apache.flink.api.connector.source.Boundedness;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.connector.testframe.environment.TestEnvironment;
+import org.apache.flink.connector.testframe.environment.TestEnvironmentSettings;
+import org.apache.flink.connector.testframe.external.ExternalSystemDataReader;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkExternalContext;
+import org.apache.flink.connector.testframe.external.sink.TestingSinkSettings;
+import org.apache.flink.connector.testframe.junit.extensions.ConnectorTestingExtension;
+import org.apache.flink.connector.testframe.junit.extensions.TestCaseInvocationContextProvider;
+import org.apache.flink.connector.testframe.source.ListSource;
+import org.apache.flink.connector.testframe.utils.MetricQueryer;
+import org.apache.flink.connector.testframe.utils.TestDataMatchers;
+import org.apache.flink.core.execution.JobClient;
+import org.apache.flink.runtime.metrics.MetricNames;
+import org.apache.flink.streaming.api.CheckpointingMode;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.datastream.DataStreamSource;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.operators.collect.CollectResultIterator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperatorFactory;
+import org.apache.flink.streaming.api.operators.collect.CollectStreamSink;
+import org.apache.flink.util.TestLoggerExtension;
+
+import org.junit.jupiter.api.DisplayName;
+import org.junit.jupiter.api.TestInstance;
+import org.junit.jupiter.api.TestTemplate;
+import org.junit.jupiter.api.extension.ExtendWith;
+import org.opentest4j.TestAbortedException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.time.Duration;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.List;
+import java.util.UUID;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.ThreadLocalRandom;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+import static org.apache.flink.connector.testframe.utils.TestDataMatchers.matchesMultipleSplitTestData;
+import static org.apache.flink.connector.testframe.utils.TestUtils.appendResultData;
+import static org.apache.flink.connector.testframe.utils.TestUtils.doubleEquals;
+import static org.apache.flink.connector.testframe.utils.TestUtils.timeoutAssert;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.terminateJob;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForAllTaskRunning;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForJobStatus;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition;
+
+/**
+ * Base class for sink test suite.
+ *
+ * <p>All cases should have descriptive JavaDoc, including:
+ *
+ * <ul>
+ *   <li>What's the purpose of this case
+ *   <li>Simple description of how this case works
+ *   <li>Condition to fulfill in order to pass this case
+ *   <li>Requirement of running this case
+ * </ul>
+ */
+@ExtendWith({
+    ConnectorTestingExtension.class,
+    TestLoggerExtension.class,
+    TestCaseInvocationContextProvider.class
+})
+@TestInstance(TestInstance.Lifecycle.PER_CLASS)
+@Experimental
+public abstract class SinkTestSuiteBase<T extends Comparable<T>> {
+    private static final Logger LOG = LoggerFactory.getLogger(SinkTestSuiteBase.class);
+    static ExecutorService executorService =
+            Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors() * 2);
+
+    private final long jobExecuteTimeMs = 20000;
+
+    // ----------------------------- Basic test cases ---------------------------------
+
+    /**
+     * Test connector data stream sink.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write it to this sink via a Flink job.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink needs to equal the
+     * number of generated test records, and the records in the sink are compared to the test data
+     * according to the configured delivery semantic. There is no requirement on record order.
+     */
+    @TestTemplate
+    @DisplayName("Test data stream sink")
+    public void testSink(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        TestingSinkSettings sinkSettings = getTestingSinkSettings(semantic);
+        final List<T> testRecords = generateTestData(sinkSettings, externalContext);
+
+        // Build and execute Flink job
+        StreamExecutionEnvironment execEnv =
+                testEnv.createExecutionEnvironment(
+                        TestEnvironmentSettings.builder()
+                                .setConnectorJarPaths(externalContext.getConnectorJarPaths())
+                                .build());
+        execEnv.enableCheckpointing(50);
+        execEnv.fromCollection(testRecords)
+                .name("sourceInSinkTest")
+                .setParallelism(1)
+                .returns(externalContext.getProducedType())
+                .sinkTo(tryCreateSink(externalContext, sinkSettings))
+                .name("sinkInSinkTest");
+        final JobClient jobClient = execEnv.executeAsync("DataStream Sink Test");
+
+        waitForJobStatus(
+                jobClient,
+                Collections.singletonList(JobStatus.FINISHED),
+                Deadline.fromNow(Duration.ofSeconds(30)));
+
+        // Check test result
+        List<T> target = sort(testRecords);
+        checkResult(externalContext.createSinkDataReader(sinkSettings), target, semantic);
+    }
+
+    /**
+     * Test connector sink restart from a completed savepoint with the same parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and first write half of it to this sink via a Flink job with parallelism 2. Then stop the
+     * job with a savepoint and restart the same job from that savepoint. Once the restarted job
+     * is running, write the other half to the sink and compare the result.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink needs to equal the
+     * number of generated test records, and the records in the sink are compared to the test data
+     * according to the configured delivery semantic. There is no requirement on record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting from a savepoint")
+    public void testStartFromSavepoint(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 2, 2);
+    }
+
+    /**
+     * Test connector sink restart from a completed savepoint with a higher parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and first write half of it to this sink via a Flink job with parallelism 2. Then stop the
+     * job with a savepoint and restart the same job from that savepoint with a higher parallelism
+     * of 4. Once the restarted job is running, write the other half to the sink and compare the
+     * result.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink needs to equal the
+     * number of generated test records, and the records in the sink are compared to the test data
+     * according to the configured delivery semantic. There is no requirement on record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting with a higher parallelism")
+    public void testScaleUp(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 2, 4);
+    }
+
+    /**
+     * Test connector sink restart from a completed savepoint with a lower parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and first write half of it to this sink via a Flink job with parallelism 4. Then stop the
+     * job with a savepoint and restart the same job from that savepoint with a lower parallelism
+     * of 2. Once the restarted job is running, write the other half to the sink and compare the
+     * result.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink needs to equal the
+     * number of generated test records, and the records in the sink are compared to the test data
+     * according to the configured delivery semantic. There is no requirement on record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting with a lower parallelism")
+    public void testScaleDown(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 4, 2);
+    }
+
+    private void restartFromSavepoint(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic,
+            final int beforeParallelism,
+            final int afterParallelism)
+            throws Exception {
+        // Step 1: Preparation
+        TestingSinkSettings sinkSettings = getTestingSinkSettings(semantic);
+        final StreamExecutionEnvironment execEnv =
+                testEnv.createExecutionEnvironment(
+                        TestEnvironmentSettings.builder()
+                                .setConnectorJarPaths(externalContext.getConnectorJarPaths())
+                                .build());
+        execEnv.setRestartStrategy(RestartStrategies.noRestart());
+
+        // Step 2: Generate test data
+        final List<T> testRecords = generateTestData(sinkSettings, externalContext);
+
+        // Step 3: Build and execute Flink job
+        int numBeforeSuccess = testRecords.size() / 2;
+        DataStreamSource<T> source =
+                execEnv.fromSource(
+                                new ListSource<>(
+                                        Boundedness.CONTINUOUS_UNBOUNDED,
+                                        testRecords,
+                                        numBeforeSuccess),
+                                WatermarkStrategy.noWatermarks(),
+                                "beforeRestartSource")
+                        .setParallelism(1);
+
+        source.returns(externalContext.getProducedType())
+                .sinkTo(tryCreateSink(externalContext, sinkSettings))
+                .name("Sink restart test")
+                .setParallelism(beforeParallelism);
+        CollectResultIterator<T> iterator = addCollectSink(source);
+        final JobClient jobClient = execEnv.executeAsync("Restart Test");
+        iterator.setJobClient(jobClient);
+
+        // Step 4: Wait for the expected result and stop Flink job with a savepoint
+        String savepointDir;
+        try {
+            final MetricQueryer queryRestClient =
+                    new MetricQueryer(new Configuration(), executorService);
+            waitForAllTaskRunning(
+                    () ->
+                            queryRestClient.getJobDetails(
+                                    testEnv.getRestEndpoint(), jobClient.getJobID()),
+                    Deadline.fromNow(Duration.ofSeconds(30)));
+
+            timeoutAssert(
+                    executorService,
+                    () -> {
+                        int count = 0;
+                        while (count < numBeforeSuccess && iterator.hasNext()) {
+                            iterator.next();
+                            count++;
+                        }
+                        if (count < numBeforeSuccess) {
+                            throw new IllegalStateException(
+                                    String.format("Fail to get %d records.", numBeforeSuccess));
+                        }
+                    },
+                    30,
+                    TimeUnit.SECONDS);
+            savepointDir =
+                    jobClient
+                            .stopWithSavepoint(true, testEnv.getCheckpointUri())
+                            .get(30, TimeUnit.SECONDS);
+            waitForJobStatus(
+                    jobClient,
+                    Collections.singletonList(JobStatus.FINISHED),
+                    Deadline.fromNow(Duration.ofSeconds(30)));
+        } catch (Exception e) {
+            killJob(jobClient);
+            throw e;
+        }
+
+        List<T> target = sort(testRecords.subList(0, numBeforeSuccess));
+        checkResult(externalContext.createSinkDataReader(sinkSettings), target, semantic, false);
+
+        // Step 5: Restart the Flink job from the savepoint
+        final StreamExecutionEnvironment restartEnv =
+                testEnv.createExecutionEnvironment(
+                        TestEnvironmentSettings.builder()
+                                .setConnectorJarPaths(externalContext.getConnectorJarPaths())
+                                .setSavepointRestorePath(savepointDir)
+                                .build());
+        restartEnv.enableCheckpointing(50);
+
+        DataStreamSource<T> restartSource =
+                restartEnv
+                        .fromSource(
+                                new ListSource<>(
+                                        Boundedness.CONTINUOUS_UNBOUNDED,
+                                        testRecords,
+                                        testRecords.size()),
+                                WatermarkStrategy.noWatermarks(),
+                                "restartSource")
+                        .setParallelism(1);
+
+        restartSource
+                .returns(externalContext.getProducedType())
+                .sinkTo(tryCreateSink(externalContext, sinkSettings))
+                .setParallelism(afterParallelism);
+        addCollectSink(restartSource);
+        final JobClient restartJobClient = restartEnv.executeAsync("Restart Test");
+
+        try {
+            // Check the result
+            checkResult(
+                    externalContext.createSinkDataReader(sinkSettings),
+                    sort(testRecords),
+                    semantic);
+        } finally {
+            killJob(restartJobClient);
+            iterator.close();
+        }
+    }
+
+    /**
+     * Test connector sink metrics.
+     *
+     * <p>This test will create a sink in the external system, generate test data and write it to
+     * the sink via a Flink job, then read and compare the metrics.
+     *
+     * <p>Currently tested metric: numRecordsOut.
+     */
+    @TestTemplate
+    @DisplayName("Test sink metrics")
+    public void testMetrics(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        TestingSinkSettings sinkSettings = getTestingSinkSettings(semantic);
+        int parallelism = 2;
+        final List<T> testRecords = generateTestData(sinkSettings, externalContext);
+
+        // make sure to use different names when the test is executed multiple times
+        String sinkName = "metricTestSink" + testRecords.hashCode();
+        final StreamExecutionEnvironment env =
+                testEnv.createExecutionEnvironment(
+                        TestEnvironmentSettings.builder()
+                                .setConnectorJarPaths(externalContext.getConnectorJarPaths())
+                                .build());
+        env.enableCheckpointing(50);
+
+        DataStreamSource<T> source =
+                env.fromSource(
+                                new ListSource<>(
+                                        Boundedness.CONTINUOUS_UNBOUNDED,
+                                        testRecords,
+                                        testRecords.size()),
+                                WatermarkStrategy.noWatermarks(),
+                                "metricTestSource")
+                        .setParallelism(1);
+
+        source.returns(externalContext.getProducedType())
+                .sinkTo(tryCreateSink(externalContext, sinkSettings))
+                .name(sinkName)
+                .setParallelism(parallelism);
+        final JobClient jobClient = env.executeAsync("Metrics Test");
+        final MetricQueryer queryRestClient =
+                new MetricQueryer(new Configuration(), executorService);
+        try {
+            waitForAllTaskRunning(
+                    () ->
+                            queryRestClient.getJobDetails(
+                                    testEnv.getRestEndpoint(), jobClient.getJobID()),
+                    Deadline.fromNow(Duration.ofSeconds(30)));
+
+            waitUntilCondition(
+                    () -> {
+                        // test metrics
+                        try {
+                            return compareSinkMetrics(
+                                    queryRestClient,
+                                    testEnv,
+                                    jobClient.getJobID(),
+                                    sinkName,
+                                    testRecords.size());
+                        } catch (Exception e) {
+                            // ignore this failed attempt and retry until the deadline
+                            return false;
+                        }
+                    },
+                    Deadline.fromNow(Duration.ofMillis(jobExecuteTimeMs)));
+        } finally {
+            // Clean up
+            killJob(jobClient);
+        }
+    }
+
+    // ----------------------------- Helper Functions ---------------------------------
+
+    /**
+     * Generate a set of test records.
+     *
+     * @param testingSinkSettings sink settings
+     * @param externalContext External context
+     * @return Collection of generated test records
+     */
+    protected List<T> generateTestData(
+            TestingSinkSettings testingSinkSettings,
+            DataStreamSinkExternalContext<T> externalContext) {
+        return externalContext.generateTestData(
+                testingSinkSettings, ThreadLocalRandom.current().nextLong());
+    }
+
+    /**
+     * Compare the test data with the result.
+     *
+     * @param reader the data reader for the sink
+     * @param testData the test data
+     * @param semantic the supported semantic, see {@link CheckpointingMode}
+     */
+    private void checkResult(
+            ExternalSystemDataReader<T> reader, List<T> testData, CheckpointingMode semantic)
+            throws Exception {
+        checkResult(reader, testData, semantic, true);
+    }
+
+    /**
+     * Compare the test data with the result.
+     *
+     * @param reader the data reader for the sink
+     * @param testData the test data
+     * @param semantic the supported semantic, see {@link CheckpointingMode}
+     * @param testDataAllInResult whether the result contains all the test data
+     */
+    private void checkResult(
+            ExternalSystemDataReader<T> reader,
+            List<T> testData,
+            CheckpointingMode semantic,
+            boolean testDataAllInResult)
+            throws Exception {
+        final ArrayList<T> result = new ArrayList<>();
+        final TestDataMatchers.MultipleSplitDataMatcher<T> matcher =
+                matchesMultipleSplitTestData(
+                        Arrays.asList(testData), semantic, testDataAllInResult);
+        waitUntilCondition(
+                () -> {
+                    appendResultData(result, reader, testData, 30, semantic);
+                    return matcher.matches(sort(result).iterator());
+                },
+                Deadline.fromNow(Duration.ofMillis(jobExecuteTimeMs)));
+    }
+
+    /** Compare the metrics. */
+    private boolean compareSinkMetrics(
+            MetricQueryer metricQueryer,
+            TestEnvironment testEnv,
+            JobID jobId,
+            String sinkName,
+            long allRecordSize)
+            throws Exception {
+        double sumNumRecordsOut =
+                metricQueryer.getMetricByRestApi(
+                        testEnv.getRestEndpoint(), jobId, sinkName, MetricNames.IO_NUM_RECORDS_OUT);
+        return doubleEquals(allRecordSize, sumNumRecordsOut);
+    }
+
+    /** Sort the list. */
+    private List<T> sort(List<T> list) {
+        return list.stream().sorted().collect(Collectors.toList());
+    }
+
+    private TestingSinkSettings getTestingSinkSettings(CheckpointingMode checkpointingMode) {
+        return TestingSinkSettings.builder().setCheckpointingMode(checkpointingMode).build();
+    }
+
+    private void killJob(JobClient jobClient) throws Exception {
+        terminateJob(jobClient, Duration.ofSeconds(30));
+        waitForJobStatus(
+                jobClient,
+                Collections.singletonList(JobStatus.CANCELED),
+                Deadline.fromNow(Duration.ofSeconds(30)));
+    }
+
+    private Sink<T, ?, ?, ?> tryCreateSink(
+            DataStreamSinkExternalContext<T> context, TestingSinkSettings sinkSettings) {
+        try {
+            return context.createSink(sinkSettings);
+        } catch (UnsupportedOperationException e) {
+            // abort the test
+            throw new TestAbortedException("Not support this test.", e);

Review comment:
       "Cannot create a sink satisfying given options"

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/utils/MetricQueryer.java
##########
@@ -0,0 +1,163 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.utils;
+
+import org.apache.flink.api.common.JobID;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.connector.testframe.environment.TestEnvironment;
+import org.apache.flink.runtime.jobgraph.JobVertexID;
+import org.apache.flink.runtime.rest.RestClient;
+import org.apache.flink.runtime.rest.messages.EmptyRequestBody;
+import org.apache.flink.runtime.rest.messages.JobIDPathParameter;
+import org.apache.flink.runtime.rest.messages.JobMessageParameters;
+import org.apache.flink.runtime.rest.messages.JobVertexIdPathParameter;
+import org.apache.flink.runtime.rest.messages.MessagePathParameter;
+import org.apache.flink.runtime.rest.messages.job.JobDetailsHeaders;
+import org.apache.flink.runtime.rest.messages.job.JobDetailsInfo;
+import org.apache.flink.runtime.rest.messages.job.metrics.AggregatedMetricsResponseBody;
+import org.apache.flink.runtime.rest.messages.job.metrics.AggregatedSubtaskMetricsHeaders;
+import org.apache.flink.runtime.rest.messages.job.metrics.AggregatedSubtaskMetricsParameters;
+import org.apache.flink.runtime.rest.messages.job.metrics.MetricsFilterParameter;
+import org.apache.flink.util.ConfigurationException;
+import org.apache.flink.util.StringUtils;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.Iterator;
+import java.util.concurrent.Executor;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+import static org.assertj.core.api.AssertionsForClassTypes.assertThat;
+
+/** The queryer is used to fetch job metrics via the REST API. */
+public class MetricQueryer {
+    private static final Logger LOG = LoggerFactory.getLogger(MetricQueryer.class);
+    private RestClient restClient;
+
+    public MetricQueryer(Configuration configuration, Executor executor)

Review comment:
       Maybe `MetricQueryer` could construct its own private executor instead of relying on one passed in from outside?
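   One possible shape, assuming the existing two-argument constructor stays in place (the thread
   name is illustrative); the queryer would then also be responsible for shutting this executor
   down when it is no longer needed:

   ```java
   public MetricQueryer(Configuration configuration) throws ConfigurationException {
       // Own a dedicated executor instead of borrowing one from the caller.
       this(
               configuration,
               Executors.newSingleThreadExecutor(
                       runnable -> new Thread(runnable, "metric-queryer-rest-client")));
   }
   ```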




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "909c155557a856976df8b5be1729553873ecbd4b",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30114",
       "triggerID" : "909c155557a856976df8b5be1729553873ecbd4b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30369",
       "triggerID" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30469",
       "triggerID" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30474",
       "triggerID" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31339",
       "triggerID" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31375",
       "triggerID" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31405",
       "triggerID" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "triggerType" : "PUSH"
     }, {
       "hash" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31463",
       "triggerID" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0b52c13271c485e5a6776a1aca81c753d0d4bbc4",
       "status" : "CANCELED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31466",
       "triggerID" : "0b52c13271c485e5a6776a1aca81c753d0d4bbc4",
       "triggerType" : "PUSH"
     }, {
       "hash" : "da588603a577a2b26bcf90fcd38653f7ec8a3a74",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31477",
       "triggerID" : "da588603a577a2b26bcf90fcd38653f7ec8a3a74",
       "triggerType" : "PUSH"
     }, {
       "hash" : "64e30250dc0ad0d011b8d3d2fe2f15ce8e30906e",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "64e30250dc0ad0d011b8d3d2fe2f15ce8e30906e",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 0b52c13271c485e5a6776a1aca81c753d0d4bbc4 Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31466) 
   * da588603a577a2b26bcf90fcd38653f7ec8a3a74 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31477) 
   * 64e30250dc0ad0d011b8d3d2fe2f15ce8e30906e UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "909c155557a856976df8b5be1729553873ecbd4b",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30114",
       "triggerID" : "909c155557a856976df8b5be1729553873ecbd4b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30369",
       "triggerID" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30469",
       "triggerID" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30474",
       "triggerID" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31339",
       "triggerID" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31375",
       "triggerID" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31405",
       "triggerID" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "triggerType" : "PUSH"
     }, {
       "hash" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31463",
       "triggerID" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0b52c13271c485e5a6776a1aca81c753d0d4bbc4",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31466",
       "triggerID" : "0b52c13271c485e5a6776a1aca81c753d0d4bbc4",
       "triggerType" : "PUSH"
     }, {
       "hash" : "da588603a577a2b26bcf90fcd38653f7ec8a3a74",
       "status" : "CANCELED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31477",
       "triggerID" : "da588603a577a2b26bcf90fcd38653f7ec8a3a74",
       "triggerType" : "PUSH"
     }, {
       "hash" : "64e30250dc0ad0d011b8d3d2fe2f15ce8e30906e",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31505",
       "triggerID" : "64e30250dc0ad0d011b8d3d2fe2f15ce8e30906e",
       "triggerType" : "PUSH"
     }, {
       "hash" : "c1619577228a3fde9684f2c85965d6d1f76addbf",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31518",
       "triggerID" : "c1619577228a3fde9684f2c85965d6d1f76addbf",
       "triggerType" : "PUSH"
     }, {
       "hash" : "1076a64c9f916fe9d8a23d38aafbd1f359b038d9",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "1076a64c9f916fe9d8a23d38aafbd1f359b038d9",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * da588603a577a2b26bcf90fcd38653f7ec8a3a74 Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31477) 
   * 64e30250dc0ad0d011b8d3d2fe2f15ce8e30906e Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31505) 
   * c1619577228a3fde9684f2c85965d6d1f76addbf Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31518) 
   * 1076a64c9f916fe9d8a23d38aafbd1f359b038d9 UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "909c155557a856976df8b5be1729553873ecbd4b",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30114",
       "triggerID" : "909c155557a856976df8b5be1729553873ecbd4b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30369",
       "triggerID" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30469",
       "triggerID" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30474",
       "triggerID" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31339",
       "triggerID" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31375",
       "triggerID" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31405",
       "triggerID" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "triggerType" : "PUSH"
     }, {
       "hash" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31463",
       "triggerID" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0b52c13271c485e5a6776a1aca81c753d0d4bbc4",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31466",
       "triggerID" : "0b52c13271c485e5a6776a1aca81c753d0d4bbc4",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * bc9871b19a43fd0b99e1b53336534d59612a119e Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31405) 
   * 35d869286d16c6d306c9059cf5d3af339934c229 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31463) 
   * 0b52c13271c485e5a6776a1aca81c753d0d4bbc4 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31466) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "909c155557a856976df8b5be1729553873ecbd4b",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30114",
       "triggerID" : "909c155557a856976df8b5be1729553873ecbd4b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30369",
       "triggerID" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30469",
       "triggerID" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30474",
       "triggerID" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31339",
       "triggerID" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31375",
       "triggerID" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31405",
       "triggerID" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "triggerType" : "PUSH"
     }, {
       "hash" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31463",
       "triggerID" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0b52c13271c485e5a6776a1aca81c753d0d4bbc4",
       "status" : "CANCELED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31466",
       "triggerID" : "0b52c13271c485e5a6776a1aca81c753d0d4bbc4",
       "triggerType" : "PUSH"
     }, {
       "hash" : "da588603a577a2b26bcf90fcd38653f7ec8a3a74",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31477",
       "triggerID" : "da588603a577a2b26bcf90fcd38653f7ec8a3a74",
       "triggerType" : "PUSH"
     }, {
       "hash" : "64e30250dc0ad0d011b8d3d2fe2f15ce8e30906e",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31505",
       "triggerID" : "64e30250dc0ad0d011b8d3d2fe2f15ce8e30906e",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 0b52c13271c485e5a6776a1aca81c753d0d4bbc4 Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31466) 
   * da588603a577a2b26bcf90fcd38653f7ec8a3a74 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31477) 
   * 64e30250dc0ad0d011b8d3d2fe2f15ce8e30906e Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31505) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "909c155557a856976df8b5be1729553873ecbd4b",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30114",
       "triggerID" : "909c155557a856976df8b5be1729553873ecbd4b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30369",
       "triggerID" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30469",
       "triggerID" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30474",
       "triggerID" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31339",
       "triggerID" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31375",
       "triggerID" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31405",
       "triggerID" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "triggerType" : "PUSH"
     }, {
       "hash" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31463",
       "triggerID" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0b52c13271c485e5a6776a1aca81c753d0d4bbc4",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31466",
       "triggerID" : "0b52c13271c485e5a6776a1aca81c753d0d4bbc4",
       "triggerType" : "PUSH"
     }, {
       "hash" : "da588603a577a2b26bcf90fcd38653f7ec8a3a74",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "da588603a577a2b26bcf90fcd38653f7ec8a3a74",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * bc9871b19a43fd0b99e1b53336534d59612a119e Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31405) 
   * 35d869286d16c6d306c9059cf5d3af339934c229 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31463) 
   * 0b52c13271c485e5a6776a1aca81c753d0d4bbc4 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31466) 
   * da588603a577a2b26bcf90fcd38653f7ec8a3a74 UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] zentol commented on a change in pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
zentol commented on a change in pull request #18496:
URL: https://github.com/apache/flink/pull/18496#discussion_r808067841



##########
File path: flink-test-utils-parent/flink-connector-test-utils/pom.xml
##########
@@ -95,4 +95,30 @@
 			<scope>compile</scope>
 		</dependency>
 	</dependencies>
+
+	<build>
+		<plugins>
+			<plugin>
+				<groupId>org.apache.maven.plugins</groupId>
+				<artifactId>maven-shade-plugin</artifactId>
+				<executions>
+					<execution>
+						<phase>package</phase>
+						<goals>
+							<goal>shade</goal>
+						</goals>
+						<configuration>
+							<shadedArtifactAttached>true</shadedArtifactAttached>
+							<shadedClassifierName>source</shadedClassifierName>
+							<artifactSet>
+								<includes>
+									<include>**/connector/testframe/source/**</include>

Review comment:
       If you are worried about _transitive_ dependencies, you could just exclude them in the other module.
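
       For illustration, such an exclusion in the depending module's `pom.xml` could look roughly like this (the excluded coordinates are placeholders, not actual artifacts from this build):

       ```xml
       <dependency>
       	<groupId>org.apache.flink</groupId>
       	<artifactId>flink-connector-test-utils</artifactId>
       	<version>${project.version}</version>
       	<exclusions>
       		<!-- keep the unwanted transitive dependency out of the consuming module -->
       		<exclusion>
       			<groupId>org.example</groupId>
       			<artifactId>unwanted-transitive-artifact</artifactId>
       		</exclusion>
       	</exclusions>
       </dependency>
       ```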




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "909c155557a856976df8b5be1729553873ecbd4b",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30114",
       "triggerID" : "909c155557a856976df8b5be1729553873ecbd4b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30369",
       "triggerID" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30469",
       "triggerID" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30474",
       "triggerID" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31339",
       "triggerID" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31375",
       "triggerID" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31405",
       "triggerID" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "triggerType" : "PUSH"
     }, {
       "hash" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31463",
       "triggerID" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * bc9871b19a43fd0b99e1b53336534d59612a119e Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31405) 
   * 35d869286d16c6d306c9059cf5d3af339934c229 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31463) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632






-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "909c155557a856976df8b5be1729553873ecbd4b",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30114",
       "triggerID" : "909c155557a856976df8b5be1729553873ecbd4b",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 909c155557a856976df8b5be1729553873ecbd4b Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30114) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "909c155557a856976df8b5be1729553873ecbd4b",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30114",
       "triggerID" : "909c155557a856976df8b5be1729553873ecbd4b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30369",
       "triggerID" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 909c155557a856976df8b5be1729553873ecbd4b Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30114) 
   * e3a0766cb731672fd5be68b79bf380c8577ea068 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30369) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "909c155557a856976df8b5be1729553873ecbd4b",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30114",
       "triggerID" : "909c155557a856976df8b5be1729553873ecbd4b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30369",
       "triggerID" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * e3a0766cb731672fd5be68b79bf380c8577ea068 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30369) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "909c155557a856976df8b5be1729553873ecbd4b",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30114",
       "triggerID" : "909c155557a856976df8b5be1729553873ecbd4b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30369",
       "triggerID" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30469",
       "triggerID" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "status" : "SUCCESS",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30474",
       "triggerID" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * d5e64bbb6debad7940d7ca05729ce57628127225 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30474) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] ruanhang1993 commented on a change in pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
ruanhang1993 commented on a change in pull request #18496:
URL: https://github.com/apache/flink/pull/18496#discussion_r794275237



##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/testsuites/SinkTestSuiteBase.java
##########
@@ -0,0 +1,527 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.testsuites;
+
+import org.apache.flink.annotation.Experimental;
+import org.apache.flink.api.common.JobID;
+import org.apache.flink.api.common.JobStatus;
+import org.apache.flink.api.common.eventtime.WatermarkStrategy;
+import org.apache.flink.api.common.restartstrategy.RestartStrategies;
+import org.apache.flink.api.common.time.Deadline;
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.api.connector.sink.Sink;
+import org.apache.flink.api.connector.source.Boundedness;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.connector.testframe.environment.TestEnvironment;
+import org.apache.flink.connector.testframe.environment.TestEnvironmentSettings;
+import org.apache.flink.connector.testframe.external.ExternalSystemDataReader;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkExternalContext;
+import org.apache.flink.connector.testframe.external.sink.TestingSinkSettings;
+import org.apache.flink.connector.testframe.junit.extensions.ConnectorTestingExtension;
+import org.apache.flink.connector.testframe.junit.extensions.TestCaseInvocationContextProvider;
+import org.apache.flink.connector.testframe.source.ListSource;
+import org.apache.flink.connector.testframe.utils.MetricQueryer;
+import org.apache.flink.connector.testframe.utils.TestDataMatchers;
+import org.apache.flink.core.execution.JobClient;
+import org.apache.flink.runtime.metrics.MetricNames;
+import org.apache.flink.streaming.api.CheckpointingMode;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.datastream.DataStreamSource;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.operators.collect.CollectResultIterator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperatorFactory;
+import org.apache.flink.streaming.api.operators.collect.CollectStreamSink;
+import org.apache.flink.util.TestLoggerExtension;
+
+import org.junit.jupiter.api.DisplayName;
+import org.junit.jupiter.api.TestInstance;
+import org.junit.jupiter.api.TestTemplate;
+import org.junit.jupiter.api.extension.ExtendWith;
+import org.opentest4j.TestAbortedException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.time.Duration;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.List;
+import java.util.UUID;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.ThreadLocalRandom;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+import static org.apache.flink.connector.testframe.utils.TestDataMatchers.matchesMultipleSplitTestData;
+import static org.apache.flink.connector.testframe.utils.TestUtils.appendResultData;
+import static org.apache.flink.connector.testframe.utils.TestUtils.doubleEquals;
+import static org.apache.flink.connector.testframe.utils.TestUtils.timeoutAssert;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.terminateJob;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForAllTaskRunning;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForJobStatus;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition;
+
+/**
+ * Base class for the sink test suite.
+ *
+ * <p>All cases should have descriptive JavaDoc, including:
+ *
+ * <ul>
+ *   <li>The purpose of this case
+ *   <li>A simple description of how this case works
+ *   <li>The condition that must be fulfilled in order to pass this case
+ *   <li>The requirements for running this case
+ * </ul>
+ */
+@ExtendWith({
+    ConnectorTestingExtension.class,
+    TestLoggerExtension.class,
+    TestCaseInvocationContextProvider.class
+})
+@TestInstance(TestInstance.Lifecycle.PER_CLASS)
+@Experimental
+public abstract class SinkTestSuiteBase<T extends Comparable<T>> {
+    private static final Logger LOG = LoggerFactory.getLogger(SinkTestSuiteBase.class);
+    static ExecutorService executorService =
+            Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors() * 2);
+
+    private final long jobExecuteTimeMs = 20000;
+
+    // ----------------------------- Basic test cases ---------------------------------
+
+    /**
+     * Test connector data stream sink.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write it to this sink with a Flink job.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink needs to equal the
+     * number of generated test records, and the records in the sink are compared with the test data
+     * according to the configured delivery semantic. There is no requirement on record order.
+     */
+    @TestTemplate
+    @DisplayName("Test data stream sink")
+    public void testSink(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        TestingSinkSettings sinkSettings = getTestingSinkSettings(semantic);
+        final List<T> testRecords = generateTestData(sinkSettings, externalContext);
+
+        // Build and execute Flink job
+        StreamExecutionEnvironment execEnv =
+                testEnv.createExecutionEnvironment(
+                        TestEnvironmentSettings.builder()
+                                .setConnectorJarPaths(externalContext.getConnectorJarPaths())
+                                .build());
+        execEnv.enableCheckpointing(50);
+        execEnv.fromCollection(testRecords)
+                .name("sourceInSinkTest")
+                .setParallelism(1)
+                .returns(externalContext.getProducedType())
+                .sinkTo(tryCreateSink(externalContext, sinkSettings))
+                .name("sinkInSinkTest");
+        final JobClient jobClient = execEnv.executeAsync("DataStream Sink Test");
+
+        waitForJobStatus(
+                jobClient,
+                Collections.singletonList(JobStatus.FINISHED),
+                Deadline.fromNow(Duration.ofSeconds(30)));
+
+        // Check test result
+        List<T> target = sort(testRecords);
+        checkResult(externalContext.createSinkDataReader(sinkSettings), target, semantic);
+    }
+
+    /**
+     * Test connector sink restart from a completed savepoint with the same parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and first write half of it to this sink with a Flink job running at parallelism 2. It then
+     * stops the job with a savepoint and restarts the same job from that savepoint. Once the job is
+     * running again, it writes the other half to the sink and compares the result.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink needs to equal the
+     * number of generated test records, and the records in the sink are compared with the test data
+     * according to the configured delivery semantic. There is no requirement on record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting from a savepoint")
+    public void testStartFromSavepoint(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 2, 2);
+    }
+
+    /**
+     * Test connector sink restart from a completed savepoint with a higher parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and first write half of it to this sink with a Flink job running at parallelism 2. It then
+     * stops the job with a savepoint and restarts the same job from that savepoint with a higher
+     * parallelism of 4. Once the job is running again, it writes the other half to the sink and
+     * compares the result.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink needs to equal the
+     * number of generated test records, and the records in the sink are compared with the test data
+     * according to the configured delivery semantic. There is no requirement on record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting with a higher parallelism")
+    public void testScaleUp(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 2, 4);
+    }
+
+    /**
+     * Test connector sink restart from a completed savepoint with a lower parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and first write half of it to this sink with a Flink job running at parallelism 4. It then
+     * stops the job with a savepoint and restarts the same job from that savepoint with a lower
+     * parallelism of 2. Once the job is running again, it writes the other half to the sink and
+     * compares the result.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink needs to equal the
+     * number of generated test records, and the records in the sink are compared with the test data
+     * according to the configured delivery semantic. There is no requirement on record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting with a lower parallelism")
+    public void testScaleDown(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 4, 2);
+    }
+
+    private void restartFromSavepoint(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic,
+            final int beforeParallelism,
+            final int afterParallelism)
+            throws Exception {
+        // Step 1: Preparation
+        TestingSinkSettings sinkSettings = getTestingSinkSettings(semantic);
+        final StreamExecutionEnvironment execEnv =
+                testEnv.createExecutionEnvironment(
+                        TestEnvironmentSettings.builder()
+                                .setConnectorJarPaths(externalContext.getConnectorJarPaths())
+                                .build());
+        execEnv.setRestartStrategy(RestartStrategies.noRestart());
+
+        // Step 2: Generate test data
+        final List<T> testRecords = generateTestData(sinkSettings, externalContext);
+
+        // Step 3: Build and execute Flink job
+        int numBeforeSuccess = testRecords.size() / 2;
+        DataStreamSource<T> source =
+                execEnv.fromSource(
+                                new ListSource<>(
+                                        Boundedness.CONTINUOUS_UNBOUNDED,
+                                        testRecords,
+                                        numBeforeSuccess),
+                                WatermarkStrategy.noWatermarks(),
+                                "beforeRestartSource")
+                        .setParallelism(1);
+
+        source.returns(externalContext.getProducedType())
+                .sinkTo(tryCreateSink(externalContext, sinkSettings))
+                .name("Sink restart test")
+                .setParallelism(beforeParallelism);
+        CollectResultIterator<T> iterator = addCollectSink(source);
+        final JobClient jobClient = execEnv.executeAsync("Restart Test");
+        iterator.setJobClient(jobClient);
+
+        // Step 4: Wait for the expected result and stop Flink job with a savepoint
+        String savepointDir;
+        try {
+            final MetricQueryer queryRestClient =

Review comment:
       We need this information to check whether all tasks are running before we stop the job with a savepoint; this client provides that job information to us.
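
       For illustration, the check described here could look roughly like the sketch below once the querier hands back a `JobDetailsInfo`. The accessor names are assumptions for this sketch; the suite itself imports `CommonTestUtils.waitForAllTaskRunning` for the actual wait.

       ```java
       import org.apache.flink.runtime.execution.ExecutionState;
       import org.apache.flink.runtime.rest.messages.job.JobDetailsInfo;

       /** Sketch of an "all subtasks running" check built from the REST job details. */
       public final class AllTasksRunningCheck {

           private AllTasksRunningCheck() {}

           /** Returns true once every vertex reports as many RUNNING subtasks as its parallelism. */
           public static boolean allTasksRunning(JobDetailsInfo details) {
               return details.getJobVertexInfos().stream()
                       .allMatch(
                               vertex ->
                                       vertex.getTasksPerState().getOrDefault(ExecutionState.RUNNING, 0)
                                               == vertex.getParallelism());
           }
       }
       ```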




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "909c155557a856976df8b5be1729553873ecbd4b",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30114",
       "triggerID" : "909c155557a856976df8b5be1729553873ecbd4b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30369",
       "triggerID" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30469",
       "triggerID" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30474",
       "triggerID" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31339",
       "triggerID" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31375",
       "triggerID" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31405",
       "triggerID" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "triggerType" : "PUSH"
     }, {
       "hash" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31463",
       "triggerID" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0b52c13271c485e5a6776a1aca81c753d0d4bbc4",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "0b52c13271c485e5a6776a1aca81c753d0d4bbc4",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * bc9871b19a43fd0b99e1b53336534d59612a119e Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31405) 
   * 35d869286d16c6d306c9059cf5d3af339934c229 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31463) 
   * 0b52c13271c485e5a6776a1aca81c753d0d4bbc4 UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "909c155557a856976df8b5be1729553873ecbd4b",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30114",
       "triggerID" : "909c155557a856976df8b5be1729553873ecbd4b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30369",
       "triggerID" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30469",
       "triggerID" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "status" : "SUCCESS",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30474",
       "triggerID" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31339",
       "triggerID" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * d5e64bbb6debad7940d7ca05729ce57628127225 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30474) 
   * 0034fb25f7fbbbcf302fb18626d7983f32732ca5 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31339) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "909c155557a856976df8b5be1729553873ecbd4b",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30114",
       "triggerID" : "909c155557a856976df8b5be1729553873ecbd4b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30369",
       "triggerID" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30469",
       "triggerID" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30474",
       "triggerID" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31339",
       "triggerID" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 0034fb25f7fbbbcf302fb18626d7983f32732ca5 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31339) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "909c155557a856976df8b5be1729553873ecbd4b",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30114",
       "triggerID" : "909c155557a856976df8b5be1729553873ecbd4b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30369",
       "triggerID" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30469",
       "triggerID" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30474",
       "triggerID" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31339",
       "triggerID" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31375",
       "triggerID" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31405",
       "triggerID" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "triggerType" : "PUSH"
     }, {
       "hash" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "status" : "CANCELED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31463",
       "triggerID" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0b52c13271c485e5a6776a1aca81c753d0d4bbc4",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31466",
       "triggerID" : "0b52c13271c485e5a6776a1aca81c753d0d4bbc4",
       "triggerType" : "PUSH"
     }, {
       "hash" : "da588603a577a2b26bcf90fcd38653f7ec8a3a74",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31477",
       "triggerID" : "da588603a577a2b26bcf90fcd38653f7ec8a3a74",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 35d869286d16c6d306c9059cf5d3af339934c229 Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31463) 
   * 0b52c13271c485e5a6776a1aca81c753d0d4bbc4 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31466) 
   * da588603a577a2b26bcf90fcd38653f7ec8a3a74 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31477) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] zentol commented on a change in pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
zentol commented on a change in pull request #18496:
URL: https://github.com/apache/flink/pull/18496#discussion_r807958309



##########
File path: flink-test-utils-parent/flink-connector-test-utils/pom.xml
##########
@@ -95,4 +95,30 @@
 			<scope>compile</scope>
 		</dependency>
 	</dependencies>
+
+	<build>
+		<plugins>
+			<plugin>
+				<groupId>org.apache.maven.plugins</groupId>
+				<artifactId>maven-shade-plugin</artifactId>
+				<executions>
+					<execution>
+						<phase>package</phase>
+						<goals>
+							<goal>shade</goal>
+						</goals>
+						<configuration>
+							<shadedArtifactAttached>true</shadedArtifactAttached>
+							<shadedClassifierName>source</shadedClassifierName>
+							<artifactSet>
+								<includes>
+									<include>**/connector/testframe/source/**</include>

Review comment:
       @leonardBang @ruanhang1993 What is this supposed to be?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] ruanhang1993 commented on a change in pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
ruanhang1993 commented on a change in pull request #18496:
URL: https://github.com/apache/flink/pull/18496#discussion_r794256568



##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/testsuites/SinkTestSuiteBase.java
##########
@@ -0,0 +1,527 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.testsuites;
+
+import org.apache.flink.annotation.Experimental;
+import org.apache.flink.api.common.JobID;
+import org.apache.flink.api.common.JobStatus;
+import org.apache.flink.api.common.eventtime.WatermarkStrategy;
+import org.apache.flink.api.common.restartstrategy.RestartStrategies;
+import org.apache.flink.api.common.time.Deadline;
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.api.connector.sink.Sink;
+import org.apache.flink.api.connector.source.Boundedness;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.connector.testframe.environment.TestEnvironment;
+import org.apache.flink.connector.testframe.environment.TestEnvironmentSettings;
+import org.apache.flink.connector.testframe.external.ExternalSystemDataReader;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkExternalContext;
+import org.apache.flink.connector.testframe.external.sink.TestingSinkSettings;
+import org.apache.flink.connector.testframe.junit.extensions.ConnectorTestingExtension;
+import org.apache.flink.connector.testframe.junit.extensions.TestCaseInvocationContextProvider;
+import org.apache.flink.connector.testframe.source.ListSource;
+import org.apache.flink.connector.testframe.utils.MetricQueryer;
+import org.apache.flink.connector.testframe.utils.TestDataMatchers;
+import org.apache.flink.core.execution.JobClient;
+import org.apache.flink.runtime.metrics.MetricNames;
+import org.apache.flink.streaming.api.CheckpointingMode;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.datastream.DataStreamSource;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.operators.collect.CollectResultIterator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperatorFactory;
+import org.apache.flink.streaming.api.operators.collect.CollectStreamSink;
+import org.apache.flink.util.TestLoggerExtension;
+
+import org.junit.jupiter.api.DisplayName;
+import org.junit.jupiter.api.TestInstance;
+import org.junit.jupiter.api.TestTemplate;
+import org.junit.jupiter.api.extension.ExtendWith;
+import org.opentest4j.TestAbortedException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.time.Duration;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.List;
+import java.util.UUID;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.ThreadLocalRandom;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+import static org.apache.flink.connector.testframe.utils.TestDataMatchers.matchesMultipleSplitTestData;
+import static org.apache.flink.connector.testframe.utils.TestUtils.appendResultData;
+import static org.apache.flink.connector.testframe.utils.TestUtils.doubleEquals;
+import static org.apache.flink.connector.testframe.utils.TestUtils.timeoutAssert;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.terminateJob;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForAllTaskRunning;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForJobStatus;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition;
+
+/**
+ * Base class for sink test suite.
+ *
+ * <p>All cases should have well-descriptive JavaDoc, including:
+ *
+ * <ul>
+ *   <li>What's the purpose of this case
+ *   <li>Simple description of how this case works
+ *   <li>Condition to fulfill in order to pass this case
+ *   <li>Requirement of running this case
+ * </ul>
+ */
+@ExtendWith({
+    ConnectorTestingExtension.class,
+    TestLoggerExtension.class,
+    TestCaseInvocationContextProvider.class
+})
+@TestInstance(TestInstance.Lifecycle.PER_CLASS)
+@Experimental
+public abstract class SinkTestSuiteBase<T extends Comparable<T>> {

Review comment:
       Before comparing the result with the generated test data, we need to sort both datasets and then compare them. Otherwise there is no way to verify the result when the data in the sink is unordered.
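
       For illustration, a minimal sketch of such an order-insensitive comparison, assuming both the generated test data and the records read back from the sink are available as lists of a comparable type (the helper name and the AssertJ assertion are illustrative, not the suite's actual utilities):

           import java.util.ArrayList;
           import java.util.Collections;
           import java.util.List;
           import static org.assertj.core.api.Assertions.assertThat;

           // Hypothetical helper: sort copies of both sides before comparing,
           // so an unordered sink does not cause spurious test failures.
           private static <T extends Comparable<T>> void assertSameRecordsIgnoringOrder(
                   List<T> expected, List<T> actual) {
               List<T> sortedExpected = new ArrayList<>(expected);
               List<T> sortedActual = new ArrayList<>(actual);
               Collections.sort(sortedExpected);
               Collections.sort(sortedActual);
               assertThat(sortedActual).isEqualTo(sortedExpected);
           }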




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] ruanhang1993 commented on a change in pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
ruanhang1993 commented on a change in pull request #18496:
URL: https://github.com/apache/flink/pull/18496#discussion_r808615714



##########
File path: flink-test-utils-parent/flink-connector-test-utils/pom.xml
##########
@@ -95,4 +95,30 @@
 			<scope>compile</scope>
 		</dependency>
 	</dependencies>
+
+	<build>
+		<plugins>
+			<plugin>
+				<groupId>org.apache.maven.plugins</groupId>
+				<artifactId>maven-shade-plugin</artifactId>
+				<executions>
+					<execution>
+						<phase>package</phase>
+						<goals>
+							<goal>shade</goal>
+						</goals>
+						<configuration>
+							<shadedArtifactAttached>true</shadedArtifactAttached>
+							<shadedClassifierName>source</shadedClassifierName>
+							<artifactSet>
+								<includes>
+									<include>**/connector/testframe/source/**</include>

Review comment:
       This is reasonable. I will raise a PR to remove this.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] zentol commented on a change in pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
zentol commented on a change in pull request #18496:
URL: https://github.com/apache/flink/pull/18496#discussion_r808065325



##########
File path: flink-test-utils-parent/flink-connector-test-utils/pom.xml
##########
@@ -95,4 +95,30 @@
 			<scope>compile</scope>
 		</dependency>
 	</dependencies>
+
+	<build>
+		<plugins>
+			<plugin>
+				<groupId>org.apache.maven.plugins</groupId>
+				<artifactId>maven-shade-plugin</artifactId>
+				<executions>
+					<execution>
+						<phase>package</phase>
+						<goals>
+							<goal>shade</goal>
+						</goals>
+						<configuration>
+							<shadedArtifactAttached>true</shadedArtifactAttached>
+							<shadedClassifierName>source</shadedClassifierName>
+							<artifactSet>
+								<includes>
+									<include>**/connector/testframe/source/**</include>

Review comment:
       The normal jar doesn't contain any dependencies though. Unless a module explicitly says to bundle something, no dependencies are bundled.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "909c155557a856976df8b5be1729553873ecbd4b",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30114",
       "triggerID" : "909c155557a856976df8b5be1729553873ecbd4b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30369",
       "triggerID" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30469",
       "triggerID" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30474",
       "triggerID" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31339",
       "triggerID" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31375",
       "triggerID" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31405",
       "triggerID" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * b8513c81bd9bc1e30efa4ea1fae35d30fd33472c Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31375) 
   * bc9871b19a43fd0b99e1b53336534d59612a119e Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31405) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "909c155557a856976df8b5be1729553873ecbd4b",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30114",
       "triggerID" : "909c155557a856976df8b5be1729553873ecbd4b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30369",
       "triggerID" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30469",
       "triggerID" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30474",
       "triggerID" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31339",
       "triggerID" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31375",
       "triggerID" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31405",
       "triggerID" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "triggerType" : "PUSH"
     }, {
       "hash" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31463",
       "triggerID" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0b52c13271c485e5a6776a1aca81c753d0d4bbc4",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31466",
       "triggerID" : "0b52c13271c485e5a6776a1aca81c753d0d4bbc4",
       "triggerType" : "PUSH"
     }, {
       "hash" : "da588603a577a2b26bcf90fcd38653f7ec8a3a74",
       "status" : "CANCELED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31477",
       "triggerID" : "da588603a577a2b26bcf90fcd38653f7ec8a3a74",
       "triggerType" : "PUSH"
     }, {
       "hash" : "64e30250dc0ad0d011b8d3d2fe2f15ce8e30906e",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31505",
       "triggerID" : "64e30250dc0ad0d011b8d3d2fe2f15ce8e30906e",
       "triggerType" : "PUSH"
     }, {
       "hash" : "c1619577228a3fde9684f2c85965d6d1f76addbf",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31518",
       "triggerID" : "c1619577228a3fde9684f2c85965d6d1f76addbf",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * da588603a577a2b26bcf90fcd38653f7ec8a3a74 Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31477) 
   * 64e30250dc0ad0d011b8d3d2fe2f15ce8e30906e Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31505) 
   * c1619577228a3fde9684f2c85965d6d1f76addbf Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31518) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632






-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "909c155557a856976df8b5be1729553873ecbd4b",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30114",
       "triggerID" : "909c155557a856976df8b5be1729553873ecbd4b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30369",
       "triggerID" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30469",
       "triggerID" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30474",
       "triggerID" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31339",
       "triggerID" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31375",
       "triggerID" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * b8513c81bd9bc1e30efa4ea1fae35d30fd33472c Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31375) 
   * bc9871b19a43fd0b99e1b53336534d59612a119e UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "909c155557a856976df8b5be1729553873ecbd4b",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30114",
       "triggerID" : "909c155557a856976df8b5be1729553873ecbd4b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30369",
       "triggerID" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30469",
       "triggerID" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30474",
       "triggerID" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31339",
       "triggerID" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31375",
       "triggerID" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31405",
       "triggerID" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "triggerType" : "PUSH"
     }, {
       "hash" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31463",
       "triggerID" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0b52c13271c485e5a6776a1aca81c753d0d4bbc4",
       "status" : "CANCELED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31466",
       "triggerID" : "0b52c13271c485e5a6776a1aca81c753d0d4bbc4",
       "triggerType" : "PUSH"
     }, {
       "hash" : "da588603a577a2b26bcf90fcd38653f7ec8a3a74",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31477",
       "triggerID" : "da588603a577a2b26bcf90fcd38653f7ec8a3a74",
       "triggerType" : "PUSH"
     }, {
       "hash" : "64e30250dc0ad0d011b8d3d2fe2f15ce8e30906e",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31505",
       "triggerID" : "64e30250dc0ad0d011b8d3d2fe2f15ce8e30906e",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 0b52c13271c485e5a6776a1aca81c753d0d4bbc4 Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31466) 
   * da588603a577a2b26bcf90fcd38653f7ec8a3a74 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31477) 
   * 64e30250dc0ad0d011b8d3d2fe2f15ce8e30906e Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31505) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "909c155557a856976df8b5be1729553873ecbd4b",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30114",
       "triggerID" : "909c155557a856976df8b5be1729553873ecbd4b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30369",
       "triggerID" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30469",
       "triggerID" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30474",
       "triggerID" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31339",
       "triggerID" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31375",
       "triggerID" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31405",
       "triggerID" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "triggerType" : "PUSH"
     }, {
       "hash" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31463",
       "triggerID" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * bc9871b19a43fd0b99e1b53336534d59612a119e Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31405) 
   * 35d869286d16c6d306c9059cf5d3af339934c229 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31463) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "909c155557a856976df8b5be1729553873ecbd4b",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30114",
       "triggerID" : "909c155557a856976df8b5be1729553873ecbd4b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30369",
       "triggerID" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "status" : "CANCELED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30469",
       "triggerID" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30474",
       "triggerID" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * d12c135ebf7dcc56e9c26695ecc2a2c3f4853176 Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30469) 
   * d5e64bbb6debad7940d7ca05729ce57628127225 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30474) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "909c155557a856976df8b5be1729553873ecbd4b",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30114",
       "triggerID" : "909c155557a856976df8b5be1729553873ecbd4b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30369",
       "triggerID" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30469",
       "triggerID" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * e3a0766cb731672fd5be68b79bf380c8577ea068 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30369) 
   * d12c135ebf7dcc56e9c26695ecc2a2c3f4853176 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30469) 
   * d5e64bbb6debad7940d7ca05729ce57628127225 UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] ruanhang1993 commented on a change in pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
ruanhang1993 commented on a change in pull request #18496:
URL: https://github.com/apache/flink/pull/18496#discussion_r794287082



##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/utils/MetricQueryer.java
##########
@@ -0,0 +1,163 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.utils;
+
+import org.apache.flink.api.common.JobID;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.connector.testframe.environment.TestEnvironment;
+import org.apache.flink.runtime.jobgraph.JobVertexID;
+import org.apache.flink.runtime.rest.RestClient;
+import org.apache.flink.runtime.rest.messages.EmptyRequestBody;
+import org.apache.flink.runtime.rest.messages.JobIDPathParameter;
+import org.apache.flink.runtime.rest.messages.JobMessageParameters;
+import org.apache.flink.runtime.rest.messages.JobVertexIdPathParameter;
+import org.apache.flink.runtime.rest.messages.MessagePathParameter;
+import org.apache.flink.runtime.rest.messages.job.JobDetailsHeaders;
+import org.apache.flink.runtime.rest.messages.job.JobDetailsInfo;
+import org.apache.flink.runtime.rest.messages.job.metrics.AggregatedMetricsResponseBody;
+import org.apache.flink.runtime.rest.messages.job.metrics.AggregatedSubtaskMetricsHeaders;
+import org.apache.flink.runtime.rest.messages.job.metrics.AggregatedSubtaskMetricsParameters;
+import org.apache.flink.runtime.rest.messages.job.metrics.MetricsFilterParameter;
+import org.apache.flink.util.ConfigurationException;
+import org.apache.flink.util.StringUtils;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.Iterator;
+import java.util.concurrent.Executor;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+import static org.assertj.core.api.AssertionsForClassTypes.assertThat;
+
+/** The queryer is used to get job metrics by rest API. */
+public class MetricQueryer {
+    private static final Logger LOG = LoggerFactory.getLogger(MetricQueryer.class);
+    private RestClient restClient;
+
+    public MetricQueryer(Configuration configuration, Executor executor)
+            throws ConfigurationException {
+        restClient = new RestClient(configuration, executor);
+    }
+
+    public JobDetailsInfo getJobDetails(TestEnvironment.Endpoint endpoint, JobID jobId)

Review comment:
       This code is used to retrieve the job vertex information.
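
       As a rough usage sketch (assuming the getJobDetails method shown above and Flink's JobDetailsInfo REST message; the surrounding variable names are placeholders):

           // Hypothetical usage: resolve the job's vertex IDs so that per-vertex
           // metrics (e.g. numRecordsOut of the sink operator) can be queried.
           JobDetailsInfo jobDetails = metricQueryer.getJobDetails(endpoint, jobClient.getJobID());
           List<JobVertexID> vertexIds =
                   jobDetails.getJobVertexInfos().stream()
                           .map(JobDetailsInfo.JobVertexDetailsInfo::getJobVertexID)
                           .collect(Collectors.toList());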




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "909c155557a856976df8b5be1729553873ecbd4b",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30114",
       "triggerID" : "909c155557a856976df8b5be1729553873ecbd4b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30369",
       "triggerID" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30469",
       "triggerID" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30474",
       "triggerID" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31339",
       "triggerID" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31375",
       "triggerID" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31405",
       "triggerID" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "triggerType" : "PUSH"
     }, {
       "hash" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31463",
       "triggerID" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0b52c13271c485e5a6776a1aca81c753d0d4bbc4",
       "status" : "CANCELED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31466",
       "triggerID" : "0b52c13271c485e5a6776a1aca81c753d0d4bbc4",
       "triggerType" : "PUSH"
     }, {
       "hash" : "da588603a577a2b26bcf90fcd38653f7ec8a3a74",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31477",
       "triggerID" : "da588603a577a2b26bcf90fcd38653f7ec8a3a74",
       "triggerType" : "PUSH"
     }, {
       "hash" : "64e30250dc0ad0d011b8d3d2fe2f15ce8e30906e",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31505",
       "triggerID" : "64e30250dc0ad0d011b8d3d2fe2f15ce8e30906e",
       "triggerType" : "PUSH"
     }, {
       "hash" : "c1619577228a3fde9684f2c85965d6d1f76addbf",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "c1619577228a3fde9684f2c85965d6d1f76addbf",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 0b52c13271c485e5a6776a1aca81c753d0d4bbc4 Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31466) 
   * da588603a577a2b26bcf90fcd38653f7ec8a3a74 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31477) 
   * 64e30250dc0ad0d011b8d3d2fe2f15ce8e30906e Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31505) 
   * c1619577228a3fde9684f2c85965d6d1f76addbf UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] ruanhang1993 commented on a change in pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
ruanhang1993 commented on a change in pull request #18496:
URL: https://github.com/apache/flink/pull/18496#discussion_r806806442



##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/source/FromLimitedElementsSourceReader.java
##########
@@ -0,0 +1,96 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.source;
+
+import org.apache.flink.api.connector.source.Boundedness;
+import org.apache.flink.api.connector.source.ReaderOutput;
+import org.apache.flink.api.connector.source.SourceReader;
+import org.apache.flink.api.connector.source.SourceReaderContext;
+import org.apache.flink.connector.testframe.source.split.FromElementsSplit;
+import org.apache.flink.core.io.InputStatus;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.Arrays;
+import java.util.List;
+
+import static org.apache.flink.core.io.InputStatus.MORE_AVAILABLE;
+
+/**
+ * A {@link SourceReader} implementation that reads data from a list. This source reader will stop
+ * reading at the given position and wait until a checkpoint or savepoint is triggered.
+ *
+ * <p>This source reader is used when {@link FromElementsSource} creates readers with a fixed
+ * position.
+ */
+public class FromLimitedElementsSourceReader<T> extends FromElementsSourceReader<T> {

Review comment:
       fixed

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/testsuites/SinkTestSuiteBase.java
##########
@@ -0,0 +1,629 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.testsuites;
+
+import org.apache.flink.annotation.Experimental;
+import org.apache.flink.api.common.JobID;
+import org.apache.flink.api.common.JobStatus;
+import org.apache.flink.api.common.eventtime.WatermarkStrategy;
+import org.apache.flink.api.common.restartstrategy.RestartStrategies;
+import org.apache.flink.api.common.time.Deadline;
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.api.connector.sink2.Sink;
+import org.apache.flink.api.connector.source.Boundedness;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.connector.testframe.environment.TestEnvironment;
+import org.apache.flink.connector.testframe.environment.TestEnvironmentSettings;
+import org.apache.flink.connector.testframe.external.ExternalSystemDataReader;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkExternalContext;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkV1ExternalContext;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkV2ExternalContext;
+import org.apache.flink.connector.testframe.external.sink.TestingSinkSettings;
+import org.apache.flink.connector.testframe.junit.extensions.ConnectorTestingExtension;
+import org.apache.flink.connector.testframe.junit.extensions.TestCaseInvocationContextProvider;
+import org.apache.flink.connector.testframe.source.FromElementsSource;
+import org.apache.flink.connector.testframe.utils.CollectIteratorAssertions;
+import org.apache.flink.connector.testframe.utils.MetricQuerier;
+import org.apache.flink.core.execution.JobClient;
+import org.apache.flink.core.execution.SavepointFormatType;
+import org.apache.flink.runtime.metrics.MetricNames;
+import org.apache.flink.streaming.api.CheckpointingMode;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.datastream.DataStreamSink;
+import org.apache.flink.streaming.api.datastream.DataStreamSource;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.operators.collect.CollectResultIterator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperatorFactory;
+import org.apache.flink.streaming.api.operators.collect.CollectStreamSink;
+import org.apache.flink.util.TestLoggerExtension;
+
+import org.apache.commons.math3.util.Precision;
+import org.junit.jupiter.api.DisplayName;
+import org.junit.jupiter.api.TestInstance;
+import org.junit.jupiter.api.TestTemplate;
+import org.junit.jupiter.api.extension.ExtendWith;
+import org.opentest4j.TestAbortedException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.time.Duration;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+import java.util.UUID;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.ThreadLocalRandom;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+import static org.apache.flink.connector.testframe.utils.ConnectorTestConstants.DEFAULT_COLLECT_DATA_TIMEOUT;
+import static org.apache.flink.connector.testframe.utils.ConnectorTestConstants.DEFAULT_JOB_STATUS_CHANGE_TIMEOUT;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.terminateJob;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForAllTaskRunning;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForJobStatus;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition;
+import static org.apache.flink.streaming.api.CheckpointingMode.AT_LEAST_ONCE;
+import static org.apache.flink.streaming.api.CheckpointingMode.EXACTLY_ONCE;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+import static org.assertj.core.api.AssertionsForClassTypes.assertThat;
+
+/**
+ * Base class for sink test suite.
+ *
+ * <p>All cases should have well-descriptive JavaDoc, including:
+ *
+ * <ul>
+ *   <li>What's the purpose of this case
+ *   <li>Simple description of how this case works
+ *   <li>Condition to fulfill in order to pass this case
+ *   <li>Requirement of running this case
+ * </ul>
+ */
+@ExtendWith({
+    ConnectorTestingExtension.class,
+    TestLoggerExtension.class,
+    TestCaseInvocationContextProvider.class
+})
+@TestInstance(TestInstance.Lifecycle.PER_CLASS)
+@Experimental
+public abstract class SinkTestSuiteBase<T extends Comparable<T>> {
+    private static final Logger LOG = LoggerFactory.getLogger(SinkTestSuiteBase.class);
+
+    // ----------------------------- Basic test cases ---------------------------------
+
+    /**
+     * Test connector data stream sink.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write them to this sink with a Flink job.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink needs to equal the
+     * number of generated test records, and the records in the sink are compared with the test data
+     * according to the configured delivery semantic. There is no requirement on record order.
+     */
+    @TestTemplate
+    @DisplayName("Test data stream sink")
+    public void testBasicSink(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        TestingSinkSettings sinkSettings = getTestingSinkSettings(semantic);
+        final List<T> testRecords = generateTestData(sinkSettings, externalContext);
+
+        // Build and execute Flink job
+        StreamExecutionEnvironment execEnv =
+                testEnv.createExecutionEnvironment(
+                        TestEnvironmentSettings.builder()
+                                .setConnectorJarPaths(externalContext.getConnectorJarPaths())
+                                .build());
+        execEnv.enableCheckpointing(50);
+        DataStream<T> dataStream =
+                execEnv.fromCollection(testRecords)
+                        .name("sourceInSinkTest")
+                        .setParallelism(1)
+                        .returns(externalContext.getProducedType());
+        tryCreateSink(dataStream, externalContext, sinkSettings)
+                .setParallelism(1)
+                .name("sinkInSinkTest");
+        final JobClient jobClient = execEnv.executeAsync("DataStream Sink Test");
+
+        waitForJobStatus(
+                jobClient,
+                Collections.singletonList(JobStatus.FINISHED),
+                Deadline.fromNow(DEFAULT_JOB_STATUS_CHANGE_TIMEOUT));
+
+        // Check test result
+        checkResultWithSemantic(
+                externalContext.createSinkDataReader(sinkSettings), testRecords, semantic);
+    }
+
+    /**
+     * Test connector sink restart from a completed savepoint with the same parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write half of them to this sink with a Flink job of parallelism 2 at first. Then stop the
+     * job with a savepoint and restart the same job from that savepoint. After the restarted job is
+     * running, write the other half to the sink and compare the result.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink needs to equal the
+     * number of generated test records, and the records in the sink are compared with the test data
+     * according to the configured delivery semantic. There is no requirement on record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting from a savepoint")
+    public void testStartFromSavepoint(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 2, 2);
+    }
+
+    /**
+     * Test connector sink restart from a completed savepoint with a higher parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write half of them to this sink with a Flink job of parallelism 2 at first. Then stop the
+     * job with a savepoint and restart the same job from that savepoint with a higher parallelism of
+     * 4. After the restarted job is running, write the other half to the sink and compare the result.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink needs to equal the
+     * number of generated test records, and the records in the sink are compared with the test data
+     * according to the configured delivery semantic. There is no requirement on record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting with a higher parallelism")
+    public void testScaleUp(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 2, 4);
+    }
+
+    /**
+     * Test connector sink restart from a completed savepoint with a lower parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write half of them to this sink with a Flink job of parallelism 4 at first. Then stop the
+     * job with a savepoint and restart the same job from that savepoint with a lower parallelism of
+     * 2. After the restarted job is running, write the other half to the sink and compare the result.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink needs to equal the
+     * number of generated test records, and the records in the sink are compared with the test data
+     * according to the configured delivery semantic. There is no requirement on record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting with a lower parallelism")
+    public void testScaleDown(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 4, 2);
+    }
+
+    private void restartFromSavepoint(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic,
+            final int beforeParallelism,
+            final int afterParallelism)
+            throws Exception {
+        // Step 1: Preparation
+        TestingSinkSettings sinkSettings = getTestingSinkSettings(semantic);
+        final StreamExecutionEnvironment execEnv =
+                testEnv.createExecutionEnvironment(
+                        TestEnvironmentSettings.builder()
+                                .setConnectorJarPaths(externalContext.getConnectorJarPaths())
+                                .build());
+        execEnv.setRestartStrategy(RestartStrategies.noRestart());
+
+        // Step 2: Generate test data
+        final List<T> testRecords = generateTestData(sinkSettings, externalContext);
+
+        // Step 3: Build and execute Flink job
+        int numBeforeSuccess = testRecords.size() / 2;
+        DataStreamSource<T> source =
+                execEnv.fromSource(
+                                new FromElementsSource<>(
+                                        Boundedness.CONTINUOUS_UNBOUNDED,
+                                        testRecords,
+                                        numBeforeSuccess),
+                                WatermarkStrategy.noWatermarks(),
+                                "beforeRestartSource")
+                        .setParallelism(1);
+
+        DataStream<T> dataStream = source.returns(externalContext.getProducedType());
+        tryCreateSink(dataStream, externalContext, sinkSettings)
+                .name("Sink restart test")
+                .setParallelism(beforeParallelism);
+
+        /**
+         * The job should stop after consuming a specified number of records. In order to know when
+         * the specified number of records have been consumed, a collect sink needs to be watched.
+         */
+        CollectResultIterator<T> iterator = addCollectSink(source);
+        final JobClient jobClient = execEnv.executeAsync("Restart Test");
+        iterator.setJobClient(jobClient);
+
+        // Step 4: Wait for the expected result and stop Flink job with a savepoint
+        String savepointDir;
+        try {
+            final MetricQuerier queryRestClient = new MetricQuerier(new Configuration());

Review comment:
       fixed




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] PatrickRen commented on a change in pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
PatrickRen commented on a change in pull request #18496:
URL: https://github.com/apache/flink/pull/18496#discussion_r806652616



##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/source/FromElementsSource.java
##########
@@ -0,0 +1,104 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.source;
+
+import org.apache.flink.api.connector.source.Boundedness;
+import org.apache.flink.api.connector.source.Source;
+import org.apache.flink.api.connector.source.SourceReader;
+import org.apache.flink.api.connector.source.SourceReaderContext;
+import org.apache.flink.api.connector.source.SplitEnumerator;
+import org.apache.flink.api.connector.source.SplitEnumeratorContext;
+import org.apache.flink.connector.testframe.source.enumerator.NoOpEnumState;
+import org.apache.flink.connector.testframe.source.enumerator.NoOpEnumStateSerializer;
+import org.apache.flink.connector.testframe.source.enumerator.NoOpEnumerator;
+import org.apache.flink.connector.testframe.source.split.FromElementsSplit;
+import org.apache.flink.connector.testframe.source.split.FromElementsSplitSerializer;
+import org.apache.flink.core.io.SimpleVersionedSerializer;
+import org.apache.flink.util.Preconditions;
+
+import java.util.List;
+
+/**
+ * A {@link Source} implementation that reads data from a list and stops reading at a fixed
+ * position. The source then waits until a checkpoint or savepoint is triggered, which makes it
+ * useful for connector tests.
+ *
+ * <p>Note: The parallelism of this source must be 1.
+ */
+public class FromElementsSource<OUT> implements Source<OUT, FromElementsSplit, NoOpEnumState> {

Review comment:
       This source could be moved to `flink-streaming-java`, under the same module where `FromElementsFunction` exists. By moving it we can also get rid of the `flink-connector-testing.jar` created by the pom of `flink-end-to-end-tests-common-kafka`.

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/testsuites/SinkTestSuiteBase.java
##########
@@ -0,0 +1,629 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.testsuites;
+
+import org.apache.flink.annotation.Experimental;
+import org.apache.flink.api.common.JobID;
+import org.apache.flink.api.common.JobStatus;
+import org.apache.flink.api.common.eventtime.WatermarkStrategy;
+import org.apache.flink.api.common.restartstrategy.RestartStrategies;
+import org.apache.flink.api.common.time.Deadline;
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.api.connector.sink2.Sink;
+import org.apache.flink.api.connector.source.Boundedness;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.connector.testframe.environment.TestEnvironment;
+import org.apache.flink.connector.testframe.environment.TestEnvironmentSettings;
+import org.apache.flink.connector.testframe.external.ExternalSystemDataReader;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkExternalContext;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkV1ExternalContext;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkV2ExternalContext;
+import org.apache.flink.connector.testframe.external.sink.TestingSinkSettings;
+import org.apache.flink.connector.testframe.junit.extensions.ConnectorTestingExtension;
+import org.apache.flink.connector.testframe.junit.extensions.TestCaseInvocationContextProvider;
+import org.apache.flink.connector.testframe.source.FromElementsSource;
+import org.apache.flink.connector.testframe.utils.CollectIteratorAssertions;
+import org.apache.flink.connector.testframe.utils.MetricQuerier;
+import org.apache.flink.core.execution.JobClient;
+import org.apache.flink.core.execution.SavepointFormatType;
+import org.apache.flink.runtime.metrics.MetricNames;
+import org.apache.flink.streaming.api.CheckpointingMode;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.datastream.DataStreamSink;
+import org.apache.flink.streaming.api.datastream.DataStreamSource;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.operators.collect.CollectResultIterator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperatorFactory;
+import org.apache.flink.streaming.api.operators.collect.CollectStreamSink;
+import org.apache.flink.util.TestLoggerExtension;
+
+import org.apache.commons.math3.util.Precision;
+import org.junit.jupiter.api.DisplayName;
+import org.junit.jupiter.api.TestInstance;
+import org.junit.jupiter.api.TestTemplate;
+import org.junit.jupiter.api.extension.ExtendWith;
+import org.opentest4j.TestAbortedException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.time.Duration;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+import java.util.UUID;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.ThreadLocalRandom;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+import static org.apache.flink.connector.testframe.utils.ConnectorTestConstants.DEFAULT_COLLECT_DATA_TIMEOUT;
+import static org.apache.flink.connector.testframe.utils.ConnectorTestConstants.DEFAULT_JOB_STATUS_CHANGE_TIMEOUT;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.terminateJob;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForAllTaskRunning;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForJobStatus;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition;
+import static org.apache.flink.streaming.api.CheckpointingMode.AT_LEAST_ONCE;
+import static org.apache.flink.streaming.api.CheckpointingMode.EXACTLY_ONCE;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+import static org.assertj.core.api.AssertionsForClassTypes.assertThat;
+
+/**
+ * Base class for sink test suite.
+ *
+ * <p>All cases should have well-descriptive JavaDoc, including:
+ *
+ * <ul>
+ *   <li>What's the purpose of this case
+ *   <li>Simple description of how this case works
+ *   <li>Condition to fulfill in order to pass this case
+ *   <li>Requirement of running this case
+ * </ul>
+ */
+@ExtendWith({
+    ConnectorTestingExtension.class,
+    TestLoggerExtension.class,
+    TestCaseInvocationContextProvider.class
+})
+@TestInstance(TestInstance.Lifecycle.PER_CLASS)
+@Experimental
+public abstract class SinkTestSuiteBase<T extends Comparable<T>> {
+    private static final Logger LOG = LoggerFactory.getLogger(SinkTestSuiteBase.class);
+
+    // ----------------------------- Basic test cases ---------------------------------
+
+    /**
+     * Test connector data stream sink.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write them to this sink with a Flink job.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink needs to equal the
+     * number of generated test records, and the records in the sink are compared with the test data
+     * according to the configured delivery semantic. There is no requirement on record order.
+     */
+    @TestTemplate
+    @DisplayName("Test data stream sink")
+    public void testBasicSink(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        TestingSinkSettings sinkSettings = getTestingSinkSettings(semantic);
+        final List<T> testRecords = generateTestData(sinkSettings, externalContext);
+
+        // Build and execute Flink job
+        StreamExecutionEnvironment execEnv =
+                testEnv.createExecutionEnvironment(
+                        TestEnvironmentSettings.builder()
+                                .setConnectorJarPaths(externalContext.getConnectorJarPaths())
+                                .build());
+        execEnv.enableCheckpointing(50);
+        DataStream<T> dataStream =
+                execEnv.fromCollection(testRecords)
+                        .name("sourceInSinkTest")
+                        .setParallelism(1)
+                        .returns(externalContext.getProducedType());
+        tryCreateSink(dataStream, externalContext, sinkSettings)
+                .setParallelism(1)
+                .name("sinkInSinkTest");
+        final JobClient jobClient = execEnv.executeAsync("DataStream Sink Test");
+
+        waitForJobStatus(
+                jobClient,
+                Collections.singletonList(JobStatus.FINISHED),
+                Deadline.fromNow(DEFAULT_JOB_STATUS_CHANGE_TIMEOUT));
+
+        // Check test result
+        checkResultWithSemantic(
+                externalContext.createSinkDataReader(sinkSettings), testRecords, semantic);
+    }
+
+    /**
+     * Test connector sink restart from a completed savepoint with the same parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write half of them to this sink with a Flink job of parallelism 2 at first. Then stop the
+     * job with a savepoint and restart the same job from that savepoint. After the restarted job is
+     * running, write the other half to the sink and compare the result.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink needs to equal the
+     * number of generated test records, and the records in the sink are compared with the test data
+     * according to the configured delivery semantic. There is no requirement on record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting from a savepoint")
+    public void testStartFromSavepoint(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 2, 2);
+    }
+
+    /**
+     * Test connector sink restart from a completed savepoint with a higher parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write half of them to this sink with a Flink job of parallelism 2 at first. Then stop the
+     * job with a savepoint and restart the same job from that savepoint with a higher parallelism of
+     * 4. After the restarted job is running, write the other half to the sink and compare the result.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink needs to equal the
+     * number of generated test records, and the records in the sink are compared with the test data
+     * according to the configured delivery semantic. There is no requirement on record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting with a higher parallelism")
+    public void testScaleUp(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 2, 4);
+    }
+
+    /**
+     * Test connector sink restart from a completed savepoint with a lower parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write half of them to this sink with a Flink job of parallelism 4 at first. Then stop the
+     * job with a savepoint and restart the same job from that savepoint with a lower parallelism of
+     * 2. After the restarted job is running, write the other half to the sink and compare the result.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink needs to equal the
+     * number of generated test records, and the records in the sink are compared with the test data
+     * according to the configured delivery semantic. There is no requirement on record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting with a lower parallelism")
+    public void testScaleDown(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 4, 2);
+    }
+
+    private void restartFromSavepoint(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic,
+            final int beforeParallelism,
+            final int afterParallelism)
+            throws Exception {
+        // Step 1: Preparation
+        TestingSinkSettings sinkSettings = getTestingSinkSettings(semantic);
+        final StreamExecutionEnvironment execEnv =
+                testEnv.createExecutionEnvironment(
+                        TestEnvironmentSettings.builder()
+                                .setConnectorJarPaths(externalContext.getConnectorJarPaths())
+                                .build());
+        execEnv.setRestartStrategy(RestartStrategies.noRestart());
+
+        // Step 2: Generate test data
+        final List<T> testRecords = generateTestData(sinkSettings, externalContext);
+
+        // Step 3: Build and execute Flink job
+        int numBeforeSuccess = testRecords.size() / 2;
+        DataStreamSource<T> source =
+                execEnv.fromSource(
+                                new FromElementsSource<>(
+                                        Boundedness.CONTINUOUS_UNBOUNDED,
+                                        testRecords,
+                                        numBeforeSuccess),
+                                WatermarkStrategy.noWatermarks(),
+                                "beforeRestartSource")
+                        .setParallelism(1);
+
+        DataStream<T> dataStream = source.returns(externalContext.getProducedType());
+        tryCreateSink(dataStream, externalContext, sinkSettings)
+                .name("Sink restart test")
+                .setParallelism(beforeParallelism);
+
+        /**
+         * The job should stop after consuming a specified number of records. In order to know when
+         * the specified number of records have been consumed, a collect sink needs to be watched.
+         */
+        CollectResultIterator<T> iterator = addCollectSink(source);
+        final JobClient jobClient = execEnv.executeAsync("Restart Test");
+        iterator.setJobClient(jobClient);
+
+        // Step 4: Wait for the expected result and stop Flink job with a savepoint
+        String savepointDir;
+        try {
+            final MetricQuerier queryRestClient = new MetricQuerier(new Configuration());

Review comment:
       I still think it's weird to use MetricQuerier to get job details. Maybe a better way is to create a new RestClient here for getting job details instead of reusing MetricQuerier. MetricQuerier should only be responsible for fetching metrics from the cluster.
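       A minimal sketch of what that could look like, assuming the test environment can expose the cluster REST endpoint through a Flink Configuration; the helper class name and the "test-cluster" id below are placeholders for illustration, not part of the PR:

       ```java
       import org.apache.flink.api.common.JobID;
       import org.apache.flink.client.program.rest.RestClusterClient;
       import org.apache.flink.configuration.Configuration;
       import org.apache.flink.runtime.rest.messages.job.JobDetailsInfo;

       /** Hypothetical helper dedicated to job details, leaving MetricQuerier to metrics only. */
       class JobDetailsQuerier implements AutoCloseable {
           private final RestClusterClient<String> restClient;

           JobDetailsQuerier(Configuration clusterConfiguration) throws Exception {
               // The cluster id is not used for plain REST queries; any placeholder works here.
               this.restClient = new RestClusterClient<>(clusterConfiguration, "test-cluster");
           }

           JobDetailsInfo getJobDetails(JobID jobId) throws Exception {
               // Issues the /jobs/:jobid REST call and waits for the response.
               return restClient.getJobDetails(jobId).get();
           }

           @Override
           public void close() throws Exception {
               restClient.close();
           }
       }
       ```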

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/source/FromLimitedElementsSourceReader.java
##########
@@ -0,0 +1,96 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.source;
+
+import org.apache.flink.api.connector.source.Boundedness;
+import org.apache.flink.api.connector.source.ReaderOutput;
+import org.apache.flink.api.connector.source.SourceReader;
+import org.apache.flink.api.connector.source.SourceReaderContext;
+import org.apache.flink.connector.testframe.source.split.FromElementsSplit;
+import org.apache.flink.core.io.InputStatus;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.Arrays;
+import java.util.List;
+
+import static org.apache.flink.core.io.InputStatus.MORE_AVAILABLE;
+
+/**
+ * A {@link SourceReader} implementation that reads data from a list. This source reader will stop
+ * reading at the given position and wait until a checkpoint or savepoint is triggered.
+ *
+ * <p>This source reader is used when {@link FromElementsSource} creates readers with a fixed
+ * position.
+ */
+public class FromLimitedElementsSourceReader<T> extends FromElementsSourceReader<T> {

Review comment:
       This class could be merged into `FromElementsSourceReader` by taking `limitedNum` as an extra parameter.
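       For illustration only, a rough sketch of the merged shape with an optional limit (names and structure simplified; the real reader also has to handle splits, checkpointing and the availability future):

       ```java
       import java.util.List;

       /** Simplified stand-in for the merged reader: a null limit means "emit every element". */
       class LimitableElementsIterator<T> {
           private final List<T> elements;
           private final Integer limitedNum; // nullable: null means no limit
           private int emittedNum;

           LimitableElementsIterator(List<T> elements, Integer limitedNum) {
               this.elements = elements;
               this.limitedNum = limitedNum;
           }

           /** Returns the next element, or null once the limit or the end of the list is reached. */
           T next() {
               boolean limitReached = limitedNum != null && emittedNum >= limitedNum;
               if (limitReached || emittedNum >= elements.size()) {
                   return null;
               }
               return elements.get(emittedNum++);
           }
       }
       ```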




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "909c155557a856976df8b5be1729553873ecbd4b",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30114",
       "triggerID" : "909c155557a856976df8b5be1729553873ecbd4b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30369",
       "triggerID" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30469",
       "triggerID" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30474",
       "triggerID" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31339",
       "triggerID" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31375",
       "triggerID" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31405",
       "triggerID" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "triggerType" : "PUSH"
     }, {
       "hash" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31463",
       "triggerID" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0b52c13271c485e5a6776a1aca81c753d0d4bbc4",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31466",
       "triggerID" : "0b52c13271c485e5a6776a1aca81c753d0d4bbc4",
       "triggerType" : "PUSH"
     }, {
       "hash" : "da588603a577a2b26bcf90fcd38653f7ec8a3a74",
       "status" : "CANCELED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31477",
       "triggerID" : "da588603a577a2b26bcf90fcd38653f7ec8a3a74",
       "triggerType" : "PUSH"
     }, {
       "hash" : "64e30250dc0ad0d011b8d3d2fe2f15ce8e30906e",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31505",
       "triggerID" : "64e30250dc0ad0d011b8d3d2fe2f15ce8e30906e",
       "triggerType" : "PUSH"
     }, {
       "hash" : "c1619577228a3fde9684f2c85965d6d1f76addbf",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31518",
       "triggerID" : "c1619577228a3fde9684f2c85965d6d1f76addbf",
       "triggerType" : "PUSH"
     }, {
       "hash" : "1076a64c9f916fe9d8a23d38aafbd1f359b038d9",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31529",
       "triggerID" : "1076a64c9f916fe9d8a23d38aafbd1f359b038d9",
       "triggerType" : "PUSH"
     }, {
       "hash" : "ebca9a1e955205c53ea919b863c9550642bc73db",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31536",
       "triggerID" : "ebca9a1e955205c53ea919b863c9550642bc73db",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * da588603a577a2b26bcf90fcd38653f7ec8a3a74 Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31477) 
   * 64e30250dc0ad0d011b8d3d2fe2f15ce8e30906e Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31505) 
   * c1619577228a3fde9684f2c85965d6d1f76addbf Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31518) 
   * 1076a64c9f916fe9d8a23d38aafbd1f359b038d9 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31529) 
   * ebca9a1e955205c53ea919b863c9550642bc73db Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31536) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "909c155557a856976df8b5be1729553873ecbd4b",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30114",
       "triggerID" : "909c155557a856976df8b5be1729553873ecbd4b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30369",
       "triggerID" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30469",
       "triggerID" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30474",
       "triggerID" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31339",
       "triggerID" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 0034fb25f7fbbbcf302fb18626d7983f32732ca5 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31339) 
   * b8513c81bd9bc1e30efa4ea1fae35d30fd33472c UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "909c155557a856976df8b5be1729553873ecbd4b",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30114",
       "triggerID" : "909c155557a856976df8b5be1729553873ecbd4b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30369",
       "triggerID" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30469",
       "triggerID" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30474",
       "triggerID" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31339",
       "triggerID" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31375",
       "triggerID" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31405",
       "triggerID" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "triggerType" : "PUSH"
     }, {
       "hash" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31463",
       "triggerID" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0b52c13271c485e5a6776a1aca81c753d0d4bbc4",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "0b52c13271c485e5a6776a1aca81c753d0d4bbc4",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * bc9871b19a43fd0b99e1b53336534d59612a119e Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31405) 
   * 35d869286d16c6d306c9059cf5d3af339934c229 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31463) 
   * 0b52c13271c485e5a6776a1aca81c753d0d4bbc4 UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] leonardBang commented on a change in pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
leonardBang commented on a change in pull request #18496:
URL: https://github.com/apache/flink/pull/18496#discussion_r805352388



##########
File path: flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/connector/kafka/sink/testutils/KafkaDataReader.java
##########
@@ -0,0 +1,67 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.kafka.sink.testutils;
+
+import org.apache.flink.connector.testframe.external.ExternalSystemDataReader;
+
+import org.apache.kafka.clients.consumer.ConsumerRecord;
+import org.apache.kafka.clients.consumer.ConsumerRecords;
+import org.apache.kafka.clients.consumer.KafkaConsumer;
+import org.apache.kafka.common.TopicPartition;
+import org.apache.kafka.common.errors.WakeupException;
+
+import java.time.Duration;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.Iterator;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Properties;
+
+/** Kafka dataStream data reader. */
+public class KafkaDataReader implements ExternalSystemDataReader<String> {
+    private final KafkaConsumer<String, String> consumer;
+
+    public KafkaDataReader(Properties properties, Collection<TopicPartition> partitions) {
+        this.consumer = new KafkaConsumer<>(properties);
+        consumer.assign(partitions);
+        consumer.seekToBeginning(partitions);
+    }
+
+    @Override
+    public List<String> poll(Duration timeout) {
+        List<String> result = new LinkedList<>();
+        ConsumerRecords<String, String> consumerRecords;
+        try {
+            consumerRecords = consumer.poll(timeout);
+        } catch (WakeupException we) {
+            return Collections.emptyList();
+        }
+        Iterator<ConsumerRecord<String, String>> iterator = consumerRecords.iterator();
+        while (iterator.hasNext()) {
+            result.add(iterator.next().value());
+        }
+        return result;
+    }
+
+    @Override
+    public void close() throws Exception {
+        consumer.close();

Review comment:
       hint: check for null before releasing the resource
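       e.g. something along these lines (only a sketch; `consumer` is the field of the quoted class and may already be non-null by construction, in which case the guard is purely defensive):

       ```java
       @Override
       public void close() throws Exception {
           // Guard against a partially constructed reader before releasing the resource.
           if (consumer != null) {
               consumer.close();
           }
       }
       ```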

##########
File path: flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/connector/kafka/sink/testutils/KafkaSinkExternalContext.java
##########
@@ -0,0 +1,271 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.kafka.sink.testutils;
+
+import org.apache.flink.api.common.serialization.SimpleStringSchema;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.connector.sink2.Sink;
+import org.apache.flink.connector.base.DeliveryGuarantee;
+import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
+import org.apache.flink.connector.kafka.sink.KafkaSink;
+import org.apache.flink.connector.kafka.sink.KafkaSinkBuilder;
+import org.apache.flink.connector.testframe.external.ExternalSystemDataReader;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkV2ExternalContext;
+import org.apache.flink.connector.testframe.external.sink.TestingSinkSettings;
+import org.apache.flink.streaming.api.CheckpointingMode;
+
+import org.apache.commons.lang3.exception.ExceptionUtils;
+import org.apache.kafka.clients.admin.AdminClient;
+import org.apache.kafka.clients.admin.AdminClientConfig;
+import org.apache.kafka.clients.admin.NewTopic;
+import org.apache.kafka.clients.admin.TopicDescription;
+import org.apache.kafka.clients.consumer.ConsumerConfig;
+import org.apache.kafka.clients.producer.ProducerConfig;
+import org.apache.kafka.common.TopicPartition;
+import org.apache.kafka.common.TopicPartitionInfo;
+import org.apache.kafka.common.errors.UnknownTopicOrPartitionException;
+import org.apache.kafka.common.serialization.StringDeserializer;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Properties;
+import java.util.Random;
+import java.util.Set;
+import java.util.concurrent.ThreadLocalRandom;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.flink.streaming.api.CheckpointingMode.EXACTLY_ONCE;
+
+/**
+ * A Kafka external context that will create only one topic and use partitions in that topic as
+ * source splits.
+ */
+public class KafkaSinkExternalContext implements DataStreamSinkV2ExternalContext<String> {
+
+    private static final Logger LOG = LoggerFactory.getLogger(KafkaSinkExternalContext.class);
+
+    private static final String TOPIC_NAME_PREFIX = "kafka-single-topic";
+    private static final int DEFAULT_TIMEOUT = 30;

Review comment:
       ```suggestion
       private static final long DEFAULT_TIMEOUT = 30L;
   ```

##########
File path: flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/connector/kafka/sink/testutils/KafkaSinkExternalContext.java
##########
@@ -0,0 +1,271 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.kafka.sink.testutils;
+
+import org.apache.flink.api.common.serialization.SimpleStringSchema;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.connector.sink2.Sink;
+import org.apache.flink.connector.base.DeliveryGuarantee;
+import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
+import org.apache.flink.connector.kafka.sink.KafkaSink;
+import org.apache.flink.connector.kafka.sink.KafkaSinkBuilder;
+import org.apache.flink.connector.testframe.external.ExternalSystemDataReader;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkV2ExternalContext;
+import org.apache.flink.connector.testframe.external.sink.TestingSinkSettings;
+import org.apache.flink.streaming.api.CheckpointingMode;
+
+import org.apache.commons.lang3.exception.ExceptionUtils;
+import org.apache.kafka.clients.admin.AdminClient;
+import org.apache.kafka.clients.admin.AdminClientConfig;
+import org.apache.kafka.clients.admin.NewTopic;
+import org.apache.kafka.clients.admin.TopicDescription;
+import org.apache.kafka.clients.consumer.ConsumerConfig;
+import org.apache.kafka.clients.producer.ProducerConfig;
+import org.apache.kafka.common.TopicPartition;
+import org.apache.kafka.common.TopicPartitionInfo;
+import org.apache.kafka.common.errors.UnknownTopicOrPartitionException;
+import org.apache.kafka.common.serialization.StringDeserializer;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Properties;
+import java.util.Random;
+import java.util.Set;
+import java.util.concurrent.ThreadLocalRandom;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.flink.streaming.api.CheckpointingMode.EXACTLY_ONCE;
+
+/**
+ * A Kafka external context that will create only one topic and use partitions in that topic as
+ * source splits.
+ */
+public class KafkaSinkExternalContext implements DataStreamSinkV2ExternalContext<String> {
+
+    private static final Logger LOG = LoggerFactory.getLogger(KafkaSinkExternalContext.class);
+
+    private static final String TOPIC_NAME_PREFIX = "kafka-single-topic";
+    private static final int DEFAULT_TIMEOUT = 30;
+    private static final int NUM_RECORDS_UPPER_BOUND = 500;
+    private static final int NUM_RECORDS_LOWER_BOUND = 100;
+
+    protected String bootstrapServers;
+    protected final String topicName;
+
+    private final List<ExternalSystemDataReader<String>> readers = new ArrayList<>();
+
+    protected int numSplits = 0;
+
+    private List<URL> connectorJarPaths;
+
+    protected final AdminClient kafkaAdminClient;
+
+    public KafkaSinkExternalContext(String bootstrapServers, List<URL> connectorJarPaths) {
+        this.bootstrapServers = bootstrapServers;
+        this.connectorJarPaths = connectorJarPaths;
+        this.topicName =
+                TOPIC_NAME_PREFIX + "-" + ThreadLocalRandom.current().nextLong(Long.MAX_VALUE);
+        kafkaAdminClient = createAdminClient();
+    }
+
+    protected void createTopic(String topicName, int numPartitions, short replicationFactor) {
+        LOG.debug(
+                "Creating new Kafka topic {} with {} partitions and {} replicas",
+                topicName,
+                numPartitions,
+                replicationFactor);
+        NewTopic newTopic = new NewTopic(topicName, numPartitions, replicationFactor);
+        try {
+            kafkaAdminClient
+                    .createTopics(Collections.singletonList(newTopic))
+                    .all()
+                    .get(DEFAULT_TIMEOUT, TimeUnit.SECONDS);
+        } catch (Exception e) {
+            throw new RuntimeException(String.format("Cannot create topic '%s'", topicName), e);
+        }
+    }
+
+    protected void deleteTopic(String topicName) {
+        LOG.debug("Deleting Kafka topic {}", topicName);
+        try {
+            kafkaAdminClient
+                    .deleteTopics(Collections.singletonList(topicName))
+                    .all()
+                    .get(DEFAULT_TIMEOUT, TimeUnit.SECONDS);
+        } catch (Exception e) {
+            if (ExceptionUtils.getRootCause(e) instanceof UnknownTopicOrPartitionException) {
+                throw new RuntimeException(String.format("Cannot delete topic '%s'", topicName), e);
+            }
+        }
+    }
+
+    private AdminClient createAdminClient() {
+        Properties config = new Properties();

Review comment:
       ```suggestion
           final Properties config = new Properties();
   ```

##########
File path: flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/connector/kafka/sink/testutils/KafkaSinkExternalContext.java
##########
@@ -0,0 +1,271 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.kafka.sink.testutils;
+
+import org.apache.flink.api.common.serialization.SimpleStringSchema;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.connector.sink2.Sink;
+import org.apache.flink.connector.base.DeliveryGuarantee;
+import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
+import org.apache.flink.connector.kafka.sink.KafkaSink;
+import org.apache.flink.connector.kafka.sink.KafkaSinkBuilder;
+import org.apache.flink.connector.testframe.external.ExternalSystemDataReader;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkV2ExternalContext;
+import org.apache.flink.connector.testframe.external.sink.TestingSinkSettings;
+import org.apache.flink.streaming.api.CheckpointingMode;
+
+import org.apache.commons.lang3.exception.ExceptionUtils;
+import org.apache.kafka.clients.admin.AdminClient;
+import org.apache.kafka.clients.admin.AdminClientConfig;
+import org.apache.kafka.clients.admin.NewTopic;
+import org.apache.kafka.clients.admin.TopicDescription;
+import org.apache.kafka.clients.consumer.ConsumerConfig;
+import org.apache.kafka.clients.producer.ProducerConfig;
+import org.apache.kafka.common.TopicPartition;
+import org.apache.kafka.common.TopicPartitionInfo;
+import org.apache.kafka.common.errors.UnknownTopicOrPartitionException;
+import org.apache.kafka.common.serialization.StringDeserializer;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Properties;
+import java.util.Random;
+import java.util.Set;
+import java.util.concurrent.ThreadLocalRandom;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.flink.streaming.api.CheckpointingMode.EXACTLY_ONCE;
+
+/**
+ * A Kafka external context that will create only one topic and use partitions in that topic as
+ * source splits.
+ */
+public class KafkaSinkExternalContext implements DataStreamSinkV2ExternalContext<String> {
+
+    private static final Logger LOG = LoggerFactory.getLogger(KafkaSinkExternalContext.class);
+
+    private static final String TOPIC_NAME_PREFIX = "kafka-single-topic";
+    private static final int DEFAULT_TIMEOUT = 30;
+    private static final int NUM_RECORDS_UPPER_BOUND = 500;
+    private static final int NUM_RECORDS_LOWER_BOUND = 100;
+
+    protected String bootstrapServers;
+    protected final String topicName;
+
+    private final List<ExternalSystemDataReader<String>> readers = new ArrayList<>();
+
+    protected int numSplits = 0;
+
+    private List<URL> connectorJarPaths;
+
+    protected final AdminClient kafkaAdminClient;
+
+    public KafkaSinkExternalContext(String bootstrapServers, List<URL> connectorJarPaths) {
+        this.bootstrapServers = bootstrapServers;
+        this.connectorJarPaths = connectorJarPaths;
+        this.topicName =
+                TOPIC_NAME_PREFIX + "-" + ThreadLocalRandom.current().nextLong(Long.MAX_VALUE);
+        kafkaAdminClient = createAdminClient();
+    }
+
+    protected void createTopic(String topicName, int numPartitions, short replicationFactor) {
+        LOG.debug(
+                "Creating new Kafka topic {} with {} partitions and {} replicas",
+                topicName,
+                numPartitions,
+                replicationFactor);
+        NewTopic newTopic = new NewTopic(topicName, numPartitions, replicationFactor);
+        try {
+            kafkaAdminClient
+                    .createTopics(Collections.singletonList(newTopic))
+                    .all()
+                    .get(DEFAULT_TIMEOUT, TimeUnit.SECONDS);
+        } catch (Exception e) {
+            throw new RuntimeException(String.format("Cannot create topic '%s'", topicName), e);
+        }
+    }
+
+    protected void deleteTopic(String topicName) {
+        LOG.debug("Deleting Kafka topic {}", topicName);
+        try {
+            kafkaAdminClient
+                    .deleteTopics(Collections.singletonList(topicName))
+                    .all()
+                    .get(DEFAULT_TIMEOUT, TimeUnit.SECONDS);
+        } catch (Exception e) {
+            if (ExceptionUtils.getRootCause(e) instanceof UnknownTopicOrPartitionException) {
+                throw new RuntimeException(String.format("Cannot delete topic '%s'", topicName), e);
+            }
+        }
+    }
+
+    private AdminClient createAdminClient() {
+        Properties config = new Properties();
+        config.setProperty(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
+        return AdminClient.create(config);
+    }
+
+    @Override
+    public Sink<String> createSink(TestingSinkSettings sinkSettings) {
+        if (!topicExists(topicName)) {
+            createTopic(topicName, 4, (short) 1);
+        }
+
+        KafkaSinkBuilder<String> builder = KafkaSink.builder();
+        Properties properties = new Properties();
+        properties.put(ProducerConfig.TRANSACTION_TIMEOUT_CONFIG, 900000);

Review comment:
       ```suggestion
           final Properties properties = new Properties();
           properties.put(ProducerConfig.TRANSACTION_TIMEOUT_CONFIG, DEFAULT_TRANSACTION_TIMEOUT_IN_MS);
   ```
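
       Note: the suggestion above refers to a `DEFAULT_TRANSACTION_TIMEOUT_IN_MS` constant that does not exist in the diff yet. A minimal sketch of how it could be declared, assuming it sits next to the other constants in `KafkaSinkExternalContext` and keeps the 900000 ms value from the original line:

   ```java
   // Hypothetical constant; name and placement follow the suggestion above,
   // the value mirrors the hard-coded 900000 ms transaction timeout in the diff.
   private static final int DEFAULT_TRANSACTION_TIMEOUT_IN_MS = 900_000;
   ```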

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/source/FromElementsSource.java
##########
@@ -0,0 +1,100 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.source;
+
+import org.apache.flink.api.connector.source.Boundedness;
+import org.apache.flink.api.connector.source.Source;
+import org.apache.flink.api.connector.source.SourceReader;
+import org.apache.flink.api.connector.source.SourceReaderContext;
+import org.apache.flink.api.connector.source.SplitEnumerator;
+import org.apache.flink.api.connector.source.SplitEnumeratorContext;
+import org.apache.flink.connector.testframe.source.enumerator.NoOpEnumState;
+import org.apache.flink.connector.testframe.source.enumerator.NoOpEnumStateSerializer;
+import org.apache.flink.connector.testframe.source.enumerator.NoOpEnumerator;
+import org.apache.flink.connector.testframe.source.split.FromElementsSplit;
+import org.apache.flink.connector.testframe.source.split.FromElementsSplitSerializer;
+import org.apache.flink.core.io.SimpleVersionedSerializer;
+
+import java.util.List;
+
+/**
+ * The source reads data from a list and stops reading at the fixed position. The source will wait
+ * until the checkpoint or savepoint triggers.
+ *
+ * <p>Note that this source must be of parallelism 1.
+ */
+public class FromElementsSource<OUT> implements Source<OUT, FromElementsSplit, NoOpEnumState> {
+    // Boundedness
+    private Boundedness boundedness;
+
+    private List<OUT> elements;
+
+    private Integer successNum;
+
+    public FromElementsSource(List<OUT> elements) {
+        this.elements = elements;
+    }
+
+    public FromElementsSource(Boundedness boundedness, List<OUT> elements, Integer successNum) {
+        this(elements);
+        if (successNum > elements.size()) {
+            throw new RuntimeException("SuccessNum must be larger than elements' size.");

Review comment:
       ```suggestion
    Preconditions.checkArgument(successNum <= elements.size(), String.format("The successNum must not be larger than the number of elements (%d), but the actual successNum is %d", elements.size(), successNum));
   ```
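
       Note that Flink's `org.apache.flink.util.Preconditions` exposes `checkArgument(boolean, Object)` rather than a `check` method, so `FromElementsSource` would also need the corresponding import. A minimal sketch of the guard under that assumption:

   ```java
   import org.apache.flink.util.Preconditions;

   // Fails fast with an IllegalArgumentException when successNum exceeds the element count.
   Preconditions.checkArgument(
           successNum <= elements.size(),
           String.format(
                   "The successNum must not be larger than the number of elements (%d), but was %d",
                   elements.size(), successNum));
   ```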

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/source/FromElementsSource.java
##########
@@ -0,0 +1,100 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.source;
+
+import org.apache.flink.api.connector.source.Boundedness;
+import org.apache.flink.api.connector.source.Source;
+import org.apache.flink.api.connector.source.SourceReader;
+import org.apache.flink.api.connector.source.SourceReaderContext;
+import org.apache.flink.api.connector.source.SplitEnumerator;
+import org.apache.flink.api.connector.source.SplitEnumeratorContext;
+import org.apache.flink.connector.testframe.source.enumerator.NoOpEnumState;
+import org.apache.flink.connector.testframe.source.enumerator.NoOpEnumStateSerializer;
+import org.apache.flink.connector.testframe.source.enumerator.NoOpEnumerator;
+import org.apache.flink.connector.testframe.source.split.FromElementsSplit;
+import org.apache.flink.connector.testframe.source.split.FromElementsSplitSerializer;
+import org.apache.flink.core.io.SimpleVersionedSerializer;
+
+import java.util.List;
+
+/**
+ * The source reads data from a list and stops reading at the fixed position. The source will wait
+ * until the checkpoint or savepoint triggers.

Review comment:
       ```suggestion
     * A {@link Source} implementation that reads data from a list and stops reading at a fixed position.
     * The source waits until a checkpoint or savepoint is triggered; it is useful for connector tests.
   ```

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/source/split/FromElementsSplit.java
##########
@@ -0,0 +1,45 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.source.split;
+
+import org.apache.flink.api.connector.source.SourceSplit;
+
+/** The split of the list source. */

Review comment:
       ```suggestion
   /** The split of {@link FromElementsSource}. */
   ```

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/testsuites/SinkTestSuiteBase.java
##########
@@ -0,0 +1,629 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.testsuites;
+
+import org.apache.flink.annotation.Experimental;
+import org.apache.flink.api.common.JobID;
+import org.apache.flink.api.common.JobStatus;
+import org.apache.flink.api.common.eventtime.WatermarkStrategy;
+import org.apache.flink.api.common.restartstrategy.RestartStrategies;
+import org.apache.flink.api.common.time.Deadline;
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.api.connector.sink2.Sink;
+import org.apache.flink.api.connector.source.Boundedness;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.connector.testframe.environment.TestEnvironment;
+import org.apache.flink.connector.testframe.environment.TestEnvironmentSettings;
+import org.apache.flink.connector.testframe.external.ExternalSystemDataReader;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkExternalContext;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkV1ExternalContext;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkV2ExternalContext;
+import org.apache.flink.connector.testframe.external.sink.TestingSinkSettings;
+import org.apache.flink.connector.testframe.junit.extensions.ConnectorTestingExtension;
+import org.apache.flink.connector.testframe.junit.extensions.TestCaseInvocationContextProvider;
+import org.apache.flink.connector.testframe.source.FromElementsSource;
+import org.apache.flink.connector.testframe.utils.CollectIteratorAssertions;
+import org.apache.flink.connector.testframe.utils.MetricQuerier;
+import org.apache.flink.core.execution.JobClient;
+import org.apache.flink.core.execution.SavepointFormatType;
+import org.apache.flink.runtime.metrics.MetricNames;
+import org.apache.flink.streaming.api.CheckpointingMode;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.datastream.DataStreamSink;
+import org.apache.flink.streaming.api.datastream.DataStreamSource;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.operators.collect.CollectResultIterator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperatorFactory;
+import org.apache.flink.streaming.api.operators.collect.CollectStreamSink;
+import org.apache.flink.util.TestLoggerExtension;
+
+import org.apache.commons.math3.util.Precision;
+import org.junit.jupiter.api.DisplayName;
+import org.junit.jupiter.api.TestInstance;
+import org.junit.jupiter.api.TestTemplate;
+import org.junit.jupiter.api.extension.ExtendWith;
+import org.opentest4j.TestAbortedException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.time.Duration;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+import java.util.UUID;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.ThreadLocalRandom;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+import static org.apache.flink.connector.testframe.utils.ConnectorTestConstants.DEFAULT_COLLECT_DATA_TIMEOUT;
+import static org.apache.flink.connector.testframe.utils.ConnectorTestConstants.DEFAULT_JOB_STATUS_CHANGE_TIMEOUT;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.terminateJob;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForAllTaskRunning;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForJobStatus;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition;
+import static org.apache.flink.streaming.api.CheckpointingMode.AT_LEAST_ONCE;
+import static org.apache.flink.streaming.api.CheckpointingMode.EXACTLY_ONCE;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+import static org.assertj.core.api.AssertionsForClassTypes.assertThat;
+
+/**
+ * Base class for sink test suite.
+ *
+ * <p>All cases should have well-descriptive JavaDoc, including:
+ *
+ * <ul>
+ *   <li>What's the purpose of this case
+ *   <li>Simple description of how this case works
+ *   <li>Condition to fulfill in order to pass this case
+ *   <li>Requirement of running this case
+ * </ul>
+ */
+@ExtendWith({
+    ConnectorTestingExtension.class,
+    TestLoggerExtension.class,
+    TestCaseInvocationContextProvider.class
+})
+@TestInstance(TestInstance.Lifecycle.PER_CLASS)
+@Experimental
+public abstract class SinkTestSuiteBase<T extends Comparable<T>> {
+    private static final Logger LOG = LoggerFactory.getLogger(SinkTestSuiteBase.class);
+
+    // ----------------------------- Basic test cases ---------------------------------
+
+    /**
+     * Test connector data stream sink.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write them to this sink by the Flink Job.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink needs to equal the
+     * number of generated test records, and the records in the sink are compared to the test data
+     * under the configured semantic. There is no requirement on record order.
+     */
+    @TestTemplate
+    @DisplayName("Test data stream sink")
+    public void testBasicSink(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        TestingSinkSettings sinkSettings = getTestingSinkSettings(semantic);
+        final List<T> testRecords = generateTestData(sinkSettings, externalContext);
+
+        // Build and execute Flink job
+        StreamExecutionEnvironment execEnv =
+                testEnv.createExecutionEnvironment(
+                        TestEnvironmentSettings.builder()
+                                .setConnectorJarPaths(externalContext.getConnectorJarPaths())
+                                .build());
+        execEnv.enableCheckpointing(50);
+        DataStream<T> dataStream =
+                execEnv.fromCollection(testRecords)
+                        .name("sourceInSinkTest")
+                        .setParallelism(1)
+                        .returns(externalContext.getProducedType());
+        tryCreateSink(dataStream, externalContext, sinkSettings)
+                .setParallelism(1)
+                .name("sinkInSinkTest");
+        final JobClient jobClient = execEnv.executeAsync("DataStream Sink Test");
+
+        waitForJobStatus(
+                jobClient,
+                Collections.singletonList(JobStatus.FINISHED),
+                Deadline.fromNow(DEFAULT_JOB_STATUS_CHANGE_TIMEOUT));
+
+        // Check test result
+        checkResultWithSemantic(
+                externalContext.createSinkDataReader(sinkSettings), testRecords, semantic);
+    }
+
+    /**
+     * Test connector sink restart from a completed savepoint with the same parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write the first half of them to this sink via a Flink job with parallelism 2. Then it
+     * stops the job with a savepoint and restarts the same job from that completed savepoint. After
+     * the restarted job is running, the other half is written to the sink and the result is compared.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink needs to equal the
+     * number of generated test records, and the records in the sink are compared to the test data
+     * under the configured semantic. There is no requirement on record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting from a savepoint")
+    public void testStartFromSavepoint(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 2, 2);
+    }
+
+    /**
+     * Test connector sink restart from a completed savepoint with a higher parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write the first half of them to this sink via a Flink job with parallelism 2. Then it
+     * stops the job with a savepoint and restarts the same job from that savepoint with a higher
+     * parallelism of 4. After the restarted job is running, the other half is written to the sink
+     * and the result is compared.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink needs to equal the
+     * number of generated test records, and the records in the sink are compared to the test data
+     * under the configured semantic. There is no requirement on record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting with a higher parallelism")
+    public void testScaleUp(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 2, 4);
+    }
+
+    /**
+     * Test connector sink restart from a completed savepoint with a lower parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write the first half of them to this sink via a Flink job with parallelism 4. Then it
+     * stops the job with a savepoint and restarts the same job from that savepoint with a lower
+     * parallelism of 2. After the restarted job is running, the other half is written to the sink
+     * and the result is compared.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink needs to equal the
+     * number of generated test records, and the records in the sink are compared to the test data
+     * under the configured semantic. There is no requirement on record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting with a lower parallelism")
+    public void testScaleDown(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 4, 2);
+    }
+
+    private void restartFromSavepoint(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic,
+            final int beforeParallelism,
+            final int afterParallelism)
+            throws Exception {
+        // Step 1: Preparation
+        TestingSinkSettings sinkSettings = getTestingSinkSettings(semantic);
+        final StreamExecutionEnvironment execEnv =
+                testEnv.createExecutionEnvironment(
+                        TestEnvironmentSettings.builder()
+                                .setConnectorJarPaths(externalContext.getConnectorJarPaths())
+                                .build());
+        execEnv.setRestartStrategy(RestartStrategies.noRestart());
+
+        // Step 2: Generate test data
+        final List<T> testRecords = generateTestData(sinkSettings, externalContext);
+
+        // Step 3: Build and execute Flink job
+        int numBeforeSuccess = testRecords.size() / 2;
+        DataStreamSource<T> source =
+                execEnv.fromSource(
+                                new FromElementsSource<>(
+                                        Boundedness.CONTINUOUS_UNBOUNDED,
+                                        testRecords,
+                                        numBeforeSuccess),
+                                WatermarkStrategy.noWatermarks(),
+                                "beforeRestartSource")
+                        .setParallelism(1);
+
+        DataStream<T> dataStream = source.returns(externalContext.getProducedType());
+        tryCreateSink(dataStream, externalContext, sinkSettings)
+                .name("Sink restart test")
+                .setParallelism(beforeParallelism);
+
+        /*
+         * The job should stop after consuming a specified number of records. In order to know when
+         * the specified number of records have been consumed, a collect sink needs to be watched.
+         */
+        CollectResultIterator<T> iterator = addCollectSink(source);
+        final JobClient jobClient = execEnv.executeAsync("Restart Test");
+        iterator.setJobClient(jobClient);
+
+        // Step 4: Wait for the expected result and stop Flink job with a savepoint
+        String savepointDir;
+        try {
+            final MetricQuerier queryRestClient = new MetricQuerier(new Configuration());
+            waitForAllTaskRunning(
+                    () ->
+                            queryRestClient.getJobDetails(
+                                    testEnv.getRestEndpoint(), jobClient.getJobID()),
+                    Deadline.fromNow(DEFAULT_JOB_STATUS_CHANGE_TIMEOUT));
+
+            waitExpectedSizeData(iterator, numBeforeSuccess);
+
+            savepointDir =
+                    jobClient
+                            .stopWithSavepoint(
+                                    true, testEnv.getCheckpointUri(), SavepointFormatType.CANONICAL)
+                            .get(30, TimeUnit.SECONDS);
+            waitForJobStatus(
+                    jobClient,
+                    Collections.singletonList(JobStatus.FINISHED),
+                    Deadline.fromNow(DEFAULT_JOB_STATUS_CHANGE_TIMEOUT));
+        } catch (Exception e) {
+            killJob(jobClient);
+            throw e;
+        }
+
+        List<T> target = testRecords.subList(0, numBeforeSuccess);
+        checkResultWithSemantic(
+                externalContext.createSinkDataReader(sinkSettings), target, semantic);
+
+        // Step 5: Restart the Flink job with the savepoint
+        final StreamExecutionEnvironment restartEnv =
+                testEnv.createExecutionEnvironment(
+                        TestEnvironmentSettings.builder()
+                                .setConnectorJarPaths(externalContext.getConnectorJarPaths())
+                                .setSavepointRestorePath(savepointDir)
+                                .build());
+        restartEnv.enableCheckpointing(50);
+
+        DataStreamSource<T> restartSource =
+                restartEnv
+                        .fromSource(
+                                new FromElementsSource<>(
+                                        Boundedness.CONTINUOUS_UNBOUNDED,
+                                        testRecords,
+                                        testRecords.size()),
+                                WatermarkStrategy.noWatermarks(),
+                                "restartSource")
+                        .setParallelism(1);
+
+        DataStream<T> sinkStream = restartSource.returns(externalContext.getProducedType());
+        tryCreateSink(sinkStream, externalContext, sinkSettings).setParallelism(afterParallelism);
+        addCollectSink(restartSource);
+        final JobClient restartJobClient = restartEnv.executeAsync("Restart Test");
+
+        try {
+            // Check the result
+            checkResultWithSemantic(
+                    externalContext.createSinkDataReader(sinkSettings), testRecords, semantic);
+        } finally {
+            killJob(restartJobClient);
+            iterator.close();
+        }
+    }
+
+    /**
+     * Test connector sink metrics.
+     *
+     * <p>This test will create a sink in the external system, generate test data and write them to
+     * the sink via a Flink job. Then read and compare the metrics.
+     *
+     * <p>Now test: numRecordsOut
+     */
+    @TestTemplate
+    @DisplayName("Test sink metrics")
+    public void testMetrics(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        TestingSinkSettings sinkSettings = getTestingSinkSettings(semantic);
+        int parallelism = 1;
+        final List<T> testRecords = generateTestData(sinkSettings, externalContext);
+
+        // make sure to use a different sink name for each execution
+        String sinkName = "metricTestSink" + testRecords.hashCode();
+        final StreamExecutionEnvironment env =
+                testEnv.createExecutionEnvironment(
+                        TestEnvironmentSettings.builder()
+                                .setConnectorJarPaths(externalContext.getConnectorJarPaths())
+                                .build());
+        env.enableCheckpointing(50);
+
+        DataStreamSource<T> source =
+                env.fromSource(
+                                new FromElementsSource<>(
+                                        Boundedness.CONTINUOUS_UNBOUNDED,
+                                        testRecords,
+                                        testRecords.size()),
+                                WatermarkStrategy.noWatermarks(),
+                                "metricTestSource")
+                        .setParallelism(1);
+
+        DataStream<T> dataStream = source.returns(externalContext.getProducedType());
+        tryCreateSink(dataStream, externalContext, sinkSettings)
+                .name(sinkName)
+                .setParallelism(parallelism);
+        final JobClient jobClient = env.executeAsync("Metrics Test");
+        final MetricQuerier queryRestClient = new MetricQuerier(new Configuration());
+        try {
+            waitForAllTaskRunning(
+                    () ->
+                            queryRestClient.getJobDetails(
+                                    testEnv.getRestEndpoint(), jobClient.getJobID()),
+                    Deadline.fromNow(DEFAULT_JOB_STATUS_CHANGE_TIMEOUT));
+
+            waitUntilCondition(
+                    () -> {
+                        // test metrics
+                        try {
+                            return compareSinkMetrics(
+                                    queryRestClient,
+                                    testEnv,
+                                    externalContext,
+                                    jobClient.getJobID(),
+                                    sinkName,
+                                    testRecords.size());
+                        } catch (Exception e) {
+                            // skip failed assert try
+                            return false;
+                        }
+                    },
+                    Deadline.fromNow(DEFAULT_COLLECT_DATA_TIMEOUT));
+        } finally {
+            // Clean up
+            killJob(jobClient);
+        }
+    }
+
+    // ----------------------------- Helper Functions ---------------------------------
+
+    /**
+     * Generate a set of test records.
+     *
+     * @param testingSinkSettings sink settings
+     * @param externalContext External context
+     * @return Collection of generated test records
+     */
+    protected List<T> generateTestData(
+            TestingSinkSettings testingSinkSettings,
+            DataStreamSinkExternalContext<T> externalContext) {
+        return externalContext.generateTestData(
+                testingSinkSettings, ThreadLocalRandom.current().nextLong());
+    }
+
+    /**
+     * Poll records from the sink.
+     *
+     * @param result The list to which polled records are appended
+     * @param reader The sink reader
+     * @param expected The expected records, used to decide when to stop polling
+     * @param retryTimes The number of polling retries
+     * @param semantic The semantic
+     * @return Collection of records in the Sink
+     */
+    private List<T> appendResultData(

Review comment:
       The method name does not match its Javadoc: the Javadoc describes polling records from the sink, but the method is named `appendResultData`.
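
       One possible way to bring the two back in line is sketched below; the name `pollAndAppendResultData` is only an illustration, not a prescribed fix, and the body is unchanged from the diff:

   ```java
   /** Polls records from the sink reader and appends them to {@code result}. */
   private List<T> pollAndAppendResultData(
           List<T> result,
           ExternalSystemDataReader<T> reader,
           List<T> expected,
           int retryTimes,
           CheckpointingMode semantic) {
       long timeoutMs = 1000L;
       int retryIndex = 0;
       // Keep polling until the retries are exhausted or enough records have been read
       // for the configured delivery semantic.
       while (retryIndex++ < retryTimes
               && !checkGetEnoughRecordsWithSemantic(expected, result, semantic)) {
           result.addAll(reader.poll(Duration.ofMillis(timeoutMs)));
       }
       return result;
   }
   ```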

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/testsuites/SinkTestSuiteBase.java
##########
@@ -0,0 +1,629 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.testsuites;
+
+import org.apache.flink.annotation.Experimental;
+import org.apache.flink.api.common.JobID;
+import org.apache.flink.api.common.JobStatus;
+import org.apache.flink.api.common.eventtime.WatermarkStrategy;
+import org.apache.flink.api.common.restartstrategy.RestartStrategies;
+import org.apache.flink.api.common.time.Deadline;
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.api.connector.sink2.Sink;
+import org.apache.flink.api.connector.source.Boundedness;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.connector.testframe.environment.TestEnvironment;
+import org.apache.flink.connector.testframe.environment.TestEnvironmentSettings;
+import org.apache.flink.connector.testframe.external.ExternalSystemDataReader;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkExternalContext;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkV1ExternalContext;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkV2ExternalContext;
+import org.apache.flink.connector.testframe.external.sink.TestingSinkSettings;
+import org.apache.flink.connector.testframe.junit.extensions.ConnectorTestingExtension;
+import org.apache.flink.connector.testframe.junit.extensions.TestCaseInvocationContextProvider;
+import org.apache.flink.connector.testframe.source.FromElementsSource;
+import org.apache.flink.connector.testframe.utils.CollectIteratorAssertions;
+import org.apache.flink.connector.testframe.utils.MetricQuerier;
+import org.apache.flink.core.execution.JobClient;
+import org.apache.flink.core.execution.SavepointFormatType;
+import org.apache.flink.runtime.metrics.MetricNames;
+import org.apache.flink.streaming.api.CheckpointingMode;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.datastream.DataStreamSink;
+import org.apache.flink.streaming.api.datastream.DataStreamSource;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.operators.collect.CollectResultIterator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperatorFactory;
+import org.apache.flink.streaming.api.operators.collect.CollectStreamSink;
+import org.apache.flink.util.TestLoggerExtension;
+
+import org.apache.commons.math3.util.Precision;
+import org.junit.jupiter.api.DisplayName;
+import org.junit.jupiter.api.TestInstance;
+import org.junit.jupiter.api.TestTemplate;
+import org.junit.jupiter.api.extension.ExtendWith;
+import org.opentest4j.TestAbortedException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.time.Duration;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+import java.util.UUID;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.ThreadLocalRandom;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+import static org.apache.flink.connector.testframe.utils.ConnectorTestConstants.DEFAULT_COLLECT_DATA_TIMEOUT;
+import static org.apache.flink.connector.testframe.utils.ConnectorTestConstants.DEFAULT_JOB_STATUS_CHANGE_TIMEOUT;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.terminateJob;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForAllTaskRunning;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForJobStatus;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition;
+import static org.apache.flink.streaming.api.CheckpointingMode.AT_LEAST_ONCE;
+import static org.apache.flink.streaming.api.CheckpointingMode.EXACTLY_ONCE;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+import static org.assertj.core.api.AssertionsForClassTypes.assertThat;
+
+/**
+ * Base class for sink test suite.
+ *
+ * <p>All cases should have well-descriptive JavaDoc, including:
+ *
+ * <ul>
+ *   <li>What's the purpose of this case
+ *   <li>Simple description of how this case works
+ *   <li>Condition to fulfill in order to pass this case
+ *   <li>Requirement of running this case
+ * </ul>
+ */
+@ExtendWith({
+    ConnectorTestingExtension.class,
+    TestLoggerExtension.class,
+    TestCaseInvocationContextProvider.class
+})
+@TestInstance(TestInstance.Lifecycle.PER_CLASS)
+@Experimental
+public abstract class SinkTestSuiteBase<T extends Comparable<T>> {
+    private static final Logger LOG = LoggerFactory.getLogger(SinkTestSuiteBase.class);
+
+    // ----------------------------- Basic test cases ---------------------------------
+
+    /**
+     * Test connector data stream sink.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write them to this sink by the Flink Job.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink needs to equal the
+     * number of generated test records, and the records in the sink are compared to the test data
+     * under the configured semantic. There is no requirement on record order.
+     */
+    @TestTemplate
+    @DisplayName("Test data stream sink")
+    public void testBasicSink(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        TestingSinkSettings sinkSettings = getTestingSinkSettings(semantic);
+        final List<T> testRecords = generateTestData(sinkSettings, externalContext);
+
+        // Build and execute Flink job
+        StreamExecutionEnvironment execEnv =
+                testEnv.createExecutionEnvironment(
+                        TestEnvironmentSettings.builder()
+                                .setConnectorJarPaths(externalContext.getConnectorJarPaths())
+                                .build());
+        execEnv.enableCheckpointing(50);
+        DataStream<T> dataStream =
+                execEnv.fromCollection(testRecords)
+                        .name("sourceInSinkTest")
+                        .setParallelism(1)
+                        .returns(externalContext.getProducedType());
+        tryCreateSink(dataStream, externalContext, sinkSettings)
+                .setParallelism(1)
+                .name("sinkInSinkTest");
+        final JobClient jobClient = execEnv.executeAsync("DataStream Sink Test");
+
+        waitForJobStatus(
+                jobClient,
+                Collections.singletonList(JobStatus.FINISHED),
+                Deadline.fromNow(DEFAULT_JOB_STATUS_CHANGE_TIMEOUT));
+
+        // Check test result
+        checkResultWithSemantic(
+                externalContext.createSinkDataReader(sinkSettings), testRecords, semantic);
+    }
+
+    /**
+     * Test connector sink restart from a completed savepoint with the same parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write the first half of them to this sink via a Flink job with parallelism 2. Then it
+     * stops the job with a savepoint and restarts the same job from that completed savepoint. After
+     * the restarted job is running, the other half is written to the sink and the result is compared.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink needs to equal the
+     * number of generated test records, and the records in the sink are compared to the test data
+     * under the configured semantic. There is no requirement on record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting from a savepoint")
+    public void testStartFromSavepoint(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 2, 2);
+    }
+
+    /**
+     * Test connector sink restart from a completed savepoint with a higher parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write the first half of them to this sink via a Flink job with parallelism 2. Then it
+     * stops the job with a savepoint and restarts the same job from that savepoint with a higher
+     * parallelism of 4. After the restarted job is running, the other half is written to the sink
+     * and the result is compared.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink needs to equal the
+     * number of generated test records, and the records in the sink are compared to the test data
+     * under the configured semantic. There is no requirement on record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting with a higher parallelism")
+    public void testScaleUp(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 2, 4);
+    }
+
+    /**
+     * Test connector sink restart from a completed savepoint with a lower parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write the first half of them to this sink via a Flink job with parallelism 4. Then it
+     * stops the job with a savepoint and restarts the same job from that savepoint with a lower
+     * parallelism of 2. After the restarted job is running, the other half is written to the sink
+     * and the result is compared.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink needs to equal the
+     * number of generated test records, and the records in the sink are compared to the test data
+     * under the configured semantic. There is no requirement on record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting with a lower parallelism")
+    public void testScaleDown(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 4, 2);
+    }
+
+    private void restartFromSavepoint(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic,
+            final int beforeParallelism,
+            final int afterParallelism)
+            throws Exception {
+        // Step 1: Preparation
+        TestingSinkSettings sinkSettings = getTestingSinkSettings(semantic);
+        final StreamExecutionEnvironment execEnv =
+                testEnv.createExecutionEnvironment(
+                        TestEnvironmentSettings.builder()
+                                .setConnectorJarPaths(externalContext.getConnectorJarPaths())
+                                .build());
+        execEnv.setRestartStrategy(RestartStrategies.noRestart());
+
+        // Step 2: Generate test data
+        final List<T> testRecords = generateTestData(sinkSettings, externalContext);
+
+        // Step 3: Build and execute Flink job
+        int numBeforeSuccess = testRecords.size() / 2;
+        DataStreamSource<T> source =
+                execEnv.fromSource(
+                                new FromElementsSource<>(
+                                        Boundedness.CONTINUOUS_UNBOUNDED,
+                                        testRecords,
+                                        numBeforeSuccess),
+                                WatermarkStrategy.noWatermarks(),
+                                "beforeRestartSource")
+                        .setParallelism(1);
+
+        DataStream<T> dataStream = source.returns(externalContext.getProducedType());
+        tryCreateSink(dataStream, externalContext, sinkSettings)
+                .name("Sink restart test")
+                .setParallelism(beforeParallelism);
+
+        /*
+         * The job should stop after consuming a specified number of records. In order to know when
+         * the specified number of records have been consumed, a collect sink needs to be watched.
+         */
+        CollectResultIterator<T> iterator = addCollectSink(source);
+        final JobClient jobClient = execEnv.executeAsync("Restart Test");
+        iterator.setJobClient(jobClient);
+
+        // Step 4: Wait for the expected result and stop Flink job with a savepoint
+        String savepointDir;
+        try {
+            final MetricQuerier queryRestClient = new MetricQuerier(new Configuration());
+            waitForAllTaskRunning(
+                    () ->
+                            queryRestClient.getJobDetails(
+                                    testEnv.getRestEndpoint(), jobClient.getJobID()),
+                    Deadline.fromNow(DEFAULT_JOB_STATUS_CHANGE_TIMEOUT));
+
+            waitExpectedSizeData(iterator, numBeforeSuccess);
+
+            savepointDir =
+                    jobClient
+                            .stopWithSavepoint(
+                                    true, testEnv.getCheckpointUri(), SavepointFormatType.CANONICAL)
+                            .get(30, TimeUnit.SECONDS);
+            waitForJobStatus(
+                    jobClient,
+                    Collections.singletonList(JobStatus.FINISHED),
+                    Deadline.fromNow(DEFAULT_JOB_STATUS_CHANGE_TIMEOUT));
+        } catch (Exception e) {
+            killJob(jobClient);
+            throw e;
+        }
+
+        List<T> target = testRecords.subList(0, numBeforeSuccess);
+        checkResultWithSemantic(
+                externalContext.createSinkDataReader(sinkSettings), target, semantic);
+
+        // Step 5: Restart the Flink job with the savepoint
+        final StreamExecutionEnvironment restartEnv =
+                testEnv.createExecutionEnvironment(
+                        TestEnvironmentSettings.builder()
+                                .setConnectorJarPaths(externalContext.getConnectorJarPaths())
+                                .setSavepointRestorePath(savepointDir)
+                                .build());
+        restartEnv.enableCheckpointing(50);
+
+        DataStreamSource<T> restartSource =
+                restartEnv
+                        .fromSource(
+                                new FromElementsSource<>(
+                                        Boundedness.CONTINUOUS_UNBOUNDED,
+                                        testRecords,
+                                        testRecords.size()),
+                                WatermarkStrategy.noWatermarks(),
+                                "restartSource")
+                        .setParallelism(1);
+
+        DataStream<T> sinkStream = restartSource.returns(externalContext.getProducedType());
+        tryCreateSink(sinkStream, externalContext, sinkSettings).setParallelism(afterParallelism);
+        addCollectSink(restartSource);
+        final JobClient restartJobClient = restartEnv.executeAsync("Restart Test");
+
+        try {
+            // Check the result
+            checkResultWithSemantic(
+                    externalContext.createSinkDataReader(sinkSettings), testRecords, semantic);
+        } finally {
+            killJob(restartJobClient);
+            iterator.close();
+        }
+    }
+
+    /**
+     * Test connector sink metrics.
+     *
+     * <p>This test will create a sink in the external system, generate test data and write them to
+     * the sink via a Flink job. Then read and compare the metrics.
+     *
+     * <p>Now test: numRecordsOut
+     */
+    @TestTemplate
+    @DisplayName("Test sink metrics")
+    public void testMetrics(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        TestingSinkSettings sinkSettings = getTestingSinkSettings(semantic);
+        int parallelism = 1;
+        final List<T> testRecords = generateTestData(sinkSettings, externalContext);
+
+        // make sure to use a different sink name for each execution
+        String sinkName = "metricTestSink" + testRecords.hashCode();
+        final StreamExecutionEnvironment env =
+                testEnv.createExecutionEnvironment(
+                        TestEnvironmentSettings.builder()
+                                .setConnectorJarPaths(externalContext.getConnectorJarPaths())
+                                .build());
+        env.enableCheckpointing(50);
+
+        DataStreamSource<T> source =
+                env.fromSource(
+                                new FromElementsSource<>(
+                                        Boundedness.CONTINUOUS_UNBOUNDED,
+                                        testRecords,
+                                        testRecords.size()),
+                                WatermarkStrategy.noWatermarks(),
+                                "metricTestSource")
+                        .setParallelism(1);
+
+        DataStream<T> dataStream = source.returns(externalContext.getProducedType());
+        tryCreateSink(dataStream, externalContext, sinkSettings)
+                .name(sinkName)
+                .setParallelism(parallelism);
+        final JobClient jobClient = env.executeAsync("Metrics Test");
+        final MetricQuerier queryRestClient = new MetricQuerier(new Configuration());
+        try {
+            waitForAllTaskRunning(
+                    () ->
+                            queryRestClient.getJobDetails(
+                                    testEnv.getRestEndpoint(), jobClient.getJobID()),
+                    Deadline.fromNow(DEFAULT_JOB_STATUS_CHANGE_TIMEOUT));
+
+            waitUntilCondition(
+                    () -> {
+                        // test metrics
+                        try {
+                            return compareSinkMetrics(
+                                    queryRestClient,
+                                    testEnv,
+                                    externalContext,
+                                    jobClient.getJobID(),
+                                    sinkName,
+                                    testRecords.size());
+                        } catch (Exception e) {
+                            // skip failed assert try
+                            return false;
+                        }
+                    },
+                    Deadline.fromNow(DEFAULT_COLLECT_DATA_TIMEOUT));
+        } finally {
+            // Clean up
+            killJob(jobClient);
+        }
+    }
+
+    // ----------------------------- Helper Functions ---------------------------------
+
+    /**
+     * Generate a set of test records.
+     *
+     * @param testingSinkSettings sink settings
+     * @param externalContext External context
+     * @return Collection of generated test records
+     */
+    protected List<T> generateTestData(
+            TestingSinkSettings testingSinkSettings,
+            DataStreamSinkExternalContext<T> externalContext) {
+        return externalContext.generateTestData(
+                testingSinkSettings, ThreadLocalRandom.current().nextLong());
+    }
+
+    /**
+     * Poll records from the sink.
+     *
+     * @param result The list to which polled records are appended
+     * @param reader The sink reader
+     * @param expected The expected records, used to decide when to stop polling
+     * @param retryTimes The number of polling retries
+     * @param semantic The semantic
+     * @return Collection of records in the Sink
+     */
+    private List<T> appendResultData(
+            List<T> result,
+            ExternalSystemDataReader<T> reader,
+            List<T> expected,
+            int retryTimes,
+            CheckpointingMode semantic) {
+        long timeoutMs = 1000L;
+        int retryIndex = 0;
+
+        while (retryIndex++ < retryTimes
+                && !checkGetEnoughRecordsWithSemantic(expected, result, semantic)) {
+            result.addAll(reader.poll(Duration.ofMillis(timeoutMs)));
+        }
+        return result;
+    }
+
+    /**
+     * Check whether the polling should stop.
+     *
+     * @param expected The expected list which help to stop polling
+     * @param result The records that have been read
+     * @param semantic The semantic
+     * @return Whether the polling should stop
+     */
+    private boolean checkGetEnoughRecordsWithSemantic(
+            List<T> expected, List<T> result, CheckpointingMode semantic) {
+        checkNotNull(expected);
+        checkNotNull(result);
+        if (EXACTLY_ONCE.equals(semantic)) {
+            return expected.size() <= result.size();
+        } else if (AT_LEAST_ONCE.equals(semantic)) {
+            Set<Integer> matchedIndex = new HashSet<>();
+            for (T record : expected) {
+                int before = matchedIndex.size();
+                for (int i = 0; i < result.size(); i++) {
+                    if (matchedIndex.contains(i)) {
+                        continue;
+                    }
+                    if (record.equals(result.get(i))) {
+                        matchedIndex.add(i);
+                        break;
+                    }
+                }
+                // the record was not found in the result
+                if (before == matchedIndex.size()) {
+                    return false;
+                }
+            }
+            return true;
+        }
+        throw new IllegalStateException(
+                String.format("%s delivery guarantee doesn't support test.", semantic.name()));
+    }
+
+    /**
+     * Compare the test data with the result.

Review comment:
       ```suggestion
     * Compare the test data with the actual data under the given semantic.
   ```

##########
File path: flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/connector/kafka/sink/testutils/KafkaSinkExternalContext.java
##########
@@ -0,0 +1,271 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.kafka.sink.testutils;
+
+import org.apache.flink.api.common.serialization.SimpleStringSchema;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.connector.sink2.Sink;
+import org.apache.flink.connector.base.DeliveryGuarantee;
+import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
+import org.apache.flink.connector.kafka.sink.KafkaSink;
+import org.apache.flink.connector.kafka.sink.KafkaSinkBuilder;
+import org.apache.flink.connector.testframe.external.ExternalSystemDataReader;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkV2ExternalContext;
+import org.apache.flink.connector.testframe.external.sink.TestingSinkSettings;
+import org.apache.flink.streaming.api.CheckpointingMode;
+
+import org.apache.commons.lang3.exception.ExceptionUtils;
+import org.apache.kafka.clients.admin.AdminClient;
+import org.apache.kafka.clients.admin.AdminClientConfig;
+import org.apache.kafka.clients.admin.NewTopic;
+import org.apache.kafka.clients.admin.TopicDescription;
+import org.apache.kafka.clients.consumer.ConsumerConfig;
+import org.apache.kafka.clients.producer.ProducerConfig;
+import org.apache.kafka.common.TopicPartition;
+import org.apache.kafka.common.TopicPartitionInfo;
+import org.apache.kafka.common.errors.UnknownTopicOrPartitionException;
+import org.apache.kafka.common.serialization.StringDeserializer;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Properties;
+import java.util.Random;
+import java.util.Set;
+import java.util.concurrent.ThreadLocalRandom;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.flink.streaming.api.CheckpointingMode.EXACTLY_ONCE;
+
+/**
+ * A Kafka external context that will create only one topic and use partitions in that topic as
+ * source splits.
+ */
+public class KafkaSinkExternalContext implements DataStreamSinkV2ExternalContext<String> {
+
+    private static final Logger LOG = LoggerFactory.getLogger(KafkaSinkExternalContext.class);
+
+    private static final String TOPIC_NAME_PREFIX = "kafka-single-topic";
+    private static final int DEFAULT_TIMEOUT = 30;
+    private static final int NUM_RECORDS_UPPER_BOUND = 500;
+    private static final int NUM_RECORDS_LOWER_BOUND = 100;
+
+    protected String bootstrapServers;
+    protected final String topicName;
+
+    private final List<ExternalSystemDataReader<String>> readers = new ArrayList<>();
+
+    protected int numSplits = 0;
+
+    private List<URL> connectorJarPaths;
+
+    protected final AdminClient kafkaAdminClient;
+
+    public KafkaSinkExternalContext(String bootstrapServers, List<URL> connectorJarPaths) {
+        this.bootstrapServers = bootstrapServers;
+        this.connectorJarPaths = connectorJarPaths;
+        this.topicName =
+                TOPIC_NAME_PREFIX + "-" + ThreadLocalRandom.current().nextLong(Long.MAX_VALUE);
+        kafkaAdminClient = createAdminClient();
+    }
+
+    protected void createTopic(String topicName, int numPartitions, short replicationFactor) {
+        LOG.debug(
+                "Creating new Kafka topic {} with {} partitions and {} replicas",
+                topicName,
+                numPartitions,
+                replicationFactor);
+        NewTopic newTopic = new NewTopic(topicName, numPartitions, replicationFactor);
+        try {
+            kafkaAdminClient
+                    .createTopics(Collections.singletonList(newTopic))
+                    .all()
+                    .get(DEFAULT_TIMEOUT, TimeUnit.SECONDS);
+        } catch (Exception e) {
+            throw new RuntimeException(String.format("Cannot create topic '%s'", topicName), e);
+        }
+    }
+
+    protected void deleteTopic(String topicName) {
+        LOG.debug("Deleting Kafka topic {}", topicName);
+        try {
+            kafkaAdminClient
+                    .deleteTopics(Collections.singletonList(topicName))
+                    .all()
+                    .get(DEFAULT_TIMEOUT, TimeUnit.SECONDS);
+        } catch (Exception e) {
+            if (ExceptionUtils.getRootCause(e) instanceof UnknownTopicOrPartitionException) {
+                throw new RuntimeException(String.format("Cannot delete topic '%s'", topicName), e);

Review comment:
       ```suggestion
                   throw new RuntimeException(String.format("Cannot delete unknown Kafka topic '%s'", topicName), e);
   ```

##########
File path: flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/connector/kafka/sink/testutils/KafkaSinkExternalContextFactory.java
##########
@@ -0,0 +1,53 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.kafka.sink.testutils;
+
+import org.apache.flink.connector.testframe.external.ExternalContextFactory;
+
+import org.testcontainers.containers.KafkaContainer;
+
+import java.net.URL;
+import java.util.List;
+import java.util.stream.Collectors;
+
+/** Kafka table sink external context factory. */

Review comment:
       Typo: "Kafka table sink"? This factory is for the DataStream Kafka sink, so "table" should probably be dropped.

##########
File path: flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/connector/kafka/sink/KafkaSinkITCase.java
##########
@@ -161,6 +177,49 @@ public void tearDown() throws ExecutionException, InterruptedException, TimeoutE
         deleteTestTopic(topic);
     }
 
+    /** Integration test based on connector testing framework. */
+    @Nested
+    class IntegrationTests extends SinkTestSuiteBase<String> {
+        // Defines test environment on Flink MiniCluster
+        @SuppressWarnings("unused")
+        @TestEnv
+        MiniClusterTestEnvironment flink = new MiniClusterTestEnvironment();
+
+        // Defines external system
+        @TestExternalSystem
+        DefaultContainerizedExternalSystem<KafkaContainer> kafka =
+                DefaultContainerizedExternalSystem.builder()
+                        .fromContainer(
+                                new KafkaContainer(
+                                        DockerImageName.parse(DockerImageVersions.KAFKA)))
+                        .build();
+
+        @SuppressWarnings("unused")
+        @TestSemantics
+        CheckpointingMode[] semantics =
+                new CheckpointingMode[] {
+                    CheckpointingMode.EXACTLY_ONCE, CheckpointingMode.AT_LEAST_ONCE
+                };
+
+        @SuppressWarnings("unused")
+        @TestContext
+        KafkaSinkExternalContextFactory sinkContext =
+                new KafkaSinkExternalContextFactory(kafka.getContainer(), Collections.emptyList());
+
+        /**
+         * Disable the metric test because of the metric
+         * bug(https://issues.apache.org/jira/browse/FLINK-26126).
+         */
+        @Disabled

Review comment:
       ```suggestion
       @Disabled("Skip metric test until FLINK-26126 fixed")
   ```

##########
File path: flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/connector/kafka/sink/testutils/KafkaSinkExternalContext.java
##########
@@ -0,0 +1,271 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.kafka.sink.testutils;
+
+import org.apache.flink.api.common.serialization.SimpleStringSchema;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.connector.sink2.Sink;
+import org.apache.flink.connector.base.DeliveryGuarantee;
+import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
+import org.apache.flink.connector.kafka.sink.KafkaSink;
+import org.apache.flink.connector.kafka.sink.KafkaSinkBuilder;
+import org.apache.flink.connector.testframe.external.ExternalSystemDataReader;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkV2ExternalContext;
+import org.apache.flink.connector.testframe.external.sink.TestingSinkSettings;
+import org.apache.flink.streaming.api.CheckpointingMode;
+
+import org.apache.commons.lang3.exception.ExceptionUtils;
+import org.apache.kafka.clients.admin.AdminClient;
+import org.apache.kafka.clients.admin.AdminClientConfig;
+import org.apache.kafka.clients.admin.NewTopic;
+import org.apache.kafka.clients.admin.TopicDescription;
+import org.apache.kafka.clients.consumer.ConsumerConfig;
+import org.apache.kafka.clients.producer.ProducerConfig;
+import org.apache.kafka.common.TopicPartition;
+import org.apache.kafka.common.TopicPartitionInfo;
+import org.apache.kafka.common.errors.UnknownTopicOrPartitionException;
+import org.apache.kafka.common.serialization.StringDeserializer;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Properties;
+import java.util.Random;
+import java.util.Set;
+import java.util.concurrent.ThreadLocalRandom;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.flink.streaming.api.CheckpointingMode.EXACTLY_ONCE;
+
+/**
+ * A Kafka external context that will create only one topic and use partitions in that topic as
+ * source splits.
+ */
+public class KafkaSinkExternalContext implements DataStreamSinkV2ExternalContext<String> {
+
+    private static final Logger LOG = LoggerFactory.getLogger(KafkaSinkExternalContext.class);
+
+    private static final String TOPIC_NAME_PREFIX = "kafka-single-topic";
+    private static final int DEFAULT_TIMEOUT = 30;
+    private static final int NUM_RECORDS_UPPER_BOUND = 500;
+    private static final int NUM_RECORDS_LOWER_BOUND = 100;
+
+    protected String bootstrapServers;
+    protected final String topicName;
+
+    private final List<ExternalSystemDataReader<String>> readers = new ArrayList<>();
+
+    protected int numSplits = 0;
+
+    private List<URL> connectorJarPaths;
+
+    protected final AdminClient kafkaAdminClient;
+
+    public KafkaSinkExternalContext(String bootstrapServers, List<URL> connectorJarPaths) {
+        this.bootstrapServers = bootstrapServers;
+        this.connectorJarPaths = connectorJarPaths;
+        this.topicName =
+                TOPIC_NAME_PREFIX + "-" + ThreadLocalRandom.current().nextLong(Long.MAX_VALUE);
+        kafkaAdminClient = createAdminClient();
+    }
+
+    protected void createTopic(String topicName, int numPartitions, short replicationFactor) {
+        LOG.debug(
+                "Creating new Kafka topic {} with {} partitions and {} replicas",
+                topicName,
+                numPartitions,
+                replicationFactor);
+        NewTopic newTopic = new NewTopic(topicName, numPartitions, replicationFactor);
+        try {
+            kafkaAdminClient
+                    .createTopics(Collections.singletonList(newTopic))
+                    .all()
+                    .get(DEFAULT_TIMEOUT, TimeUnit.SECONDS);
+        } catch (Exception e) {
+            throw new RuntimeException(String.format("Cannot create topic '%s'", topicName), e);
+        }
+    }
+
+    protected void deleteTopic(String topicName) {
+        LOG.debug("Deleting Kafka topic {}", topicName);
+        try {
+            kafkaAdminClient
+                    .deleteTopics(Collections.singletonList(topicName))
+                    .all()
+                    .get(DEFAULT_TIMEOUT, TimeUnit.SECONDS);
+        } catch (Exception e) {
+            if (ExceptionUtils.getRootCause(e) instanceof UnknownTopicOrPartitionException) {
+                throw new RuntimeException(String.format("Cannot delete topic '%s'", topicName), e);
+            }
+        }
+    }

Review comment:
       Do we need these methods to be `protected`?

##########
File path: flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/connector/kafka/sink/testutils/KafkaSinkExternalContext.java
##########
@@ -0,0 +1,271 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.kafka.sink.testutils;
+
+import org.apache.flink.api.common.serialization.SimpleStringSchema;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.connector.sink2.Sink;
+import org.apache.flink.connector.base.DeliveryGuarantee;
+import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
+import org.apache.flink.connector.kafka.sink.KafkaSink;
+import org.apache.flink.connector.kafka.sink.KafkaSinkBuilder;
+import org.apache.flink.connector.testframe.external.ExternalSystemDataReader;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkV2ExternalContext;
+import org.apache.flink.connector.testframe.external.sink.TestingSinkSettings;
+import org.apache.flink.streaming.api.CheckpointingMode;
+
+import org.apache.commons.lang3.exception.ExceptionUtils;
+import org.apache.kafka.clients.admin.AdminClient;
+import org.apache.kafka.clients.admin.AdminClientConfig;
+import org.apache.kafka.clients.admin.NewTopic;
+import org.apache.kafka.clients.admin.TopicDescription;
+import org.apache.kafka.clients.consumer.ConsumerConfig;
+import org.apache.kafka.clients.producer.ProducerConfig;
+import org.apache.kafka.common.TopicPartition;
+import org.apache.kafka.common.TopicPartitionInfo;
+import org.apache.kafka.common.errors.UnknownTopicOrPartitionException;
+import org.apache.kafka.common.serialization.StringDeserializer;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Properties;
+import java.util.Random;
+import java.util.Set;
+import java.util.concurrent.ThreadLocalRandom;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.flink.streaming.api.CheckpointingMode.EXACTLY_ONCE;
+
+/**
+ * A Kafka external context that will create only one topic and use partitions in that topic as
+ * source splits.
+ */
+public class KafkaSinkExternalContext implements DataStreamSinkV2ExternalContext<String> {
+
+    private static final Logger LOG = LoggerFactory.getLogger(KafkaSinkExternalContext.class);
+
+    private static final String TOPIC_NAME_PREFIX = "kafka-single-topic";
+    private static final int DEFAULT_TIMEOUT = 30;
+    private static final int NUM_RECORDS_UPPER_BOUND = 500;
+    private static final int NUM_RECORDS_LOWER_BOUND = 100;
+
+    protected String bootstrapServers;
+    protected final String topicName;
+
+    private final List<ExternalSystemDataReader<String>> readers = new ArrayList<>();
+
+    protected int numSplits = 0;
+
+    private List<URL> connectorJarPaths;
+
+    protected final AdminClient kafkaAdminClient;
+
+    public KafkaSinkExternalContext(String bootstrapServers, List<URL> connectorJarPaths) {
+        this.bootstrapServers = bootstrapServers;
+        this.connectorJarPaths = connectorJarPaths;
+        this.topicName =
+                TOPIC_NAME_PREFIX + "-" + ThreadLocalRandom.current().nextLong(Long.MAX_VALUE);
+        kafkaAdminClient = createAdminClient();
+    }
+
+    protected void createTopic(String topicName, int numPartitions, short replicationFactor) {
+        LOG.debug(
+                "Creating new Kafka topic {} with {} partitions and {} replicas",
+                topicName,
+                numPartitions,
+                replicationFactor);
+        NewTopic newTopic = new NewTopic(topicName, numPartitions, replicationFactor);
+        try {
+            kafkaAdminClient
+                    .createTopics(Collections.singletonList(newTopic))
+                    .all()
+                    .get(DEFAULT_TIMEOUT, TimeUnit.SECONDS);
+        } catch (Exception e) {
+            throw new RuntimeException(String.format("Cannot create topic '%s'", topicName), e);
+        }
+    }
+
+    protected void deleteTopic(String topicName) {
+        LOG.debug("Deleting Kafka topic {}", topicName);
+        try {
+            kafkaAdminClient
+                    .deleteTopics(Collections.singletonList(topicName))
+                    .all()
+                    .get(DEFAULT_TIMEOUT, TimeUnit.SECONDS);
+        } catch (Exception e) {
+            if (ExceptionUtils.getRootCause(e) instanceof UnknownTopicOrPartitionException) {
+                throw new RuntimeException(String.format("Cannot delete topic '%s'", topicName), e);
+            }
+        }
+    }
+
+    private AdminClient createAdminClient() {
+        Properties config = new Properties();
+        config.setProperty(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
+        return AdminClient.create(config);
+    }
+
+    @Override
+    public Sink<String> createSink(TestingSinkSettings sinkSettings) {
+        if (!topicExists(topicName)) {
+            createTopic(topicName, 4, (short) 1);
+        }
+
+        KafkaSinkBuilder<String> builder = KafkaSink.builder();
+        Properties properties = new Properties();
+        properties.put(ProducerConfig.TRANSACTION_TIMEOUT_CONFIG, 900000);
+        builder.setBootstrapServers(bootstrapServers)
+                .setDeliverGuarantee(toDeliveryGuarantee(sinkSettings.getCheckpointingMode()))
+                .setTransactionalIdPrefix("testingFramework")
+                .setKafkaProducerConfig(properties)
+                .setRecordSerializer(
+                        KafkaRecordSerializationSchema.builder()
+                                .setTopic(topicName)
+                                .setValueSerializationSchema(new SimpleStringSchema())
+                                .build());
+        return builder.build();
+    }
+
+    @Override
+    public ExternalSystemDataReader<String> createSinkDataReader(TestingSinkSettings sinkSettings) {
+        LOG.info("Fetching descriptions for topic: {}", topicName);

Review comment:
       I don't get the meaning of this log message — what is it supposed to tell the reader?
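
       Maybe state what the metadata is used for, e.g. (just a sketch of one possible wording):
       ```java
       // The reader subscribes to every partition of the sink topic, so fetch its partition metadata first.
       LOG.info("Fetching partition metadata for topic {}", topicName);
       ```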

##########
File path: flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/connector/kafka/sink/testutils/KafkaSinkExternalContext.java
##########
@@ -0,0 +1,271 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.kafka.sink.testutils;
+
+import org.apache.flink.api.common.serialization.SimpleStringSchema;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.connector.sink2.Sink;
+import org.apache.flink.connector.base.DeliveryGuarantee;
+import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
+import org.apache.flink.connector.kafka.sink.KafkaSink;
+import org.apache.flink.connector.kafka.sink.KafkaSinkBuilder;
+import org.apache.flink.connector.testframe.external.ExternalSystemDataReader;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkV2ExternalContext;
+import org.apache.flink.connector.testframe.external.sink.TestingSinkSettings;
+import org.apache.flink.streaming.api.CheckpointingMode;
+
+import org.apache.commons.lang3.exception.ExceptionUtils;
+import org.apache.kafka.clients.admin.AdminClient;
+import org.apache.kafka.clients.admin.AdminClientConfig;
+import org.apache.kafka.clients.admin.NewTopic;
+import org.apache.kafka.clients.admin.TopicDescription;
+import org.apache.kafka.clients.consumer.ConsumerConfig;
+import org.apache.kafka.clients.producer.ProducerConfig;
+import org.apache.kafka.common.TopicPartition;
+import org.apache.kafka.common.TopicPartitionInfo;
+import org.apache.kafka.common.errors.UnknownTopicOrPartitionException;
+import org.apache.kafka.common.serialization.StringDeserializer;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Properties;
+import java.util.Random;
+import java.util.Set;
+import java.util.concurrent.ThreadLocalRandom;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.flink.streaming.api.CheckpointingMode.EXACTLY_ONCE;
+
+/**
+ * A Kafka external context that will create only one topic and use partitions in that topic as
+ * source splits.
+ */
+public class KafkaSinkExternalContext implements DataStreamSinkV2ExternalContext<String> {
+
+    private static final Logger LOG = LoggerFactory.getLogger(KafkaSinkExternalContext.class);
+
+    private static final String TOPIC_NAME_PREFIX = "kafka-single-topic";
+    private static final int DEFAULT_TIMEOUT = 30;
+    private static final int NUM_RECORDS_UPPER_BOUND = 500;
+    private static final int NUM_RECORDS_LOWER_BOUND = 100;
+
+    protected String bootstrapServers;
+    protected final String topicName;
+
+    private final List<ExternalSystemDataReader<String>> readers = new ArrayList<>();
+
+    protected int numSplits = 0;
+
+    private List<URL> connectorJarPaths;
+
+    protected final AdminClient kafkaAdminClient;
+
+    public KafkaSinkExternalContext(String bootstrapServers, List<URL> connectorJarPaths) {
+        this.bootstrapServers = bootstrapServers;
+        this.connectorJarPaths = connectorJarPaths;
+        this.topicName =
+                TOPIC_NAME_PREFIX + "-" + ThreadLocalRandom.current().nextLong(Long.MAX_VALUE);
+        kafkaAdminClient = createAdminClient();
+    }
+
+    protected void createTopic(String topicName, int numPartitions, short replicationFactor) {
+        LOG.debug(
+                "Creating new Kafka topic {} with {} partitions and {} replicas",
+                topicName,
+                numPartitions,
+                replicationFactor);
+        NewTopic newTopic = new NewTopic(topicName, numPartitions, replicationFactor);
+        try {
+            kafkaAdminClient
+                    .createTopics(Collections.singletonList(newTopic))
+                    .all()
+                    .get(DEFAULT_TIMEOUT, TimeUnit.SECONDS);
+        } catch (Exception e) {
+            throw new RuntimeException(String.format("Cannot create topic '%s'", topicName), e);
+        }
+    }
+
+    protected void deleteTopic(String topicName) {
+        LOG.debug("Deleting Kafka topic {}", topicName);
+        try {
+            kafkaAdminClient
+                    .deleteTopics(Collections.singletonList(topicName))
+                    .all()
+                    .get(DEFAULT_TIMEOUT, TimeUnit.SECONDS);
+        } catch (Exception e) {
+            if (ExceptionUtils.getRootCause(e) instanceof UnknownTopicOrPartitionException) {
+                throw new RuntimeException(String.format("Cannot delete topic '%s'", topicName), e);
+            }
+        }
+    }
+
+    private AdminClient createAdminClient() {
+        Properties config = new Properties();
+        config.setProperty(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
+        return AdminClient.create(config);
+    }
+
+    @Override
+    public Sink<String> createSink(TestingSinkSettings sinkSettings) {
+        if (!topicExists(topicName)) {
+            createTopic(topicName, 4, (short) 1);
+        }
+
+        KafkaSinkBuilder<String> builder = KafkaSink.builder();
+        Properties properties = new Properties();
+        properties.put(ProducerConfig.TRANSACTION_TIMEOUT_CONFIG, 900000);
+        builder.setBootstrapServers(bootstrapServers)
+                .setDeliverGuarantee(toDeliveryGuarantee(sinkSettings.getCheckpointingMode()))
+                .setTransactionalIdPrefix("testingFramework")
+                .setKafkaProducerConfig(properties)
+                .setRecordSerializer(
+                        KafkaRecordSerializationSchema.builder()
+                                .setTopic(topicName)
+                                .setValueSerializationSchema(new SimpleStringSchema())
+                                .build());
+        return builder.build();
+    }
+
+    @Override
+    public ExternalSystemDataReader<String> createSinkDataReader(TestingSinkSettings sinkSettings) {
+        LOG.info("Fetching descriptions for topic: {}", topicName);
+        final Map<String, TopicDescription> topicMetadata =
+                getTopicMetadata(Arrays.asList(topicName));
+
+        Set<TopicPartition> subscribedPartitions = new HashSet<>();
+        for (TopicDescription topic : topicMetadata.values()) {
+            for (TopicPartitionInfo partition : topic.partitions()) {
+                subscribedPartitions.add(new TopicPartition(topic.name(), partition.partition()));
+            }
+        }
+
+        Properties properties = new Properties();
+        properties.setProperty(
+                ConsumerConfig.GROUP_ID_CONFIG,
+                "flink-kafka-test" + subscribedPartitions.hashCode());
+        properties.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
+        properties.setProperty(
+                ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
+                StringDeserializer.class.getCanonicalName());
+        properties.setProperty(
+                ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
+                StringDeserializer.class.getCanonicalName());
+        if (EXACTLY_ONCE.equals(sinkSettings.getCheckpointingMode())) {
+            // default is read_uncommitted
+            properties.setProperty(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");
+        }
+        properties.setProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
+        readers.add(new KafkaDataReader(properties, subscribedPartitions));
+        return readers.get(readers.size() - 1);
+    }
+
+    @Override
+    public List<String> generateTestData(TestingSinkSettings sinkSettings, long seed) {
+        Random random = new Random(seed);
+        List<String> randomStringRecords = new ArrayList<>();
+        int recordNum =
+                random.nextInt(NUM_RECORDS_UPPER_BOUND - NUM_RECORDS_LOWER_BOUND)
+                        + NUM_RECORDS_LOWER_BOUND;
+        for (int i = 0; i < recordNum; i++) {
+            int stringLength = random.nextInt(50) + 1;
+            randomStringRecords.add(generateRandomString(stringLength, random));
+        }
+        return randomStringRecords;
+    }
+
+    private String generateRandomString(int length, Random random) {
+        String alphaNumericString =
+                "ABCDEFGHIJKLMNOPQRSTUVWXYZ" + "abcdefghijklmnopqrstuvwxyz" + "0123456789";
+        StringBuilder sb = new StringBuilder();
+        for (int i = 0; i < length; ++i) {
+            sb.append(alphaNumericString.charAt(random.nextInt(alphaNumericString.length())));
+        }
+        return sb.toString();
+    }
+
+    protected Map<String, TopicDescription> getTopicMetadata(List<String> topics) {
+        try {
+            return kafkaAdminClient.describeTopics(topics).all().get();
+        } catch (Exception e) {
+            throw new RuntimeException(
+                    String.format("Failed to get metadata for topics %s.", topics), e);
+        }
+    }
+
+    private boolean topicExists(String topic) {
+        try {
+            kafkaAdminClient.describeTopics(Arrays.asList(topic)).all().get();
+            return true;
+        } catch (Exception e) {
+            return false;
+        }
+    }
+
+    @Override
+    public void close() {
+        if (numSplits != 0) {
+            deleteTopic(topicName);
+        }
+        readers.forEach(

Review comment:
       Check for null before cleaning up / releasing these resources — see the sketch below.
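
       For example, a minimal sketch of a guarded cleanup (assuming the readers are `Closeable`; names as in this class):
       ```java
       @Override
       public void close() {
           if (numSplits != 0) {
               deleteTopic(topicName);
           }
           // Guard against a partially initialized context before releasing resources.
           for (ExternalSystemDataReader<String> reader : readers) {
               if (reader == null) {
                   continue;
               }
               try {
                   reader.close();
               } catch (Exception e) {
                   LOG.warn("Failed to close reader", e);
               }
           }
           if (kafkaAdminClient != null) {
               kafkaAdminClient.close();
           }
       }
       ```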

##########
File path: flink-end-to-end-tests/flink-end-to-end-tests-common-kafka/src/test/java/org/apache/flink/tests/util/kafka/KafkaSinkE2ECase.java
##########
@@ -0,0 +1,101 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.tests.util.kafka;
+
+import org.apache.flink.connector.kafka.sink.testutils.KafkaSinkExternalContextFactory;
+import org.apache.flink.connector.testframe.environment.TestEnvironment;
+import org.apache.flink.connector.testframe.external.DefaultContainerizedExternalSystem;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkExternalContext;
+import org.apache.flink.connector.testframe.junit.annotations.TestContext;
+import org.apache.flink.connector.testframe.junit.annotations.TestEnv;
+import org.apache.flink.connector.testframe.junit.annotations.TestExternalSystem;
+import org.apache.flink.connector.testframe.junit.annotations.TestSemantics;
+import org.apache.flink.connector.testframe.testsuites.SinkTestSuiteBase;
+import org.apache.flink.streaming.api.CheckpointingMode;
+import org.apache.flink.tests.util.TestUtils;
+import org.apache.flink.tests.util.flink.FlinkContainerTestEnvironment;
+import org.apache.flink.util.DockerImageVersions;
+
+import org.junit.jupiter.api.Disabled;
+import org.junit.jupiter.api.TestTemplate;
+import org.testcontainers.containers.KafkaContainer;
+import org.testcontainers.utility.DockerImageName;
+
+import java.util.Arrays;
+
+/** Kafka sink E2E test based on connector testing framework. */
+@SuppressWarnings("unused")
+public class KafkaSinkE2ECase extends SinkTestSuiteBase<String> {
+    private static final String KAFKA_HOSTNAME = "kafka";
+
+    @TestSemantics
+    CheckpointingMode[] semantics =
+            new CheckpointingMode[] {
+                CheckpointingMode.EXACTLY_ONCE, CheckpointingMode.AT_LEAST_ONCE
+            };
+
+    // Defines TestEnvironment
+    @TestEnv FlinkContainerTestEnvironment flink = new FlinkContainerTestEnvironment(1, 6);
+
+    // Defines ConnectorExternalSystem
+    @TestExternalSystem
+    DefaultContainerizedExternalSystem<KafkaContainer> kafka =
+            DefaultContainerizedExternalSystem.builder()
+                    .fromContainer(
+                            new KafkaContainer(DockerImageName.parse(DockerImageVersions.KAFKA))
+                                    .withNetworkAliases(KAFKA_HOSTNAME))
+                    .bindWithFlinkContainer(flink.getFlinkContainers().getJobManager())
+                    .build();
+
+    // Defines 2 External context Factories, so test cases will be invoked twice using these two
+    // kinds of external contexts.
+    @SuppressWarnings("unused")

Review comment:
       Redundant annotation — it duplicates the one on line 43.

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/external/sink/DataStreamSinkV1ExternalContext.java
##########
@@ -0,0 +1,39 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.external.sink;
+
+import org.apache.flink.annotation.Experimental;
+import org.apache.flink.api.connector.sink.Sink;
+
+/**
+ * External context for DataStream sinks which is sink version 1.

Review comment:
       ```suggestion
    * External context for DataStream sinks based on the V1 sink API.
   ```

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/external/sink/DataStreamSinkV2ExternalContext.java
##########
@@ -0,0 +1,38 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.external.sink;
+
+import org.apache.flink.annotation.Experimental;
+import org.apache.flink.api.connector.sink2.Sink;
+
+/**
+ * External context for DataStream sinks which is sink version 2.

Review comment:
       ```suggestion
    * External context for DataStream sinks based on the V2 sink API.
   ```

##########
File path: flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/connector/kafka/sink/testutils/KafkaSinkExternalContext.java
##########
@@ -0,0 +1,271 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.kafka.sink.testutils;
+
+import org.apache.flink.api.common.serialization.SimpleStringSchema;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.connector.sink2.Sink;
+import org.apache.flink.connector.base.DeliveryGuarantee;
+import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
+import org.apache.flink.connector.kafka.sink.KafkaSink;
+import org.apache.flink.connector.kafka.sink.KafkaSinkBuilder;
+import org.apache.flink.connector.testframe.external.ExternalSystemDataReader;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkV2ExternalContext;
+import org.apache.flink.connector.testframe.external.sink.TestingSinkSettings;
+import org.apache.flink.streaming.api.CheckpointingMode;
+
+import org.apache.commons.lang3.exception.ExceptionUtils;
+import org.apache.kafka.clients.admin.AdminClient;
+import org.apache.kafka.clients.admin.AdminClientConfig;
+import org.apache.kafka.clients.admin.NewTopic;
+import org.apache.kafka.clients.admin.TopicDescription;
+import org.apache.kafka.clients.consumer.ConsumerConfig;
+import org.apache.kafka.clients.producer.ProducerConfig;
+import org.apache.kafka.common.TopicPartition;
+import org.apache.kafka.common.TopicPartitionInfo;
+import org.apache.kafka.common.errors.UnknownTopicOrPartitionException;
+import org.apache.kafka.common.serialization.StringDeserializer;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Properties;
+import java.util.Random;
+import java.util.Set;
+import java.util.concurrent.ThreadLocalRandom;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.flink.streaming.api.CheckpointingMode.EXACTLY_ONCE;
+
+/**
+ * A Kafka external context that will create only one topic and use partitions in that topic as
+ * source splits.
+ */
+public class KafkaSinkExternalContext implements DataStreamSinkV2ExternalContext<String> {
+
+    private static final Logger LOG = LoggerFactory.getLogger(KafkaSinkExternalContext.class);
+
+    private static final String TOPIC_NAME_PREFIX = "kafka-single-topic";
+    private static final int DEFAULT_TIMEOUT = 30;
+    private static final int NUM_RECORDS_UPPER_BOUND = 500;
+    private static final int NUM_RECORDS_LOWER_BOUND = 100;
+
+    protected String bootstrapServers;
+    protected final String topicName;
+
+    private final List<ExternalSystemDataReader<String>> readers = new ArrayList<>();
+
+    protected int numSplits = 0;
+
+    private List<URL> connectorJarPaths;
+
+    protected final AdminClient kafkaAdminClient;
+
+    public KafkaSinkExternalContext(String bootstrapServers, List<URL> connectorJarPaths) {
+        this.bootstrapServers = bootstrapServers;
+        this.connectorJarPaths = connectorJarPaths;
+        this.topicName =
+                TOPIC_NAME_PREFIX + "-" + ThreadLocalRandom.current().nextLong(Long.MAX_VALUE);
+        kafkaAdminClient = createAdminClient();
+    }
+
+    protected void createTopic(String topicName, int numPartitions, short replicationFactor) {
+        LOG.debug(
+                "Creating new Kafka topic {} with {} partitions and {} replicas",
+                topicName,
+                numPartitions,
+                replicationFactor);
+        NewTopic newTopic = new NewTopic(topicName, numPartitions, replicationFactor);
+        try {
+            kafkaAdminClient
+                    .createTopics(Collections.singletonList(newTopic))
+                    .all()
+                    .get(DEFAULT_TIMEOUT, TimeUnit.SECONDS);
+        } catch (Exception e) {
+            throw new RuntimeException(String.format("Cannot create topic '%s'", topicName), e);
+        }
+    }
+
+    protected void deleteTopic(String topicName) {
+        LOG.debug("Deleting Kafka topic {}", topicName);
+        try {
+            kafkaAdminClient
+                    .deleteTopics(Collections.singletonList(topicName))
+                    .all()
+                    .get(DEFAULT_TIMEOUT, TimeUnit.SECONDS);
+        } catch (Exception e) {
+            if (ExceptionUtils.getRootCause(e) instanceof UnknownTopicOrPartitionException) {
+                throw new RuntimeException(String.format("Cannot delete topic '%s'", topicName), e);
+            }
+        }
+    }
+
+    private AdminClient createAdminClient() {
+        Properties config = new Properties();
+        config.setProperty(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
+        return AdminClient.create(config);
+    }
+
+    @Override
+    public Sink<String> createSink(TestingSinkSettings sinkSettings) {
+        if (!topicExists(topicName)) {
+            createTopic(topicName, 4, (short) 1);
+        }
+
+        KafkaSinkBuilder<String> builder = KafkaSink.builder();
+        Properties properties = new Properties();
+        properties.put(ProducerConfig.TRANSACTION_TIMEOUT_CONFIG, 900000);
+        builder.setBootstrapServers(bootstrapServers)
+                .setDeliverGuarantee(toDeliveryGuarantee(sinkSettings.getCheckpointingMode()))
+                .setTransactionalIdPrefix("testingFramework")
+                .setKafkaProducerConfig(properties)
+                .setRecordSerializer(
+                        KafkaRecordSerializationSchema.builder()
+                                .setTopic(topicName)
+                                .setValueSerializationSchema(new SimpleStringSchema())
+                                .build());
+        return builder.build();
+    }
+
+    @Override
+    public ExternalSystemDataReader<String> createSinkDataReader(TestingSinkSettings sinkSettings) {
+        LOG.info("Fetching descriptions for topic: {}", topicName);
+        final Map<String, TopicDescription> topicMetadata =
+                getTopicMetadata(Arrays.asList(topicName));
+
+        Set<TopicPartition> subscribedPartitions = new HashSet<>();
+        for (TopicDescription topic : topicMetadata.values()) {
+            for (TopicPartitionInfo partition : topic.partitions()) {
+                subscribedPartitions.add(new TopicPartition(topic.name(), partition.partition()));
+            }
+        }
+
+        Properties properties = new Properties();
+        properties.setProperty(
+                ConsumerConfig.GROUP_ID_CONFIG,
+                "flink-kafka-test" + subscribedPartitions.hashCode());
+        properties.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
+        properties.setProperty(
+                ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
+                StringDeserializer.class.getCanonicalName());
+        properties.setProperty(
+                ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
+                StringDeserializer.class.getCanonicalName());
+        if (EXACTLY_ONCE.equals(sinkSettings.getCheckpointingMode())) {
+            // default is read_uncommitted
+            properties.setProperty(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");
+        }
+        properties.setProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
+        readers.add(new KafkaDataReader(properties, subscribedPartitions));
+        return readers.get(readers.size() - 1);
+    }
+
+    @Override
+    public List<String> generateTestData(TestingSinkSettings sinkSettings, long seed) {
+        Random random = new Random(seed);
+        List<String> randomStringRecords = new ArrayList<>();
+        int recordNum =
+                random.nextInt(NUM_RECORDS_UPPER_BOUND - NUM_RECORDS_LOWER_BOUND)
+                        + NUM_RECORDS_LOWER_BOUND;
+        for (int i = 0; i < recordNum; i++) {
+            int stringLength = random.nextInt(50) + 1;
+            randomStringRecords.add(generateRandomString(stringLength, random));
+        }
+        return randomStringRecords;
+    }
+
+    private String generateRandomString(int length, Random random) {
+        String alphaNumericString =
+                "ABCDEFGHIJKLMNOPQRSTUVWXYZ" + "abcdefghijklmnopqrstuvwxyz" + "0123456789";
+        StringBuilder sb = new StringBuilder();
+        for (int i = 0; i < length; ++i) {
+            sb.append(alphaNumericString.charAt(random.nextInt(alphaNumericString.length())));
+        }
+        return sb.toString();

Review comment:
       The test data pattern `alphaNumericString` and the magic number `50` could be extracted into constants, each with a one-line comment.
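
       For example (names are just a sketch):
       ```java
       /** Characters used to build the random test records. */
       private static final String ALPHA_NUMERIC_STRING =
               "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";

       /** Maximum length of a randomly generated record. */
       private static final int RANDOM_STRING_MAX_LENGTH = 50;
       ```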

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/source/FromElementsSource.java
##########
@@ -0,0 +1,100 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.source;
+
+import org.apache.flink.api.connector.source.Boundedness;
+import org.apache.flink.api.connector.source.Source;
+import org.apache.flink.api.connector.source.SourceReader;
+import org.apache.flink.api.connector.source.SourceReaderContext;
+import org.apache.flink.api.connector.source.SplitEnumerator;
+import org.apache.flink.api.connector.source.SplitEnumeratorContext;
+import org.apache.flink.connector.testframe.source.enumerator.NoOpEnumState;
+import org.apache.flink.connector.testframe.source.enumerator.NoOpEnumStateSerializer;
+import org.apache.flink.connector.testframe.source.enumerator.NoOpEnumerator;
+import org.apache.flink.connector.testframe.source.split.FromElementsSplit;
+import org.apache.flink.connector.testframe.source.split.FromElementsSplitSerializer;
+import org.apache.flink.core.io.SimpleVersionedSerializer;
+
+import java.util.List;
+
+/**
+ * The source reads data from a list and stops reading at the fixed position. The source will wait
+ * until the checkpoint or savepoint triggers.
+ *
+ * <p>Note that this source must be of parallelism 1.
+ */
+public class FromElementsSource<OUT> implements Source<OUT, FromElementsSplit, NoOpEnumState> {
+    // Boundedness

Review comment:
       This comment doesn't add any information — consider removing it.

##########
File path: flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/connector/kafka/sink/testutils/KafkaSinkExternalContext.java
##########
@@ -0,0 +1,271 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.kafka.sink.testutils;
+
+import org.apache.flink.api.common.serialization.SimpleStringSchema;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.connector.sink2.Sink;
+import org.apache.flink.connector.base.DeliveryGuarantee;
+import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
+import org.apache.flink.connector.kafka.sink.KafkaSink;
+import org.apache.flink.connector.kafka.sink.KafkaSinkBuilder;
+import org.apache.flink.connector.testframe.external.ExternalSystemDataReader;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkV2ExternalContext;
+import org.apache.flink.connector.testframe.external.sink.TestingSinkSettings;
+import org.apache.flink.streaming.api.CheckpointingMode;
+
+import org.apache.commons.lang3.exception.ExceptionUtils;
+import org.apache.kafka.clients.admin.AdminClient;
+import org.apache.kafka.clients.admin.AdminClientConfig;
+import org.apache.kafka.clients.admin.NewTopic;
+import org.apache.kafka.clients.admin.TopicDescription;
+import org.apache.kafka.clients.consumer.ConsumerConfig;
+import org.apache.kafka.clients.producer.ProducerConfig;
+import org.apache.kafka.common.TopicPartition;
+import org.apache.kafka.common.TopicPartitionInfo;
+import org.apache.kafka.common.errors.UnknownTopicOrPartitionException;
+import org.apache.kafka.common.serialization.StringDeserializer;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Properties;
+import java.util.Random;
+import java.util.Set;
+import java.util.concurrent.ThreadLocalRandom;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.flink.streaming.api.CheckpointingMode.EXACTLY_ONCE;
+
+/**
+ * A Kafka external context that will create only one topic and use partitions in that topic as
+ * source splits.

Review comment:
       `as source splits.`? This is a sink context — the Javadoc looks copied from the source-side counterpart.
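
       Maybe something along these lines (wording is just a suggestion):
       ```java
       /**
        * A Kafka external context that creates a single topic and reads back the records that the
        * sink under test writes to its partitions.
        */
       ```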

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/source/FromElementsSource.java
##########
@@ -0,0 +1,100 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.source;
+
+import org.apache.flink.api.connector.source.Boundedness;
+import org.apache.flink.api.connector.source.Source;
+import org.apache.flink.api.connector.source.SourceReader;
+import org.apache.flink.api.connector.source.SourceReaderContext;
+import org.apache.flink.api.connector.source.SplitEnumerator;
+import org.apache.flink.api.connector.source.SplitEnumeratorContext;
+import org.apache.flink.connector.testframe.source.enumerator.NoOpEnumState;
+import org.apache.flink.connector.testframe.source.enumerator.NoOpEnumStateSerializer;
+import org.apache.flink.connector.testframe.source.enumerator.NoOpEnumerator;
+import org.apache.flink.connector.testframe.source.split.FromElementsSplit;
+import org.apache.flink.connector.testframe.source.split.FromElementsSplitSerializer;
+import org.apache.flink.core.io.SimpleVersionedSerializer;
+
+import java.util.List;
+
+/**
+ * The source reads data from a list and stops reading at the fixed position. The source will wait
+ * until the checkpoint or savepoint triggers.
+ *
+ * <p>Note that this source must be of parallelism 1.

Review comment:
       ```suggestion
     * <p>Note: The parallelism of this source must be 1.
   ```

##########
File path: flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/connector/kafka/sink/testutils/KafkaSinkExternalContextFactory.java
##########
@@ -0,0 +1,53 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.kafka.sink.testutils;
+
+import org.apache.flink.connector.testframe.external.ExternalContextFactory;
+
+import org.testcontainers.containers.KafkaContainer;
+
+import java.net.URL;
+import java.util.List;
+import java.util.stream.Collectors;
+
+/** Kafka table sink external context factory. */
+public class KafkaSinkExternalContextFactory
+        implements ExternalContextFactory<KafkaSinkExternalContext> {
+
+    private final KafkaContainer kafkaContainer;
+    private final List<URL> connectorJars;
+
+    public KafkaSinkExternalContextFactory(KafkaContainer kafkaContainer, List<URL> connectorJars) {
+        this.kafkaContainer = kafkaContainer;
+        this.connectorJars = connectorJars;
+    }
+
+    protected String getBootstrapServer() {

Review comment:
       No need for `protected` here.

##########
File path: flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/connector/kafka/sink/testutils/KafkaSinkExternalContext.java
##########
@@ -0,0 +1,271 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.kafka.sink.testutils;
+
+import org.apache.flink.api.common.serialization.SimpleStringSchema;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.connector.sink2.Sink;
+import org.apache.flink.connector.base.DeliveryGuarantee;
+import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
+import org.apache.flink.connector.kafka.sink.KafkaSink;
+import org.apache.flink.connector.kafka.sink.KafkaSinkBuilder;
+import org.apache.flink.connector.testframe.external.ExternalSystemDataReader;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkV2ExternalContext;
+import org.apache.flink.connector.testframe.external.sink.TestingSinkSettings;
+import org.apache.flink.streaming.api.CheckpointingMode;
+
+import org.apache.commons.lang3.exception.ExceptionUtils;
+import org.apache.kafka.clients.admin.AdminClient;
+import org.apache.kafka.clients.admin.AdminClientConfig;
+import org.apache.kafka.clients.admin.NewTopic;
+import org.apache.kafka.clients.admin.TopicDescription;
+import org.apache.kafka.clients.consumer.ConsumerConfig;
+import org.apache.kafka.clients.producer.ProducerConfig;
+import org.apache.kafka.common.TopicPartition;
+import org.apache.kafka.common.TopicPartitionInfo;
+import org.apache.kafka.common.errors.UnknownTopicOrPartitionException;
+import org.apache.kafka.common.serialization.StringDeserializer;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Properties;
+import java.util.Random;
+import java.util.Set;
+import java.util.concurrent.ThreadLocalRandom;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.flink.streaming.api.CheckpointingMode.EXACTLY_ONCE;
+
+/**
+ * A Kafka external context that creates only one topic and uses that topic for the sink under
+ * test.
+ */
+public class KafkaSinkExternalContext implements DataStreamSinkV2ExternalContext<String> {
+
+    private static final Logger LOG = LoggerFactory.getLogger(KafkaSinkExternalContext.class);
+
+    private static final String TOPIC_NAME_PREFIX = "kafka-single-topic";
+    private static final int DEFAULT_TIMEOUT = 30;
+    private static final int NUM_RECORDS_UPPER_BOUND = 500;
+    private static final int NUM_RECORDS_LOWER_BOUND = 100;
+
+    protected String bootstrapServers;
+    protected final String topicName;
+
+    private final List<ExternalSystemDataReader<String>> readers = new ArrayList<>();
+
+    protected int numSplits = 0;
+
+    private List<URL> connectorJarPaths;
+
+    protected final AdminClient kafkaAdminClient;
+
+    public KafkaSinkExternalContext(String bootstrapServers, List<URL> connectorJarPaths) {
+        this.bootstrapServers = bootstrapServers;
+        this.connectorJarPaths = connectorJarPaths;
+        this.topicName =
+                TOPIC_NAME_PREFIX + "-" + ThreadLocalRandom.current().nextLong(Long.MAX_VALUE);
+        kafkaAdminClient = createAdminClient();
+    }
+
+    protected void createTopic(String topicName, int numPartitions, short replicationFactor) {
+        LOG.debug(
+                "Creating new Kafka topic {} with {} partitions and {} replicas",
+                topicName,
+                numPartitions,
+                replicationFactor);
+        NewTopic newTopic = new NewTopic(topicName, numPartitions, replicationFactor);
+        try {
+            kafkaAdminClient
+                    .createTopics(Collections.singletonList(newTopic))
+                    .all()
+                    .get(DEFAULT_TIMEOUT, TimeUnit.SECONDS);
+        } catch (Exception e) {
+            throw new RuntimeException(String.format("Cannot create topic '%s'", topicName), e);
+        }
+    }
+
+    protected void deleteTopic(String topicName) {
+        LOG.debug("Deleting Kafka topic {}", topicName);
+        try {
+            kafkaAdminClient
+                    .deleteTopics(Collections.singletonList(topicName))
+                    .all()
+                    .get(DEFAULT_TIMEOUT, TimeUnit.SECONDS);
+        } catch (Exception e) {
+            if (ExceptionUtils.getRootCause(e) instanceof UnknownTopicOrPartitionException) {
+                throw new RuntimeException(String.format("Cannot delete topic '%s'", topicName), e);
+            }
+        }
+    }
+
+    private AdminClient createAdminClient() {
+        Properties config = new Properties();
+        config.setProperty(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
+        return AdminClient.create(config);
+    }
+
+    @Override
+    public Sink<String> createSink(TestingSinkSettings sinkSettings) {
+        if (!topicExists(topicName)) {
+            createTopic(topicName, 4, (short) 1);
+        }
+
+        KafkaSinkBuilder<String> builder = KafkaSink.builder();
+        Properties properties = new Properties();
+        properties.put(ProducerConfig.TRANSACTION_TIMEOUT_CONFIG, 900000);
+        builder.setBootstrapServers(bootstrapServers)
+                .setDeliverGuarantee(toDeliveryGuarantee(sinkSettings.getCheckpointingMode()))
+                .setTransactionalIdPrefix("testingFramework")
+                .setKafkaProducerConfig(properties)
+                .setRecordSerializer(
+                        KafkaRecordSerializationSchema.builder()
+                                .setTopic(topicName)
+                                .setValueSerializationSchema(new SimpleStringSchema())
+                                .build());
+        return builder.build();
+    }
+
+    @Override
+    public ExternalSystemDataReader<String> createSinkDataReader(TestingSinkSettings sinkSettings) {
+        LOG.info("Fetching descriptions for topic: {}", topicName);
+        final Map<String, TopicDescription> topicMetadata =
+                getTopicMetadata(Arrays.asList(topicName));
+
+        Set<TopicPartition> subscribedPartitions = new HashSet<>();
+        for (TopicDescription topic : topicMetadata.values()) {
+            for (TopicPartitionInfo partition : topic.partitions()) {
+                subscribedPartitions.add(new TopicPartition(topic.name(), partition.partition()));
+            }
+        }
+
+        Properties properties = new Properties();
+        properties.setProperty(
+                ConsumerConfig.GROUP_ID_CONFIG,
+                "flink-kafka-test" + subscribedPartitions.hashCode());
+        properties.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
+        properties.setProperty(
+                ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
+                StringDeserializer.class.getCanonicalName());
+        properties.setProperty(
+                ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
+                StringDeserializer.class.getCanonicalName());
+        if (EXACTLY_ONCE.equals(sinkSettings.getCheckpointingMode())) {
+            // default is read_uncommitted
+            properties.setProperty(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");
+        }
+        properties.setProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
+        readers.add(new KafkaDataReader(properties, subscribedPartitions));
+        return readers.get(readers.size() - 1);
+    }
+
+    @Override
+    public List<String> generateTestData(TestingSinkSettings sinkSettings, long seed) {
+        Random random = new Random(seed);
+        List<String> randomStringRecords = new ArrayList<>();
+        int recordNum =
+                random.nextInt(NUM_RECORDS_UPPER_BOUND - NUM_RECORDS_LOWER_BOUND)
+                        + NUM_RECORDS_LOWER_BOUND;
+        for (int i = 0; i < recordNum; i++) {
+            int stringLength = random.nextInt(50) + 1;
+            randomStringRecords.add(generateRandomString(stringLength, random));
+        }
+        return randomStringRecords;
+    }
+
+    private String generateRandomString(int length, Random random) {
+        String alphaNumericString =
+                "ABCDEFGHIJKLMNOPQRSTUVWXYZ" + "abcdefghijklmnopqrstuvwxyz" + "0123456789";
+        StringBuilder sb = new StringBuilder();
+        for (int i = 0; i < length; ++i) {
+            sb.append(alphaNumericString.charAt(random.nextInt(alphaNumericString.length())));
+        }
+        return sb.toString();
+    }
+
+    protected Map<String, TopicDescription> getTopicMetadata(List<String> topics) {
+        try {
+            return kafkaAdminClient.describeTopics(topics).all().get();
+        } catch (Exception e) {
+            throw new RuntimeException(
+                    String.format("Failed to get metadata for topics %s.", topics), e);
+        }
+    }
+
+    private boolean topicExists(String topic) {
+        try {
+            kafkaAdminClient.describeTopics(Arrays.asList(topic)).all().get();
+            return true;
+        } catch (Exception e) {
+            return false;
+        }
+    }
+
+    @Override
+    public void close() {
+        if (numSplits != 0) {
+            deleteTopic(topicName);
+        }
+        readers.forEach(
+                reader -> {
+                    try {
+                        reader.close();
+                    } catch (Exception e) {
+                        kafkaAdminClient.close();
+                        throw new RuntimeException("Cannot close split writer", e);
+                    }
+                });
+        readers.clear();
+        kafkaAdminClient.close();
+    }
+
+    @Override
+    public String toString() {
+        return "Single-topic Kafka";
+    }
+
+    @Override
+    public List<URL> getConnectorJarPaths() {
+        return connectorJarPaths;
+    }
+
+    @Override
+    public TypeInformation<String> getProducedType() {
+        return TypeInformation.of(String.class);
+    }
+
+    private DeliveryGuarantee toDeliveryGuarantee(CheckpointingMode checkpointingMode) {
+        switch (checkpointingMode) {
+            case EXACTLY_ONCE:
+                return DeliveryGuarantee.EXACTLY_ONCE;
+            case AT_LEAST_ONCE:
+                return DeliveryGuarantee.AT_LEAST_ONCE;
+            default:
+                throw new IllegalArgumentException(
+                        "Only exactly-once and at-least-once checkpointing modes are supported");

Review comment:
       ```suggestion
                   throw new IllegalArgumentException(
                           String.format("Only exactly-once and al-least-once checkpointing mode are supported, but actual is %s.", checkpointingMode));
   ```

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/source/FromElementsSource.java
##########
@@ -0,0 +1,100 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.source;
+
+import org.apache.flink.api.connector.source.Boundedness;
+import org.apache.flink.api.connector.source.Source;
+import org.apache.flink.api.connector.source.SourceReader;
+import org.apache.flink.api.connector.source.SourceReaderContext;
+import org.apache.flink.api.connector.source.SplitEnumerator;
+import org.apache.flink.api.connector.source.SplitEnumeratorContext;
+import org.apache.flink.connector.testframe.source.enumerator.NoOpEnumState;
+import org.apache.flink.connector.testframe.source.enumerator.NoOpEnumStateSerializer;
+import org.apache.flink.connector.testframe.source.enumerator.NoOpEnumerator;
+import org.apache.flink.connector.testframe.source.split.FromElementsSplit;
+import org.apache.flink.connector.testframe.source.split.FromElementsSplitSerializer;
+import org.apache.flink.core.io.SimpleVersionedSerializer;
+
+import java.util.List;
+
+/**
+ * The source reads data from a list and stops reading at the fixed position. The source will wait
+ * until the checkpoint or savepoint triggers.
+ *
+ * <p>Note that this source must be of parallelism 1.
+ */
+public class FromElementsSource<OUT> implements Source<OUT, FromElementsSplit, NoOpEnumState> {
+    // Boundedness
+    private Boundedness boundedness;
+
+    private List<OUT> elements;
+
+    private Integer successNum;
+
+    public FromElementsSource(List<OUT> elements) {
+        this.elements = elements;
+    }
+
+    public FromElementsSource(Boundedness boundedness, List<OUT> elements, Integer successNum) {

Review comment:
       how about `emittedElementsNum` ?

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/source/FromElementsSourceReader.java
##########
@@ -0,0 +1,101 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.source;
+
+import org.apache.flink.api.connector.source.ReaderOutput;
+import org.apache.flink.api.connector.source.SourceReader;
+import org.apache.flink.api.connector.source.SourceReaderContext;
+import org.apache.flink.connector.testframe.source.split.FromElementsSplit;
+import org.apache.flink.core.io.InputStatus;
+import org.apache.flink.metrics.Counter;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.Arrays;
+import java.util.List;
+import java.util.concurrent.CompletableFuture;
+
+import static org.apache.flink.core.io.InputStatus.MORE_AVAILABLE;
+
+/** The source reader for collections of elements. */
+public class FromElementsSourceReader<T> implements SourceReader<T, FromElementsSplit> {
+    private static final Logger LOG = LoggerFactory.getLogger(FromElementsSourceReader.class);
+
+    protected volatile int numElementsEmitted;
+    protected volatile boolean isRunning = true;
+
+    /** The context of this source reader. */
+    protected SourceReaderContext context;
+
+    protected List<T> elements;
+    protected Counter numRecordInCounter;
+
+    public FromElementsSourceReader(List<T> elements, SourceReaderContext context) {
+        this.context = context;
+        this.numElementsEmitted = 0;
+        this.elements = elements;
+        this.numRecordInCounter = context.metricGroup().getIOMetricGroup().getNumRecordsInCounter();
+    }
+
+    @Override
+    public void start() {}
+
+    @Override
+    public InputStatus pollNext(ReaderOutput<T> output) throws Exception {
+        if (isRunning && numElementsEmitted < elements.size()) {
+            output.collect(elements.get(numElementsEmitted));
+            numElementsEmitted++;
+            numRecordInCounter.inc();
+            return MORE_AVAILABLE;
+        }
+        return InputStatus.END_OF_INPUT;
+    }
+
+    @Override
+    public List<FromElementsSplit> snapshotState(long checkpointId) {
+        return Arrays.asList(new FromElementsSplit(numElementsEmitted));
+    }
+
+    @Override
+    public CompletableFuture<Void> isAvailable() {
+        CompletableFuture<Void> future = new CompletableFuture<>();
+        future.complete(null);
+        return future;

Review comment:
       return CompletableFuture.completedFuture(null);
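
With that change the override collapses to a single expression (sketch):

```java
@Override
public CompletableFuture<Void> isAvailable() {
    // The reader either has more elements or has already returned END_OF_INPUT,
    // so a pre-completed future avoids allocating a new one on every call.
    return CompletableFuture.completedFuture(null);
}
```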

##########
File path: flink-end-to-end-tests/flink-end-to-end-tests-common-kafka/src/test/java/org/apache/flink/tests/util/kafka/KafkaSinkE2ECase.java
##########
@@ -0,0 +1,101 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.tests.util.kafka;
+
+import org.apache.flink.connector.kafka.sink.testutils.KafkaSinkExternalContextFactory;
+import org.apache.flink.connector.testframe.environment.TestEnvironment;
+import org.apache.flink.connector.testframe.external.DefaultContainerizedExternalSystem;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkExternalContext;
+import org.apache.flink.connector.testframe.junit.annotations.TestContext;
+import org.apache.flink.connector.testframe.junit.annotations.TestEnv;
+import org.apache.flink.connector.testframe.junit.annotations.TestExternalSystem;
+import org.apache.flink.connector.testframe.junit.annotations.TestSemantics;
+import org.apache.flink.connector.testframe.testsuites.SinkTestSuiteBase;
+import org.apache.flink.streaming.api.CheckpointingMode;
+import org.apache.flink.tests.util.TestUtils;
+import org.apache.flink.tests.util.flink.FlinkContainerTestEnvironment;
+import org.apache.flink.util.DockerImageVersions;
+
+import org.junit.jupiter.api.Disabled;
+import org.junit.jupiter.api.TestTemplate;
+import org.testcontainers.containers.KafkaContainer;
+import org.testcontainers.utility.DockerImageName;
+
+import java.util.Arrays;
+
+/** Kafka sink E2E test based on connector testing framework. */
+@SuppressWarnings("unused")
+public class KafkaSinkE2ECase extends SinkTestSuiteBase<String> {
+    private static final String KAFKA_HOSTNAME = "kafka";
+
+    @TestSemantics
+    CheckpointingMode[] semantics =
+            new CheckpointingMode[] {
+                CheckpointingMode.EXACTLY_ONCE, CheckpointingMode.AT_LEAST_ONCE
+            };
+
+    // Defines TestEnvironment
+    @TestEnv FlinkContainerTestEnvironment flink = new FlinkContainerTestEnvironment(1, 6);
+
+    // Defines ConnectorExternalSystem
+    @TestExternalSystem
+    DefaultContainerizedExternalSystem<KafkaContainer> kafka =
+            DefaultContainerizedExternalSystem.builder()
+                    .fromContainer(
+                            new KafkaContainer(DockerImageName.parse(DockerImageVersions.KAFKA))
+                                    .withNetworkAliases(KAFKA_HOSTNAME))
+                    .bindWithFlinkContainer(flink.getFlinkContainers().getJobManager())
+                    .build();
+
+    // Defines 2 External context Factories, so test cases will be invoked twice using these two
+    // kinds of external contexts.
+    @SuppressWarnings("unused")
+    @TestContext
+    KafkaSinkExternalContextFactory contextFactory =
+            new KafkaSinkExternalContextFactory(
+                    kafka.getContainer(),
+                    Arrays.asList(
+                            TestUtils.getResource("kafka-connector.jar")
+                                    .toAbsolutePath()
+                                    .toUri()
+                                    .toURL(),
+                            TestUtils.getResource("kafka-clients.jar")
+                                    .toAbsolutePath()
+                                    .toUri()
+                                    .toURL(),
+                            TestUtils.getResource("flink-connector-testing.jar")
+                                    .toAbsolutePath()
+                                    .toUri()
+                                    .toURL()));
+
+    public KafkaSinkE2ECase() throws Exception {}
+
+    /**
+     * Disable the metric test because of the metric
+     * bug(https://issues.apache.org/jira/browse/FLINK-26126).
+     */
+    @Disabled

Review comment:
       ```suggestion
         @Disabled("Skip metric test until FLINK-26126 fixed")
   ```

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/source/FromElementsSourceReader.java
##########
@@ -0,0 +1,101 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.source;
+
+import org.apache.flink.api.connector.source.ReaderOutput;
+import org.apache.flink.api.connector.source.SourceReader;
+import org.apache.flink.api.connector.source.SourceReaderContext;
+import org.apache.flink.connector.testframe.source.split.FromElementsSplit;
+import org.apache.flink.core.io.InputStatus;
+import org.apache.flink.metrics.Counter;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.Arrays;
+import java.util.List;
+import java.util.concurrent.CompletableFuture;
+
+import static org.apache.flink.core.io.InputStatus.MORE_AVAILABLE;
+
+/** The source reader for collections of elements. */
+public class FromElementsSourceReader<T> implements SourceReader<T, FromElementsSplit> {
+    private static final Logger LOG = LoggerFactory.getLogger(FromElementsSourceReader.class);
+
+    protected volatile int numElementsEmitted;
+    protected volatile boolean isRunning = true;
+
+    /** The context of this source reader. */
+    protected SourceReaderContext context;
+
+    protected List<T> elements;
+    protected Counter numRecordInCounter;
+
+    public FromElementsSourceReader(List<T> elements, SourceReaderContext context) {
+        this.context = context;
+        this.numElementsEmitted = 0;
+        this.elements = elements;
+        this.numRecordInCounter = context.metricGroup().getIOMetricGroup().getNumRecordsInCounter();
+    }
+
+    @Override
+    public void start() {}
+
+    @Override
+    public InputStatus pollNext(ReaderOutput<T> output) throws Exception {
+        if (isRunning && numElementsEmitted < elements.size()) {
+            output.collect(elements.get(numElementsEmitted));
+            numElementsEmitted++;
+            numRecordInCounter.inc();
+            return MORE_AVAILABLE;
+        }
+        return InputStatus.END_OF_INPUT;
+    }
+
+    @Override
+    public List<FromElementsSplit> snapshotState(long checkpointId) {
+        return Arrays.asList(new FromElementsSplit(numElementsEmitted));
+    }
+
+    @Override
+    public CompletableFuture<Void> isAvailable() {
+        CompletableFuture<Void> future = new CompletableFuture<>();
+        future.complete(null);
+        return future;
+    }
+
+    @Override
+    public void addSplits(List<FromElementsSplit> splits) {
+        numElementsEmitted = splits.get(0).getEmitNum();
+        LOG.info("ListSourceReader restores from {}.", numElementsEmitted);
+    }
+
+    @Override
+    public void notifyNoMoreSplits() {}
+
+    @Override
+    public void close() throws Exception {
+        isRunning = false;
+    }
+
+    @Override
+    public void notifyCheckpointComplete(long checkpointId) throws Exception {
+        LOG.info("{} checkpoint finished.", checkpointId);

Review comment:
       minor:
   ```suggestion
           LOG.info("checkpoint {} finished.", checkpointId);
   ```

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/source/FromElementsSourceReader.java
##########
@@ -0,0 +1,101 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.source;
+
+import org.apache.flink.api.connector.source.ReaderOutput;
+import org.apache.flink.api.connector.source.SourceReader;
+import org.apache.flink.api.connector.source.SourceReaderContext;
+import org.apache.flink.connector.testframe.source.split.FromElementsSplit;
+import org.apache.flink.core.io.InputStatus;
+import org.apache.flink.metrics.Counter;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.Arrays;
+import java.util.List;
+import java.util.concurrent.CompletableFuture;
+
+import static org.apache.flink.core.io.InputStatus.MORE_AVAILABLE;
+
+/** The source reader for collections of elements. */
+public class FromElementsSourceReader<T> implements SourceReader<T, FromElementsSplit> {
+    private static final Logger LOG = LoggerFactory.getLogger(FromElementsSourceReader.class);
+
+    protected volatile int numElementsEmitted;
+    protected volatile boolean isRunning = true;
+
+    /** The context of this source reader. */
+    protected SourceReaderContext context;
+
+    protected List<T> elements;
+    protected Counter numRecordInCounter;
+
+    public FromElementsSourceReader(List<T> elements, SourceReaderContext context) {
+        this.context = context;
+        this.numElementsEmitted = 0;
+        this.elements = elements;
+        this.numRecordInCounter = context.metricGroup().getIOMetricGroup().getNumRecordsInCounter();
+    }
+
+    @Override
+    public void start() {}
+
+    @Override
+    public InputStatus pollNext(ReaderOutput<T> output) throws Exception {
+        if (isRunning && numElementsEmitted < elements.size()) {
+            output.collect(elements.get(numElementsEmitted));
+            numElementsEmitted++;
+            numRecordInCounter.inc();
+            return MORE_AVAILABLE;
+        }
+        return InputStatus.END_OF_INPUT;
+    }
+
+    @Override
+    public List<FromElementsSplit> snapshotState(long checkpointId) {
+        return Arrays.asList(new FromElementsSplit(numElementsEmitted));
+    }
+
+    @Override
+    public CompletableFuture<Void> isAvailable() {
+        CompletableFuture<Void> future = new CompletableFuture<>();
+        future.complete(null);
+        return future;
+    }
+
+    @Override
+    public void addSplits(List<FromElementsSplit> splits) {
+        numElementsEmitted = splits.get(0).getEmitNum();
+        LOG.info("ListSourceReader restores from {}.", numElementsEmitted);

Review comment:
       ```suggestion
           LOG.info("FromElementsSourceReader restores from {}.", numElementsEmitted);
   ```

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/source/split/FromElementsSplitSerializer.java
##########
@@ -0,0 +1,54 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.source.split;
+
+import org.apache.flink.core.io.SimpleVersionedSerializer;
+
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.io.DataInputStream;
+import java.io.DataOutputStream;
+import java.io.IOException;
+
+/** The split serializer for the list source. */

Review comment:
       ```suggestion
   /** The split serializer for the {@link FromElementsSource}. */
   ```
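
For context, a split that only carries the emitted-element count serializes to a single int; a minimal serializer could look roughly like this (a sketch using the imports listed above, not necessarily the PR's exact implementation):

```java
public class FromElementsSplitSerializer implements SimpleVersionedSerializer<FromElementsSplit> {

    @Override
    public int getVersion() {
        return 0;
    }

    @Override
    public byte[] serialize(FromElementsSplit split) throws IOException {
        try (ByteArrayOutputStream baos = new ByteArrayOutputStream();
                DataOutputStream out = new DataOutputStream(baos)) {
            // The split state is just the number of elements emitted so far.
            out.writeInt(split.getEmitNum());
            out.flush();
            return baos.toByteArray();
        }
    }

    @Override
    public FromElementsSplit deserialize(int version, byte[] serialized) throws IOException {
        try (ByteArrayInputStream bais = new ByteArrayInputStream(serialized);
                DataInputStream in = new DataInputStream(bais)) {
            return new FromElementsSplit(in.readInt());
        }
    }
}
```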

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/source/FromElementsSourceReader.java
##########
@@ -0,0 +1,101 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.source;
+
+import org.apache.flink.api.connector.source.ReaderOutput;
+import org.apache.flink.api.connector.source.SourceReader;
+import org.apache.flink.api.connector.source.SourceReaderContext;
+import org.apache.flink.connector.testframe.source.split.FromElementsSplit;
+import org.apache.flink.core.io.InputStatus;
+import org.apache.flink.metrics.Counter;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.Arrays;
+import java.util.List;
+import java.util.concurrent.CompletableFuture;
+
+import static org.apache.flink.core.io.InputStatus.MORE_AVAILABLE;
+
+/** The source reader for collections of elements. */
+public class FromElementsSourceReader<T> implements SourceReader<T, FromElementsSplit> {
+    private static final Logger LOG = LoggerFactory.getLogger(FromElementsSourceReader.class);
+
+    protected volatile int numElementsEmitted;
+    protected volatile boolean isRunning = true;
+
+    /** The context of this source reader. */
+    protected SourceReaderContext context;
+
+    protected List<T> elements;
+    protected Counter numRecordInCounter;
+
+    public FromElementsSourceReader(List<T> elements, SourceReaderContext context) {
+        this.context = context;
+        this.numElementsEmitted = 0;
+        this.elements = elements;
+        this.numRecordInCounter = context.metricGroup().getIOMetricGroup().getNumRecordsInCounter();
+    }
+
+    @Override
+    public void start() {}
+
+    @Override
+    public InputStatus pollNext(ReaderOutput<T> output) throws Exception {
+        if (isRunning && numElementsEmitted < elements.size()) {

Review comment:
       `emittedNum` ?

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/source/FromElementsSourceReaderWithSuccessNum.java
##########
@@ -0,0 +1,89 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.source;
+
+import org.apache.flink.api.connector.source.Boundedness;
+import org.apache.flink.api.connector.source.ReaderOutput;
+import org.apache.flink.api.connector.source.SourceReaderContext;
+import org.apache.flink.connector.testframe.source.split.FromElementsSplit;
+import org.apache.flink.core.io.InputStatus;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.Arrays;
+import java.util.List;
+
+import static org.apache.flink.core.io.InputStatus.MORE_AVAILABLE;
+
+/** The reader reads data from a list. */

Review comment:
       Add a note explaining how this differs from `FromElementsSourceReader`. How about the name `FromLimitedElementsSourceReader` with an `int limitedNum` field?
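
To make the difference concrete, the limited variant would stop emitting after the configured count but keep the split open instead of signalling end of input, roughly (a sketch using the proposed names; the actual PR code may differ):

```java
@Override
public InputStatus pollNext(ReaderOutput<T> output) throws Exception {
    if (isRunning && numElementsEmitted < limitedNum) {
        output.collect(elements.get(numElementsEmitted));
        numElementsEmitted++;
        return InputStatus.MORE_AVAILABLE;
    }
    // Unlike FromElementsSourceReader, do not return END_OF_INPUT here:
    // keep the job running so a checkpoint/savepoint can still be triggered.
    return InputStatus.NOTHING_AVAILABLE;
}
```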

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/testsuites/SinkTestSuiteBase.java
##########
@@ -0,0 +1,629 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.testsuites;
+
+import org.apache.flink.annotation.Experimental;
+import org.apache.flink.api.common.JobID;
+import org.apache.flink.api.common.JobStatus;
+import org.apache.flink.api.common.eventtime.WatermarkStrategy;
+import org.apache.flink.api.common.restartstrategy.RestartStrategies;
+import org.apache.flink.api.common.time.Deadline;
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.api.connector.sink2.Sink;
+import org.apache.flink.api.connector.source.Boundedness;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.connector.testframe.environment.TestEnvironment;
+import org.apache.flink.connector.testframe.environment.TestEnvironmentSettings;
+import org.apache.flink.connector.testframe.external.ExternalSystemDataReader;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkExternalContext;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkV1ExternalContext;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkV2ExternalContext;
+import org.apache.flink.connector.testframe.external.sink.TestingSinkSettings;
+import org.apache.flink.connector.testframe.junit.extensions.ConnectorTestingExtension;
+import org.apache.flink.connector.testframe.junit.extensions.TestCaseInvocationContextProvider;
+import org.apache.flink.connector.testframe.source.FromElementsSource;
+import org.apache.flink.connector.testframe.utils.CollectIteratorAssertions;
+import org.apache.flink.connector.testframe.utils.MetricQuerier;
+import org.apache.flink.core.execution.JobClient;
+import org.apache.flink.core.execution.SavepointFormatType;
+import org.apache.flink.runtime.metrics.MetricNames;
+import org.apache.flink.streaming.api.CheckpointingMode;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.datastream.DataStreamSink;
+import org.apache.flink.streaming.api.datastream.DataStreamSource;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.operators.collect.CollectResultIterator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperatorFactory;
+import org.apache.flink.streaming.api.operators.collect.CollectStreamSink;
+import org.apache.flink.util.TestLoggerExtension;
+
+import org.apache.commons.math3.util.Precision;
+import org.junit.jupiter.api.DisplayName;
+import org.junit.jupiter.api.TestInstance;
+import org.junit.jupiter.api.TestTemplate;
+import org.junit.jupiter.api.extension.ExtendWith;
+import org.opentest4j.TestAbortedException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.time.Duration;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+import java.util.UUID;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.ThreadLocalRandom;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+import static org.apache.flink.connector.testframe.utils.ConnectorTestConstants.DEFAULT_COLLECT_DATA_TIMEOUT;
+import static org.apache.flink.connector.testframe.utils.ConnectorTestConstants.DEFAULT_JOB_STATUS_CHANGE_TIMEOUT;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.terminateJob;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForAllTaskRunning;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForJobStatus;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition;
+import static org.apache.flink.streaming.api.CheckpointingMode.AT_LEAST_ONCE;
+import static org.apache.flink.streaming.api.CheckpointingMode.EXACTLY_ONCE;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+import static org.assertj.core.api.AssertionsForClassTypes.assertThat;
+
+/**
+ * Base class for sink test suite.
+ *
+ * <p>All cases should have well-descriptive JavaDoc, including:
+ *
+ * <ul>
+ *   <li>What's the purpose of this case
+ *   <li>Simple description of how this case works
+ *   <li>Condition to fulfill in order to pass this case
+ *   <li>Requirement of running this case
+ * </ul>
+ */
+@ExtendWith({
+    ConnectorTestingExtension.class,
+    TestLoggerExtension.class,
+    TestCaseInvocationContextProvider.class
+})
+@TestInstance(TestInstance.Lifecycle.PER_CLASS)
+@Experimental
+public abstract class SinkTestSuiteBase<T extends Comparable<T>> {
+    private static final Logger LOG = LoggerFactory.getLogger(SinkTestSuiteBase.class);
+
+    // ----------------------------- Basic test cases ---------------------------------
+
+    /**
+     * Test connector data stream sink.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write them to this sink by the Flink Job.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink need to be equals to
+     * the generated test data. And the records in the sink will be compared to the test data by the
+     * different semantic. There's no requirement for record order.
+     */
+    @TestTemplate
+    @DisplayName("Test data stream sink")
+    public void testBasicSink(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        TestingSinkSettings sinkSettings = getTestingSinkSettings(semantic);
+        final List<T> testRecords = generateTestData(sinkSettings, externalContext);
+
+        // Build and execute Flink job
+        StreamExecutionEnvironment execEnv =
+                testEnv.createExecutionEnvironment(
+                        TestEnvironmentSettings.builder()
+                                .setConnectorJarPaths(externalContext.getConnectorJarPaths())
+                                .build());
+        execEnv.enableCheckpointing(50);
+        DataStream<T> dataStream =
+                execEnv.fromCollection(testRecords)
+                        .name("sourceInSinkTest")
+                        .setParallelism(1)
+                        .returns(externalContext.getProducedType());
+        tryCreateSink(dataStream, externalContext, sinkSettings)
+                .setParallelism(1)
+                .name("sinkInSinkTest");
+        final JobClient jobClient = execEnv.executeAsync("DataStream Sink Test");
+
+        waitForJobStatus(
+                jobClient,
+                Collections.singletonList(JobStatus.FINISHED),
+                Deadline.fromNow(DEFAULT_JOB_STATUS_CHANGE_TIMEOUT));
+
+        // Check test result
+        checkResultWithSemantic(
+                externalContext.createSinkDataReader(sinkSettings), testRecords, semantic);
+    }
+
+    /**
+     * Test connector source restart from a completed savepoint with the same parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write a half part of them to this sink by the Flink Job with parallelism 2 at first. Then
+     * stop the job, restart the same job from the completed savepoint. After the job has been
+     * running, write the other part to the sink and compare the result.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink need to be equals to
+     * the generated test data. And the records in the sink will be compared to the test data by the
+     * different semantic. There's no requirement for record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting from a savepoint")
+    public void testStartFromSavepoint(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 2, 2);
+    }
+
+    /**
+     * Test connector source restart from a completed savepoint with a higher parallelism.

Review comment:
       ```suggestion
        * Test connector sink restart from a completed savepoint with a higher parallelism.
   ```

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/testsuites/SinkTestSuiteBase.java
##########
@@ -0,0 +1,629 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.testsuites;
+
+import org.apache.flink.annotation.Experimental;
+import org.apache.flink.api.common.JobID;
+import org.apache.flink.api.common.JobStatus;
+import org.apache.flink.api.common.eventtime.WatermarkStrategy;
+import org.apache.flink.api.common.restartstrategy.RestartStrategies;
+import org.apache.flink.api.common.time.Deadline;
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.api.connector.sink2.Sink;
+import org.apache.flink.api.connector.source.Boundedness;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.connector.testframe.environment.TestEnvironment;
+import org.apache.flink.connector.testframe.environment.TestEnvironmentSettings;
+import org.apache.flink.connector.testframe.external.ExternalSystemDataReader;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkExternalContext;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkV1ExternalContext;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkV2ExternalContext;
+import org.apache.flink.connector.testframe.external.sink.TestingSinkSettings;
+import org.apache.flink.connector.testframe.junit.extensions.ConnectorTestingExtension;
+import org.apache.flink.connector.testframe.junit.extensions.TestCaseInvocationContextProvider;
+import org.apache.flink.connector.testframe.source.FromElementsSource;
+import org.apache.flink.connector.testframe.utils.CollectIteratorAssertions;
+import org.apache.flink.connector.testframe.utils.MetricQuerier;
+import org.apache.flink.core.execution.JobClient;
+import org.apache.flink.core.execution.SavepointFormatType;
+import org.apache.flink.runtime.metrics.MetricNames;
+import org.apache.flink.streaming.api.CheckpointingMode;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.datastream.DataStreamSink;
+import org.apache.flink.streaming.api.datastream.DataStreamSource;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.operators.collect.CollectResultIterator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperatorFactory;
+import org.apache.flink.streaming.api.operators.collect.CollectStreamSink;
+import org.apache.flink.util.TestLoggerExtension;
+
+import org.apache.commons.math3.util.Precision;
+import org.junit.jupiter.api.DisplayName;
+import org.junit.jupiter.api.TestInstance;
+import org.junit.jupiter.api.TestTemplate;
+import org.junit.jupiter.api.extension.ExtendWith;
+import org.opentest4j.TestAbortedException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.time.Duration;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+import java.util.UUID;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.ThreadLocalRandom;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+import static org.apache.flink.connector.testframe.utils.ConnectorTestConstants.DEFAULT_COLLECT_DATA_TIMEOUT;
+import static org.apache.flink.connector.testframe.utils.ConnectorTestConstants.DEFAULT_JOB_STATUS_CHANGE_TIMEOUT;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.terminateJob;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForAllTaskRunning;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForJobStatus;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition;
+import static org.apache.flink.streaming.api.CheckpointingMode.AT_LEAST_ONCE;
+import static org.apache.flink.streaming.api.CheckpointingMode.EXACTLY_ONCE;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+import static org.assertj.core.api.AssertionsForClassTypes.assertThat;
+
+/**
+ * Base class for sink test suite.
+ *
+ * <p>All cases should have well-descriptive JavaDoc, including:
+ *
+ * <ul>
+ *   <li>What's the purpose of this case
+ *   <li>Simple description of how this case works
+ *   <li>Condition to fulfill in order to pass this case
+ *   <li>Requirement of running this case
+ * </ul>
+ */
+@ExtendWith({
+    ConnectorTestingExtension.class,
+    TestLoggerExtension.class,
+    TestCaseInvocationContextProvider.class
+})
+@TestInstance(TestInstance.Lifecycle.PER_CLASS)
+@Experimental
+public abstract class SinkTestSuiteBase<T extends Comparable<T>> {
+    private static final Logger LOG = LoggerFactory.getLogger(SinkTestSuiteBase.class);
+
+    // ----------------------------- Basic test cases ---------------------------------
+
+    /**
+     * Test connector data stream sink.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write them to this sink by the Flink Job.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink need to be equals to
+     * the generated test data. And the records in the sink will be compared to the test data by the
+     * different semantic. There's no requirement for record order.
+     */

Review comment:
       ```suggestion
       /**
        * Test DataStream connector sink.
        *
        * <p>The following tests will create a sink in the external system, generate a collection of test data
        * and write them to this sink by the Flink Job.
        *
     * <p>In order to pass these tests, the number of records produced by Flink needs to equal the
     * number of generated test records, and the records in the sink are compared to the test data
     * according to the configured semantics. There's no requirement on record order.
        */
   ```
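
As a rough illustration of what comparing "according to the configured semantics" means for the sink results (a sketch of the idea, not the framework's actual `CollectIteratorAssertions` logic):

```java
import static org.assertj.core.api.Assertions.assertThat;

static <T> void checkWithSemantic(List<T> written, List<T> expected, CheckpointingMode semantic) {
    if (semantic == CheckpointingMode.EXACTLY_ONCE) {
        // Every generated record must show up exactly once; order is irrelevant.
        assertThat(written).containsExactlyInAnyOrderElementsOf(expected);
    } else {
        // AT_LEAST_ONCE: duplicates are tolerated, but no record may be missing.
        assertThat(written).containsAll(expected);
    }
}
```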

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/testsuites/SinkTestSuiteBase.java
##########
@@ -0,0 +1,629 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.testsuites;
+
+import org.apache.flink.annotation.Experimental;
+import org.apache.flink.api.common.JobID;
+import org.apache.flink.api.common.JobStatus;
+import org.apache.flink.api.common.eventtime.WatermarkStrategy;
+import org.apache.flink.api.common.restartstrategy.RestartStrategies;
+import org.apache.flink.api.common.time.Deadline;
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.api.connector.sink2.Sink;
+import org.apache.flink.api.connector.source.Boundedness;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.connector.testframe.environment.TestEnvironment;
+import org.apache.flink.connector.testframe.environment.TestEnvironmentSettings;
+import org.apache.flink.connector.testframe.external.ExternalSystemDataReader;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkExternalContext;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkV1ExternalContext;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkV2ExternalContext;
+import org.apache.flink.connector.testframe.external.sink.TestingSinkSettings;
+import org.apache.flink.connector.testframe.junit.extensions.ConnectorTestingExtension;
+import org.apache.flink.connector.testframe.junit.extensions.TestCaseInvocationContextProvider;
+import org.apache.flink.connector.testframe.source.FromElementsSource;
+import org.apache.flink.connector.testframe.utils.CollectIteratorAssertions;
+import org.apache.flink.connector.testframe.utils.MetricQuerier;
+import org.apache.flink.core.execution.JobClient;
+import org.apache.flink.core.execution.SavepointFormatType;
+import org.apache.flink.runtime.metrics.MetricNames;
+import org.apache.flink.streaming.api.CheckpointingMode;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.datastream.DataStreamSink;
+import org.apache.flink.streaming.api.datastream.DataStreamSource;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.operators.collect.CollectResultIterator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperatorFactory;
+import org.apache.flink.streaming.api.operators.collect.CollectStreamSink;
+import org.apache.flink.util.TestLoggerExtension;
+
+import org.apache.commons.math3.util.Precision;
+import org.junit.jupiter.api.DisplayName;
+import org.junit.jupiter.api.TestInstance;
+import org.junit.jupiter.api.TestTemplate;
+import org.junit.jupiter.api.extension.ExtendWith;
+import org.opentest4j.TestAbortedException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.time.Duration;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+import java.util.UUID;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.ThreadLocalRandom;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+import static org.apache.flink.connector.testframe.utils.ConnectorTestConstants.DEFAULT_COLLECT_DATA_TIMEOUT;
+import static org.apache.flink.connector.testframe.utils.ConnectorTestConstants.DEFAULT_JOB_STATUS_CHANGE_TIMEOUT;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.terminateJob;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForAllTaskRunning;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForJobStatus;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition;
+import static org.apache.flink.streaming.api.CheckpointingMode.AT_LEAST_ONCE;
+import static org.apache.flink.streaming.api.CheckpointingMode.EXACTLY_ONCE;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+import static org.assertj.core.api.AssertionsForClassTypes.assertThat;
+
+/**
+ * Base class for sink test suite.
+ *
+ * <p>All cases should have well-descriptive JavaDoc, including:
+ *
+ * <ul>
+ *   <li>What's the purpose of this case
+ *   <li>Simple description of how this case works
+ *   <li>Condition to fulfill in order to pass this case
+ *   <li>Requirement of running this case
+ * </ul>
+ */
+@ExtendWith({
+    ConnectorTestingExtension.class,
+    TestLoggerExtension.class,
+    TestCaseInvocationContextProvider.class
+})
+@TestInstance(TestInstance.Lifecycle.PER_CLASS)
+@Experimental
+public abstract class SinkTestSuiteBase<T extends Comparable<T>> {
+    private static final Logger LOG = LoggerFactory.getLogger(SinkTestSuiteBase.class);
+
+    // ----------------------------- Basic test cases ---------------------------------
+
+    /**
+     * Test connector data stream sink.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write them to this sink by the Flink Job.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink need to be equals to
+     * the generated test data. And the records in the sink will be compared to the test data by the
+     * different semantic. There's no requirement for record order.
+     */
+    @TestTemplate
+    @DisplayName("Test data stream sink")
+    public void testBasicSink(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        TestingSinkSettings sinkSettings = getTestingSinkSettings(semantic);
+        final List<T> testRecords = generateTestData(sinkSettings, externalContext);
+
+        // Build and execute Flink job
+        StreamExecutionEnvironment execEnv =
+                testEnv.createExecutionEnvironment(
+                        TestEnvironmentSettings.builder()
+                                .setConnectorJarPaths(externalContext.getConnectorJarPaths())
+                                .build());
+        execEnv.enableCheckpointing(50);
+        DataStream<T> dataStream =
+                execEnv.fromCollection(testRecords)
+                        .name("sourceInSinkTest")
+                        .setParallelism(1)
+                        .returns(externalContext.getProducedType());
+        tryCreateSink(dataStream, externalContext, sinkSettings)
+                .setParallelism(1)
+                .name("sinkInSinkTest");
+        final JobClient jobClient = execEnv.executeAsync("DataStream Sink Test");
+
+        waitForJobStatus(
+                jobClient,
+                Collections.singletonList(JobStatus.FINISHED),
+                Deadline.fromNow(DEFAULT_JOB_STATUS_CHANGE_TIMEOUT));
+
+        // Check test result
+        checkResultWithSemantic(
+                externalContext.createSinkDataReader(sinkSettings), testRecords, semantic);
+    }
+
+    /**
+     * Test connector source restart from a completed savepoint with the same parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write a half part of them to this sink by the Flink Job with parallelism 2 at first. Then
+     * stop the job, restart the same job from the completed savepoint. After the job has been
+     * running, write the other part to the sink and compare the result.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink need to be equals to
+     * the generated test data. And the records in the sink will be compared to the test data by the
+     * different semantic. There's no requirement for record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting from a savepoint")
+    public void testStartFromSavepoint(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 2, 2);
+    }
+
+    /**
+     * Test connector source restart from a completed savepoint with a higher parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write a half part of them to this sink by the Flink Job with parallelism 2 at first. Then
+     * stop the job, restart the same job from the completed savepoint with a higher parallelism 4.
+     * After the job has been running, write the other part to the sink and compare the result.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink need to be equals to
+     * the generated test data. And the records in the sink will be compared to the test data by the
+     * different semantic. There's no requirement for record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting with a higher parallelism")
+    public void testScaleUp(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 2, 4);
+    }
+
+    /**
+     * Test connector source restart from a completed savepoint with a lower parallelism.

Review comment:
       ```suggestion
        * Test connector sink restart from a completed savepoint with a lower parallelism.
       ```

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/external/sink/DataStreamSinkV1ExternalContext.java
##########
@@ -0,0 +1,39 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.external.sink;
+
+import org.apache.flink.annotation.Experimental;
+import org.apache.flink.api.connector.sink.Sink;
+
+/**
+ * External context for DataStream sinks which is sink version 1.

Review comment:
       BTW, do we have any test for the v1 sink?

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/testsuites/SinkTestSuiteBase.java
##########
@@ -0,0 +1,629 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.testsuites;
+
+import org.apache.flink.annotation.Experimental;
+import org.apache.flink.api.common.JobID;
+import org.apache.flink.api.common.JobStatus;
+import org.apache.flink.api.common.eventtime.WatermarkStrategy;
+import org.apache.flink.api.common.restartstrategy.RestartStrategies;
+import org.apache.flink.api.common.time.Deadline;
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.api.connector.sink2.Sink;
+import org.apache.flink.api.connector.source.Boundedness;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.connector.testframe.environment.TestEnvironment;
+import org.apache.flink.connector.testframe.environment.TestEnvironmentSettings;
+import org.apache.flink.connector.testframe.external.ExternalSystemDataReader;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkExternalContext;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkV1ExternalContext;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkV2ExternalContext;
+import org.apache.flink.connector.testframe.external.sink.TestingSinkSettings;
+import org.apache.flink.connector.testframe.junit.extensions.ConnectorTestingExtension;
+import org.apache.flink.connector.testframe.junit.extensions.TestCaseInvocationContextProvider;
+import org.apache.flink.connector.testframe.source.FromElementsSource;
+import org.apache.flink.connector.testframe.utils.CollectIteratorAssertions;
+import org.apache.flink.connector.testframe.utils.MetricQuerier;
+import org.apache.flink.core.execution.JobClient;
+import org.apache.flink.core.execution.SavepointFormatType;
+import org.apache.flink.runtime.metrics.MetricNames;
+import org.apache.flink.streaming.api.CheckpointingMode;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.datastream.DataStreamSink;
+import org.apache.flink.streaming.api.datastream.DataStreamSource;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.operators.collect.CollectResultIterator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperatorFactory;
+import org.apache.flink.streaming.api.operators.collect.CollectStreamSink;
+import org.apache.flink.util.TestLoggerExtension;
+
+import org.apache.commons.math3.util.Precision;
+import org.junit.jupiter.api.DisplayName;
+import org.junit.jupiter.api.TestInstance;
+import org.junit.jupiter.api.TestTemplate;
+import org.junit.jupiter.api.extension.ExtendWith;
+import org.opentest4j.TestAbortedException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.time.Duration;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+import java.util.UUID;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.ThreadLocalRandom;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+import static org.apache.flink.connector.testframe.utils.ConnectorTestConstants.DEFAULT_COLLECT_DATA_TIMEOUT;
+import static org.apache.flink.connector.testframe.utils.ConnectorTestConstants.DEFAULT_JOB_STATUS_CHANGE_TIMEOUT;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.terminateJob;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForAllTaskRunning;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForJobStatus;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition;
+import static org.apache.flink.streaming.api.CheckpointingMode.AT_LEAST_ONCE;
+import static org.apache.flink.streaming.api.CheckpointingMode.EXACTLY_ONCE;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+import static org.assertj.core.api.AssertionsForClassTypes.assertThat;
+
+/**
+ * Base class for sink test suite.
+ *
+ * <p>All cases should have well-descriptive JavaDoc, including:
+ *
+ * <ul>
+ *   <li>What's the purpose of this case
+ *   <li>Simple description of how this case works
+ *   <li>Condition to fulfill in order to pass this case
+ *   <li>Requirement of running this case
+ * </ul>
+ */
+@ExtendWith({
+    ConnectorTestingExtension.class,
+    TestLoggerExtension.class,
+    TestCaseInvocationContextProvider.class
+})
+@TestInstance(TestInstance.Lifecycle.PER_CLASS)
+@Experimental
+public abstract class SinkTestSuiteBase<T extends Comparable<T>> {
+    private static final Logger LOG = LoggerFactory.getLogger(SinkTestSuiteBase.class);
+
+    // ----------------------------- Basic test cases ---------------------------------
+
+    /**
+     * Test connector data stream sink.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write them to this sink by the Flink Job.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink need to be equals to
+     * the generated test data. And the records in the sink will be compared to the test data by the
+     * different semantic. There's no requirement for record order.
+     */
+    @TestTemplate
+    @DisplayName("Test data stream sink")
+    public void testBasicSink(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        TestingSinkSettings sinkSettings = getTestingSinkSettings(semantic);
+        final List<T> testRecords = generateTestData(sinkSettings, externalContext);
+
+        // Build and execute Flink job
+        StreamExecutionEnvironment execEnv =
+                testEnv.createExecutionEnvironment(
+                        TestEnvironmentSettings.builder()
+                                .setConnectorJarPaths(externalContext.getConnectorJarPaths())
+                                .build());
+        execEnv.enableCheckpointing(50);
+        DataStream<T> dataStream =
+                execEnv.fromCollection(testRecords)
+                        .name("sourceInSinkTest")
+                        .setParallelism(1)
+                        .returns(externalContext.getProducedType());
+        tryCreateSink(dataStream, externalContext, sinkSettings)
+                .setParallelism(1)
+                .name("sinkInSinkTest");
+        final JobClient jobClient = execEnv.executeAsync("DataStream Sink Test");
+
+        waitForJobStatus(
+                jobClient,
+                Collections.singletonList(JobStatus.FINISHED),
+                Deadline.fromNow(DEFAULT_JOB_STATUS_CHANGE_TIMEOUT));
+
+        // Check test result
+        checkResultWithSemantic(
+                externalContext.createSinkDataReader(sinkSettings), testRecords, semantic);
+    }
+
+    /**
+     * Test connector source restart from a completed savepoint with the same parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write a half part of them to this sink by the Flink Job with parallelism 2 at first. Then
+     * stop the job, restart the same job from the completed savepoint. After the job has been
+     * running, write the other part to the sink and compare the result.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink need to be equals to
+     * the generated test data. And the records in the sink will be compared to the test data by the
+     * different semantic. There's no requirement for record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting from a savepoint")
+    public void testStartFromSavepoint(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 2, 2);
+    }
+
+    /**
+     * Test connector source restart from a completed savepoint with a higher parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write a half part of them to this sink by the Flink Job with parallelism 2 at first. Then
+     * stop the job, restart the same job from the completed savepoint with a higher parallelism 4.
+     * After the job has been running, write the other part to the sink and compare the result.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink need to be equals to
+     * the generated test data. And the records in the sink will be compared to the test data by the
+     * different semantic. There's no requirement for record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting with a higher parallelism")
+    public void testScaleUp(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 2, 4);
+    }
+
+    /**
+     * Test connector source restart from a completed savepoint with a lower parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write a half part of them to this sink by the Flink Job with parallelism 4 at first. Then
+     * stop the job, restart the same job from the completed savepoint with a lower parallelism 2.
+     * After the job has been running, write the other part to the sink and compare the result.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink need to be equals to
+     * the generated test data. And the records in the sink will be compared to the test data by the
+     * different semantic. There's no requirement for record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting with a lower parallelism")
+    public void testScaleDown(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 4, 2);
+    }
+
+    private void restartFromSavepoint(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic,
+            final int beforeParallelism,
+            final int afterParallelism)
+            throws Exception {
+        // Step 1: Preparation
+        TestingSinkSettings sinkSettings = getTestingSinkSettings(semantic);
+        final StreamExecutionEnvironment execEnv =
+                testEnv.createExecutionEnvironment(
+                        TestEnvironmentSettings.builder()
+                                .setConnectorJarPaths(externalContext.getConnectorJarPaths())
+                                .build());
+        execEnv.setRestartStrategy(RestartStrategies.noRestart());
+
+        // Step 2: Generate test data
+        final List<T> testRecords = generateTestData(sinkSettings, externalContext);
+
+        // Step 3: Build and execute Flink job
+        int numBeforeSuccess = testRecords.size() / 2;
+        DataStreamSource<T> source =
+                execEnv.fromSource(
+                                new FromElementsSource<>(
+                                        Boundedness.CONTINUOUS_UNBOUNDED,
+                                        testRecords,
+                                        numBeforeSuccess),
+                                WatermarkStrategy.noWatermarks(),
+                                "beforeRestartSource")
+                        .setParallelism(1);
+
+        DataStream<T> dataStream = source.returns(externalContext.getProducedType());
+        tryCreateSink(dataStream, externalContext, sinkSettings)
+                .name("Sink restart test")
+                .setParallelism(beforeParallelism);
+
+        /**
+         * The job should stop after consume a specified number of records. In order to know when
+         * the specified number of records have been consumed, a collect sink is need to be watched.
+         */
+        CollectResultIterator<T> iterator = addCollectSink(source);
+        final JobClient jobClient = execEnv.executeAsync("Restart Test");
+        iterator.setJobClient(jobClient);
+
+        // Step 4: Wait for the expected result and stop Flink job with a savepoint
+        String savepointDir;

Review comment:
       ```suggestion
           String savepointPath;
       ```
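
       For reference, a minimal sketch of the suggested rename applied to the place where the variable is assigned later in this method, i.e. the stop-with-savepoint call already present in the diff. It reuses the `jobClient` and `testEnv` variables defined above; declaration and assignment are combined here for brevity, whereas the PR declares the variable before the try block.

       ```java
       // Illustrative rename only (savepointPath instead of savepointDir); the
       // stopWithSavepoint call is the one already used in this test suite.
       String savepointPath =
               jobClient
                       .stopWithSavepoint(
                               true, testEnv.getCheckpointUri(), SavepointFormatType.CANONICAL)
                       .get(30, TimeUnit.SECONDS);
       ```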

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/testsuites/SinkTestSuiteBase.java
##########
@@ -0,0 +1,629 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.testsuites;
+
+import org.apache.flink.annotation.Experimental;
+import org.apache.flink.api.common.JobID;
+import org.apache.flink.api.common.JobStatus;
+import org.apache.flink.api.common.eventtime.WatermarkStrategy;
+import org.apache.flink.api.common.restartstrategy.RestartStrategies;
+import org.apache.flink.api.common.time.Deadline;
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.api.connector.sink2.Sink;
+import org.apache.flink.api.connector.source.Boundedness;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.connector.testframe.environment.TestEnvironment;
+import org.apache.flink.connector.testframe.environment.TestEnvironmentSettings;
+import org.apache.flink.connector.testframe.external.ExternalSystemDataReader;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkExternalContext;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkV1ExternalContext;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkV2ExternalContext;
+import org.apache.flink.connector.testframe.external.sink.TestingSinkSettings;
+import org.apache.flink.connector.testframe.junit.extensions.ConnectorTestingExtension;
+import org.apache.flink.connector.testframe.junit.extensions.TestCaseInvocationContextProvider;
+import org.apache.flink.connector.testframe.source.FromElementsSource;
+import org.apache.flink.connector.testframe.utils.CollectIteratorAssertions;
+import org.apache.flink.connector.testframe.utils.MetricQuerier;
+import org.apache.flink.core.execution.JobClient;
+import org.apache.flink.core.execution.SavepointFormatType;
+import org.apache.flink.runtime.metrics.MetricNames;
+import org.apache.flink.streaming.api.CheckpointingMode;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.datastream.DataStreamSink;
+import org.apache.flink.streaming.api.datastream.DataStreamSource;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.operators.collect.CollectResultIterator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperatorFactory;
+import org.apache.flink.streaming.api.operators.collect.CollectStreamSink;
+import org.apache.flink.util.TestLoggerExtension;
+
+import org.apache.commons.math3.util.Precision;
+import org.junit.jupiter.api.DisplayName;
+import org.junit.jupiter.api.TestInstance;
+import org.junit.jupiter.api.TestTemplate;
+import org.junit.jupiter.api.extension.ExtendWith;
+import org.opentest4j.TestAbortedException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.time.Duration;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+import java.util.UUID;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.ThreadLocalRandom;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+import static org.apache.flink.connector.testframe.utils.ConnectorTestConstants.DEFAULT_COLLECT_DATA_TIMEOUT;
+import static org.apache.flink.connector.testframe.utils.ConnectorTestConstants.DEFAULT_JOB_STATUS_CHANGE_TIMEOUT;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.terminateJob;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForAllTaskRunning;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForJobStatus;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition;
+import static org.apache.flink.streaming.api.CheckpointingMode.AT_LEAST_ONCE;
+import static org.apache.flink.streaming.api.CheckpointingMode.EXACTLY_ONCE;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+import static org.assertj.core.api.AssertionsForClassTypes.assertThat;
+
+/**
+ * Base class for sink test suite.
+ *
+ * <p>All cases should have well-descriptive JavaDoc, including:
+ *
+ * <ul>
+ *   <li>What's the purpose of this case
+ *   <li>Simple description of how this case works
+ *   <li>Condition to fulfill in order to pass this case
+ *   <li>Requirement of running this case
+ * </ul>
+ */
+@ExtendWith({
+    ConnectorTestingExtension.class,
+    TestLoggerExtension.class,
+    TestCaseInvocationContextProvider.class
+})
+@TestInstance(TestInstance.Lifecycle.PER_CLASS)
+@Experimental
+public abstract class SinkTestSuiteBase<T extends Comparable<T>> {
+    private static final Logger LOG = LoggerFactory.getLogger(SinkTestSuiteBase.class);
+
+    // ----------------------------- Basic test cases ---------------------------------
+
+    /**
+     * Test connector data stream sink.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write them to this sink by the Flink Job.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink need to be equals to
+     * the generated test data. And the records in the sink will be compared to the test data by the
+     * different semantic. There's no requirement for record order.
+     */
+    @TestTemplate
+    @DisplayName("Test data stream sink")
+    public void testBasicSink(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        TestingSinkSettings sinkSettings = getTestingSinkSettings(semantic);
+        final List<T> testRecords = generateTestData(sinkSettings, externalContext);
+
+        // Build and execute Flink job
+        StreamExecutionEnvironment execEnv =
+                testEnv.createExecutionEnvironment(
+                        TestEnvironmentSettings.builder()
+                                .setConnectorJarPaths(externalContext.getConnectorJarPaths())
+                                .build());
+        execEnv.enableCheckpointing(50);
+        DataStream<T> dataStream =
+                execEnv.fromCollection(testRecords)
+                        .name("sourceInSinkTest")
+                        .setParallelism(1)
+                        .returns(externalContext.getProducedType());
+        tryCreateSink(dataStream, externalContext, sinkSettings)
+                .setParallelism(1)
+                .name("sinkInSinkTest");
+        final JobClient jobClient = execEnv.executeAsync("DataStream Sink Test");
+
+        waitForJobStatus(
+                jobClient,
+                Collections.singletonList(JobStatus.FINISHED),
+                Deadline.fromNow(DEFAULT_JOB_STATUS_CHANGE_TIMEOUT));
+
+        // Check test result
+        checkResultWithSemantic(
+                externalContext.createSinkDataReader(sinkSettings), testRecords, semantic);
+    }
+
+    /**
+     * Test connector source restart from a completed savepoint with the same parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write a half part of them to this sink by the Flink Job with parallelism 2 at first. Then
+     * stop the job, restart the same job from the completed savepoint. After the job has been
+     * running, write the other part to the sink and compare the result.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink need to be equals to
+     * the generated test data. And the records in the sink will be compared to the test data by the
+     * different semantic. There's no requirement for record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting from a savepoint")
+    public void testStartFromSavepoint(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 2, 2);
+    }
+
+    /**
+     * Test connector source restart from a completed savepoint with a higher parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write a half part of them to this sink by the Flink Job with parallelism 2 at first. Then
+     * stop the job, restart the same job from the completed savepoint with a higher parallelism 4.
+     * After the job has been running, write the other part to the sink and compare the result.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink need to be equals to
+     * the generated test data. And the records in the sink will be compared to the test data by the
+     * different semantic. There's no requirement for record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting with a higher parallelism")
+    public void testScaleUp(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 2, 4);
+    }
+
+    /**
+     * Test connector source restart from a completed savepoint with a lower parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write a half part of them to this sink by the Flink Job with parallelism 4 at first. Then
+     * stop the job, restart the same job from the completed savepoint with a lower parallelism 2.
+     * After the job has been running, write the other part to the sink and compare the result.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink need to be equals to
+     * the generated test data. And the records in the sink will be compared to the test data by the
+     * different semantic. There's no requirement for record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting with a lower parallelism")
+    public void testScaleDown(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 4, 2);
+    }
+
+    private void restartFromSavepoint(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic,
+            final int beforeParallelism,
+            final int afterParallelism)
+            throws Exception {
+        // Step 1: Preparation
+        TestingSinkSettings sinkSettings = getTestingSinkSettings(semantic);
+        final StreamExecutionEnvironment execEnv =
+                testEnv.createExecutionEnvironment(
+                        TestEnvironmentSettings.builder()
+                                .setConnectorJarPaths(externalContext.getConnectorJarPaths())
+                                .build());
+        execEnv.setRestartStrategy(RestartStrategies.noRestart());
+
+        // Step 2: Generate test data
+        final List<T> testRecords = generateTestData(sinkSettings, externalContext);
+
+        // Step 3: Build and execute Flink job
+        int numBeforeSuccess = testRecords.size() / 2;
+        DataStreamSource<T> source =
+                execEnv.fromSource(
+                                new FromElementsSource<>(
+                                        Boundedness.CONTINUOUS_UNBOUNDED,
+                                        testRecords,
+                                        numBeforeSuccess),
+                                WatermarkStrategy.noWatermarks(),
+                                "beforeRestartSource")
+                        .setParallelism(1);
+
+        DataStream<T> dataStream = source.returns(externalContext.getProducedType());
+        tryCreateSink(dataStream, externalContext, sinkSettings)
+                .name("Sink restart test")
+                .setParallelism(beforeParallelism);
+
+        /**
+         * The job should stop after consume a specified number of records. In order to know when
+         * the specified number of records have been consumed, a collect sink is need to be watched.
+         */
+        CollectResultIterator<T> iterator = addCollectSink(source);
+        final JobClient jobClient = execEnv.executeAsync("Restart Test");
+        iterator.setJobClient(jobClient);
+
+        // Step 4: Wait for the expected result and stop Flink job with a savepoint
+        String savepointDir;
+        try {
+            final MetricQuerier queryRestClient = new MetricQuerier(new Configuration());
+            waitForAllTaskRunning(
+                    () ->
+                            queryRestClient.getJobDetails(
+                                    testEnv.getRestEndpoint(), jobClient.getJobID()),
+                    Deadline.fromNow(DEFAULT_JOB_STATUS_CHANGE_TIMEOUT));
+
+            waitExpectedSizeData(iterator, numBeforeSuccess);
+
+            savepointDir =
+                    jobClient
+                            .stopWithSavepoint(
+                                    true, testEnv.getCheckpointUri(), SavepointFormatType.CANONICAL)
+                            .get(30, TimeUnit.SECONDS);
+            waitForJobStatus(
+                    jobClient,
+                    Collections.singletonList(JobStatus.FINISHED),
+                    Deadline.fromNow(DEFAULT_JOB_STATUS_CHANGE_TIMEOUT));
+        } catch (Exception e) {
+            killJob(jobClient);
+            throw e;
+        }
+
+        List<T> target = testRecords.subList(0, numBeforeSuccess);
+        checkResultWithSemantic(
+                externalContext.createSinkDataReader(sinkSettings), target, semantic);
+
+        // Step 4: restart the Flink job with the savepoint
+        final StreamExecutionEnvironment restartEnv =
+                testEnv.createExecutionEnvironment(
+                        TestEnvironmentSettings.builder()
+                                .setConnectorJarPaths(externalContext.getConnectorJarPaths())
+                                .setSavepointRestorePath(savepointDir)
+                                .build());
+        restartEnv.enableCheckpointing(50);
+
+        DataStreamSource<T> restartSource =
+                restartEnv
+                        .fromSource(
+                                new FromElementsSource<>(
+                                        Boundedness.CONTINUOUS_UNBOUNDED,
+                                        testRecords,
+                                        testRecords.size()),
+                                WatermarkStrategy.noWatermarks(),
+                                "restartSource")
+                        .setParallelism(1);
+
+        DataStream<T> sinkStream = restartSource.returns(externalContext.getProducedType());
+        tryCreateSink(sinkStream, externalContext, sinkSettings).setParallelism(afterParallelism);
+        addCollectSink(restartSource);
+        final JobClient restartJobClient = restartEnv.executeAsync("Restart Test");
+
+        try {
+            // Check the result
+            checkResultWithSemantic(
+                    externalContext.createSinkDataReader(sinkSettings), testRecords, semantic);
+        } finally {
+            killJob(restartJobClient);
+            iterator.close();
+        }
+    }
+
+    /**
+     * Test connector sink metrics.
+     *
+     * <p>This test will create a sink in the external system, generate test data and write them to
+     * the sink via a Flink job. Then read and compare the metrics.
+     *
+     * <p>Now test: numRecordsOut
+     */
+    @TestTemplate
+    @DisplayName("Test sink metrics")
+    public void testMetrics(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        TestingSinkSettings sinkSettings = getTestingSinkSettings(semantic);
+        int parallelism = 1;
+        final List<T> testRecords = generateTestData(sinkSettings, externalContext);
+
+        // make sure use different names when executes multi times
+        String sinkName = "metricTestSink" + testRecords.hashCode();
+        final StreamExecutionEnvironment env =
+                testEnv.createExecutionEnvironment(
+                        TestEnvironmentSettings.builder()
+                                .setConnectorJarPaths(externalContext.getConnectorJarPaths())
+                                .build());
+        env.enableCheckpointing(50);
+
+        DataStreamSource<T> source =
+                env.fromSource(
+                                new FromElementsSource<>(
+                                        Boundedness.CONTINUOUS_UNBOUNDED,
+                                        testRecords,
+                                        testRecords.size()),
+                                WatermarkStrategy.noWatermarks(),
+                                "metricTestSource")
+                        .setParallelism(1);
+
+        DataStream<T> dataStream = source.returns(externalContext.getProducedType());
+        tryCreateSink(dataStream, externalContext, sinkSettings)
+                .name(sinkName)
+                .setParallelism(parallelism);
+        final JobClient jobClient = env.executeAsync("Metrics Test");
+        final MetricQuerier queryRestClient = new MetricQuerier(new Configuration());
+        try {
+            waitForAllTaskRunning(
+                    () ->
+                            queryRestClient.getJobDetails(
+                                    testEnv.getRestEndpoint(), jobClient.getJobID()),
+                    Deadline.fromNow(DEFAULT_JOB_STATUS_CHANGE_TIMEOUT));
+
+            waitUntilCondition(
+                    () -> {
+                        // test metrics
+                        try {
+                            return compareSinkMetrics(
+                                    queryRestClient,
+                                    testEnv,
+                                    externalContext,
+                                    jobClient.getJobID(),
+                                    sinkName,
+                                    testRecords.size());
+                        } catch (Exception e) {
+                            // skip failed assert try
+                            return false;
+                        }
+                    },
+                    Deadline.fromNow(DEFAULT_COLLECT_DATA_TIMEOUT));
+        } finally {
+            // Clean up
+            killJob(jobClient);
+        }
+    }
+
+    // ----------------------------- Helper Functions ---------------------------------
+
+    /**
+     * Generate a set of test records.
+     *
+     * @param testingSinkSettings sink settings
+     * @param externalContext External context
+     * @return Collection of generated test records
+     */
+    protected List<T> generateTestData(
+            TestingSinkSettings testingSinkSettings,
+            DataStreamSinkExternalContext<T> externalContext) {
+        return externalContext.generateTestData(
+                testingSinkSettings, ThreadLocalRandom.current().nextLong());
+    }
+
+    /**
+     * Poll records from the sink.
+     *
+     * @param result Append records to which list
+     * @param reader The sink reader
+     * @param expected The expected list which help to stop polling
+     * @param retryTimes The retry times
+     * @param semantic The semantic
+     * @return Collection of records in the Sink
+     */
+    private List<T> appendResultData(
+            List<T> result,
+            ExternalSystemDataReader<T> reader,
+            List<T> expected,
+            int retryTimes,
+            CheckpointingMode semantic) {
+        long timeoutMs = 1000L;
+        int retryIndex = 0;
+
+        while (retryIndex++ < retryTimes
+                && !checkGetEnoughRecordsWithSemantic(expected, result, semantic)) {
+            result.addAll(reader.poll(Duration.ofMillis(timeoutMs)));
+        }
+        return result;
+    }
+
+    /**
+     * Check whether the polling should stop.
+     *
+     * @param expected The expected list which help to stop polling
+     * @param result The records that have been read
+     * @param semantic The semantic
+     * @return Whether the polling should stop
+     */
+    private boolean checkGetEnoughRecordsWithSemantic(
+            List<T> expected, List<T> result, CheckpointingMode semantic) {
+        checkNotNull(expected);
+        checkNotNull(result);
+        if (EXACTLY_ONCE.equals(semantic)) {
+            return expected.size() <= result.size();
+        } else if (AT_LEAST_ONCE.equals(semantic)) {
+            Set<Integer> matchedIndex = new HashSet<>();
+            for (T record : expected) {
+                int before = matchedIndex.size();
+                for (int i = 0; i < result.size(); i++) {
+                    if (matchedIndex.contains(i)) {
+                        continue;
+                    }
+                    if (record.equals(result.get(i))) {
+                        matchedIndex.add(i);
+                        break;
+                    }
+                }
+                // if not find the record in the result
+                if (before == matchedIndex.size()) {
+                    return false;
+                }
+            }
+            return true;
+        }
+        throw new IllegalStateException(
+                String.format("%s delivery guarantee doesn't support test.", semantic.name()));
+    }
+
+    /**
+     * Compare the test data with the result.
+     *
+     * @param reader the data reader for the sink
+     * @param testData the test data
+     * @param semantic the supported semantic, see {@link CheckpointingMode}
+     */
+    private void checkResultWithSemantic(
+            ExternalSystemDataReader<T> reader, List<T> testData, CheckpointingMode semantic)
+            throws Exception {
+        final ArrayList<T> result = new ArrayList<>();
+        waitUntilCondition(
+                () -> {
+                    appendResultData(result, reader, testData, 30, semantic);
+                    try {
+                        CollectIteratorAssertions.assertThat(sort(result).iterator())
+                                .matchesRecordsFromSource(Arrays.asList(sort(testData)), semantic);
+                        return true;
+                    } catch (Throwable t) {
+                        return false;
+                    }
+                },
+                Deadline.fromNow(DEFAULT_COLLECT_DATA_TIMEOUT));
+    }
+
+    /** Compare the metrics. */
+    private boolean compareSinkMetrics(
+            MetricQuerier metricQuerier,
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> context,
+            JobID jobId,
+            String sinkName,
+            long allRecordSize)
+            throws Exception {
+        double sumNumRecordsOut =
+                metricQuerier.getAggregatedMetricsByRestAPI(
+                        testEnv.getRestEndpoint(),
+                        jobId,
+                        sinkName,
+                        MetricNames.IO_NUM_RECORDS_OUT,
+                        getSinkMetricFilter(context));
+        return Precision.equals(allRecordSize, sumNumRecordsOut);
+    }
+
+    /** Sort the list. */
+    private List<T> sort(List<T> list) {
+        return list.stream().sorted().collect(Collectors.toList());
+    }
+
+    private TestingSinkSettings getTestingSinkSettings(CheckpointingMode checkpointingMode) {
+        return TestingSinkSettings.builder().setCheckpointingMode(checkpointingMode).build();
+    }
+
+    private void killJob(JobClient jobClient) throws Exception {
+        terminateJob(jobClient);
+        waitForJobStatus(
+                jobClient,
+                Collections.singletonList(JobStatus.CANCELED),
+                Deadline.fromNow(DEFAULT_JOB_STATUS_CHANGE_TIMEOUT));
+    }
+
+    private DataStreamSink<T> tryCreateSink(
+            DataStream<T> dataStream,
+            DataStreamSinkExternalContext<T> context,
+            TestingSinkSettings sinkSettings) {
+        try {
+            if (context instanceof DataStreamSinkV1ExternalContext) {
+                org.apache.flink.api.connector.sink.Sink<T, ?, ?, ?> sinkV1 =
+                        ((DataStreamSinkV1ExternalContext<T>) context).createSink(sinkSettings);
+                return dataStream.sinkTo(sinkV1);
+            } else if (context instanceof DataStreamSinkV2ExternalContext) {
+                Sink<T> sinkV2 =
+                        ((DataStreamSinkV2ExternalContext<T>) context).createSink(sinkSettings);
+                return dataStream.sinkTo(sinkV2);
+            } else {
+                throw new IllegalArgumentException(
+                        String.format("Get unexpected sink context: %s", context.getClass()));

Review comment:
       Hint: please use the log pattern: "The supported are ..., but the actual is ...".
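
       A minimal sketch of what the reworded exception in `tryCreateSink` could look like when following that pattern; the exact wording is illustrative only and not part of the PR. It uses the `context` variable of the enclosing method and the context classes already imported in `SinkTestSuiteBase`.

       ```java
       // Illustrative only: follows the "The supported are ..., but the actual is ..."
       // pattern with the two context types the suite actually dispatches on.
       throw new IllegalArgumentException(
               String.format(
                       "The supported sink contexts are %s and %s, but the actual context is: %s",
                       DataStreamSinkV1ExternalContext.class.getCanonicalName(),
                       DataStreamSinkV2ExternalContext.class.getCanonicalName(),
                       context.getClass().getCanonicalName()));
       ```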

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/testsuites/SinkTestSuiteBase.java
##########
@@ -0,0 +1,629 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.testsuites;
+
+import org.apache.flink.annotation.Experimental;
+import org.apache.flink.api.common.JobID;
+import org.apache.flink.api.common.JobStatus;
+import org.apache.flink.api.common.eventtime.WatermarkStrategy;
+import org.apache.flink.api.common.restartstrategy.RestartStrategies;
+import org.apache.flink.api.common.time.Deadline;
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.api.connector.sink2.Sink;
+import org.apache.flink.api.connector.source.Boundedness;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.connector.testframe.environment.TestEnvironment;
+import org.apache.flink.connector.testframe.environment.TestEnvironmentSettings;
+import org.apache.flink.connector.testframe.external.ExternalSystemDataReader;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkExternalContext;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkV1ExternalContext;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkV2ExternalContext;
+import org.apache.flink.connector.testframe.external.sink.TestingSinkSettings;
+import org.apache.flink.connector.testframe.junit.extensions.ConnectorTestingExtension;
+import org.apache.flink.connector.testframe.junit.extensions.TestCaseInvocationContextProvider;
+import org.apache.flink.connector.testframe.source.FromElementsSource;
+import org.apache.flink.connector.testframe.utils.CollectIteratorAssertions;
+import org.apache.flink.connector.testframe.utils.MetricQuerier;
+import org.apache.flink.core.execution.JobClient;
+import org.apache.flink.core.execution.SavepointFormatType;
+import org.apache.flink.runtime.metrics.MetricNames;
+import org.apache.flink.streaming.api.CheckpointingMode;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.datastream.DataStreamSink;
+import org.apache.flink.streaming.api.datastream.DataStreamSource;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.operators.collect.CollectResultIterator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperatorFactory;
+import org.apache.flink.streaming.api.operators.collect.CollectStreamSink;
+import org.apache.flink.util.TestLoggerExtension;
+
+import org.apache.commons.math3.util.Precision;
+import org.junit.jupiter.api.DisplayName;
+import org.junit.jupiter.api.TestInstance;
+import org.junit.jupiter.api.TestTemplate;
+import org.junit.jupiter.api.extension.ExtendWith;
+import org.opentest4j.TestAbortedException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.time.Duration;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+import java.util.UUID;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.ThreadLocalRandom;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+import static org.apache.flink.connector.testframe.utils.ConnectorTestConstants.DEFAULT_COLLECT_DATA_TIMEOUT;
+import static org.apache.flink.connector.testframe.utils.ConnectorTestConstants.DEFAULT_JOB_STATUS_CHANGE_TIMEOUT;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.terminateJob;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForAllTaskRunning;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForJobStatus;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition;
+import static org.apache.flink.streaming.api.CheckpointingMode.AT_LEAST_ONCE;
+import static org.apache.flink.streaming.api.CheckpointingMode.EXACTLY_ONCE;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+import static org.assertj.core.api.AssertionsForClassTypes.assertThat;
+
+/**
+ * Base class for sink test suite.
+ *
+ * <p>All cases should have well-descriptive JavaDoc, including:
+ *
+ * <ul>
+ *   <li>What's the purpose of this case
+ *   <li>Simple description of how this case works
+ *   <li>Condition to fulfill in order to pass this case
+ *   <li>Requirement of running this case
+ * </ul>
+ */
+@ExtendWith({
+    ConnectorTestingExtension.class,
+    TestLoggerExtension.class,
+    TestCaseInvocationContextProvider.class
+})
+@TestInstance(TestInstance.Lifecycle.PER_CLASS)
+@Experimental
+public abstract class SinkTestSuiteBase<T extends Comparable<T>> {
+    private static final Logger LOG = LoggerFactory.getLogger(SinkTestSuiteBase.class);
+
+    // ----------------------------- Basic test cases ---------------------------------
+
+    /**
+     * Test connector data stream sink.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write them to this sink by the Flink Job.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink need to be equals to
+     * the generated test data. And the records in the sink will be compared to the test data by the
+     * different semantic. There's no requirement for record order.
+     */
+    @TestTemplate
+    @DisplayName("Test data stream sink")
+    public void testBasicSink(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        TestingSinkSettings sinkSettings = getTestingSinkSettings(semantic);
+        final List<T> testRecords = generateTestData(sinkSettings, externalContext);
+
+        // Build and execute Flink job
+        StreamExecutionEnvironment execEnv =
+                testEnv.createExecutionEnvironment(
+                        TestEnvironmentSettings.builder()
+                                .setConnectorJarPaths(externalContext.getConnectorJarPaths())
+                                .build());
+        execEnv.enableCheckpointing(50);
+        DataStream<T> dataStream =
+                execEnv.fromCollection(testRecords)
+                        .name("sourceInSinkTest")
+                        .setParallelism(1)
+                        .returns(externalContext.getProducedType());
+        tryCreateSink(dataStream, externalContext, sinkSettings)
+                .setParallelism(1)
+                .name("sinkInSinkTest");
+        final JobClient jobClient = execEnv.executeAsync("DataStream Sink Test");
+
+        waitForJobStatus(
+                jobClient,
+                Collections.singletonList(JobStatus.FINISHED),
+                Deadline.fromNow(DEFAULT_JOB_STATUS_CHANGE_TIMEOUT));
+
+        // Check test result
+        checkResultWithSemantic(
+                externalContext.createSinkDataReader(sinkSettings), testRecords, semantic);
+    }
+
+    /**
+     * Test connector source restart from a completed savepoint with the same parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write a half part of them to this sink by the Flink Job with parallelism 2 at first. Then
+     * stop the job, restart the same job from the completed savepoint. After the job has been
+     * running, write the other part to the sink and compare the result.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink need to be equals to
+     * the generated test data. And the records in the sink will be compared to the test data by the
+     * different semantic. There's no requirement for record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting from a savepoint")
+    public void testStartFromSavepoint(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 2, 2);
+    }
+
+    /**
+     * Test connector source restart from a completed savepoint with a higher parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write a half part of them to this sink by the Flink Job with parallelism 2 at first. Then
+     * stop the job, restart the same job from the completed savepoint with a higher parallelism 4.
+     * After the job has been running, write the other part to the sink and compare the result.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink need to be equals to
+     * the generated test data. And the records in the sink will be compared to the test data by the
+     * different semantic. There's no requirement for record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting with a higher parallelism")
+    public void testScaleUp(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 2, 4);
+    }
+
+    /**
+     * Test connector sink restart from a completed savepoint with a lower parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write half of it to the sink with a Flink job running at parallelism 4. It then stops
+     * the job with a savepoint and restarts the same job from that savepoint with a lower
+     * parallelism of 2. Once the job is running again, the other half of the data is written to
+     * the sink and the result is compared.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink needs to equal the
+     * number of generated test records, and the records in the sink are compared against the test
+     * data according to the configured semantic. There is no requirement on record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting with a lower parallelism")
+    public void testScaleDown(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 4, 2);
+    }
+
+    private void restartFromSavepoint(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic,
+            final int beforeParallelism,
+            final int afterParallelism)
+            throws Exception {
+        // Step 1: Preparation
+        TestingSinkSettings sinkSettings = getTestingSinkSettings(semantic);
+        final StreamExecutionEnvironment execEnv =
+                testEnv.createExecutionEnvironment(
+                        TestEnvironmentSettings.builder()
+                                .setConnectorJarPaths(externalContext.getConnectorJarPaths())
+                                .build());
+        execEnv.setRestartStrategy(RestartStrategies.noRestart());
+
+        // Step 2: Generate test data
+        final List<T> testRecords = generateTestData(sinkSettings, externalContext);
+
+        // Step 3: Build and execute Flink job
+        int numBeforeSuccess = testRecords.size() / 2;
+        DataStreamSource<T> source =
+                execEnv.fromSource(
+                                new FromElementsSource<>(
+                                        Boundedness.CONTINUOUS_UNBOUNDED,
+                                        testRecords,
+                                        numBeforeSuccess),
+                                WatermarkStrategy.noWatermarks(),
+                                "beforeRestartSource")
+                        .setParallelism(1);
+
+        DataStream<T> dataStream = source.returns(externalContext.getProducedType());
+        tryCreateSink(dataStream, externalContext, sinkSettings)
+                .name("Sink restart test")
+                .setParallelism(beforeParallelism);
+
+        /*
+         * The job should stop after consuming a specified number of records. In order to know
+         * when the specified number of records have been consumed, a collect sink needs to be
+         * watched.
+         */
+        CollectResultIterator<T> iterator = addCollectSink(source);
+        final JobClient jobClient = execEnv.executeAsync("Restart Test");
+        iterator.setJobClient(jobClient);
+
+        // Step 4: Wait for the expected result and stop Flink job with a savepoint
+        String savepointDir;
+        try {
+            final MetricQuerier queryRestClient = new MetricQuerier(new Configuration());

Review comment:
       makes sense to me @ruanhang1993 




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] zentol commented on a change in pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
zentol commented on a change in pull request #18496:
URL: https://github.com/apache/flink/pull/18496#discussion_r807958833



##########
File path: flink-test-utils-parent/flink-connector-test-utils/pom.xml
##########
@@ -95,4 +95,30 @@
 			<scope>compile</scope>
 		</dependency>
 	</dependencies>
+
+	<build>
+		<plugins>
+			<plugin>
+				<groupId>org.apache.maven.plugins</groupId>
+				<artifactId>maven-shade-plugin</artifactId>
+				<executions>
+					<execution>
+						<phase>package</phase>
+						<goals>
+							<goal>shade</goal>
+						</goals>
+						<configuration>
+							<shadedArtifactAttached>true</shadedArtifactAttached>
+							<shadedClassifierName>source</shadedClassifierName>
+							<artifactSet>
+								<includes>
+									<include>**/connector/testframe/source/**</include>

Review comment:
       As in, why do we need to create a separate jar when the normal jar would also do the trick?
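       For illustration, the jar produced by this shade execution (or, per the question above, the module's normal jar) is what the test suite ultimately passes to the cluster through TestEnvironmentSettings. A minimal sketch, assuming the builder accepts a list of jar URLs as in the suite code earlier in this thread, and using a placeholder jar path that is not from this PR:

           import java.net.URL;
           import java.nio.file.Paths;
           import java.util.Collections;
           import java.util.List;

           class ConnectorJarWiringSketch {
               // Sketch only: resolve the jar produced by the shade execution above (placeholder
               // path, not from this PR) and hand it to the test environment, mirroring how the
               // sink test suite wires externalContext.getConnectorJarPaths() into the settings.
               static TestEnvironmentSettings settingsWithShadedSourceJar() throws Exception {
                   URL sourceJar =
                           Paths.get("target/flink-connector-test-utils-source.jar").toUri().toURL();
                   List<URL> connectorJarPaths = Collections.singletonList(sourceJar);
                   return TestEnvironmentSettings.builder()
                           .setConnectorJarPaths(connectorJarPaths) // builder call as used in the suite
                           .build();
               }
           }

       The resulting settings object would then be passed to TestEnvironment#createExecutionEnvironment, exactly as the sink test suite does, so whichever jar the build produces only needs to be reachable at a known path.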




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "909c155557a856976df8b5be1729553873ecbd4b",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30114",
       "triggerID" : "909c155557a856976df8b5be1729553873ecbd4b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30369",
       "triggerID" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30469",
       "triggerID" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30474",
       "triggerID" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31339",
       "triggerID" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31375",
       "triggerID" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31405",
       "triggerID" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "triggerType" : "PUSH"
     }, {
       "hash" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31463",
       "triggerID" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0b52c13271c485e5a6776a1aca81c753d0d4bbc4",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "0b52c13271c485e5a6776a1aca81c753d0d4bbc4",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * bc9871b19a43fd0b99e1b53336534d59612a119e Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31405) 
   * 35d869286d16c6d306c9059cf5d3af339934c229 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31463) 
   * 0b52c13271c485e5a6776a1aca81c753d0d4bbc4 UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "909c155557a856976df8b5be1729553873ecbd4b",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30114",
       "triggerID" : "909c155557a856976df8b5be1729553873ecbd4b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30369",
       "triggerID" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30469",
       "triggerID" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30474",
       "triggerID" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31339",
       "triggerID" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31375",
       "triggerID" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31405",
       "triggerID" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "triggerType" : "PUSH"
     }, {
       "hash" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31463",
       "triggerID" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0b52c13271c485e5a6776a1aca81c753d0d4bbc4",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "0b52c13271c485e5a6776a1aca81c753d0d4bbc4",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * bc9871b19a43fd0b99e1b53336534d59612a119e Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31405) 
   * 35d869286d16c6d306c9059cf5d3af339934c229 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31463) 
   * 0b52c13271c485e5a6776a1aca81c753d0d4bbc4 UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "909c155557a856976df8b5be1729553873ecbd4b",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30114",
       "triggerID" : "909c155557a856976df8b5be1729553873ecbd4b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30369",
       "triggerID" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30469",
       "triggerID" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30474",
       "triggerID" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31339",
       "triggerID" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31375",
       "triggerID" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31405",
       "triggerID" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "triggerType" : "PUSH"
     }, {
       "hash" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31463",
       "triggerID" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0b52c13271c485e5a6776a1aca81c753d0d4bbc4",
       "status" : "CANCELED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31466",
       "triggerID" : "0b52c13271c485e5a6776a1aca81c753d0d4bbc4",
       "triggerType" : "PUSH"
     }, {
       "hash" : "da588603a577a2b26bcf90fcd38653f7ec8a3a74",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31477",
       "triggerID" : "da588603a577a2b26bcf90fcd38653f7ec8a3a74",
       "triggerType" : "PUSH"
     }, {
       "hash" : "64e30250dc0ad0d011b8d3d2fe2f15ce8e30906e",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31505",
       "triggerID" : "64e30250dc0ad0d011b8d3d2fe2f15ce8e30906e",
       "triggerType" : "PUSH"
     }, {
       "hash" : "48dd16592a335d4298e0aa08b9bb89d4cc72994d",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "48dd16592a335d4298e0aa08b9bb89d4cc72994d",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 0b52c13271c485e5a6776a1aca81c753d0d4bbc4 Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31466) 
   * da588603a577a2b26bcf90fcd38653f7ec8a3a74 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31477) 
   * 64e30250dc0ad0d011b8d3d2fe2f15ce8e30906e Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31505) 
   * 48dd16592a335d4298e0aa08b9bb89d4cc72994d UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "909c155557a856976df8b5be1729553873ecbd4b",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30114",
       "triggerID" : "909c155557a856976df8b5be1729553873ecbd4b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30369",
       "triggerID" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30469",
       "triggerID" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30474",
       "triggerID" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31339",
       "triggerID" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31375",
       "triggerID" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31405",
       "triggerID" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "triggerType" : "PUSH"
     }, {
       "hash" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31463",
       "triggerID" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0b52c13271c485e5a6776a1aca81c753d0d4bbc4",
       "status" : "CANCELED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31466",
       "triggerID" : "0b52c13271c485e5a6776a1aca81c753d0d4bbc4",
       "triggerType" : "PUSH"
     }, {
       "hash" : "da588603a577a2b26bcf90fcd38653f7ec8a3a74",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31477",
       "triggerID" : "da588603a577a2b26bcf90fcd38653f7ec8a3a74",
       "triggerType" : "PUSH"
     }, {
       "hash" : "64e30250dc0ad0d011b8d3d2fe2f15ce8e30906e",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31505",
       "triggerID" : "64e30250dc0ad0d011b8d3d2fe2f15ce8e30906e",
       "triggerType" : "PUSH"
     }, {
       "hash" : "c1619577228a3fde9684f2c85965d6d1f76addbf",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31518",
       "triggerID" : "c1619577228a3fde9684f2c85965d6d1f76addbf",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 0b52c13271c485e5a6776a1aca81c753d0d4bbc4 Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31466) 
   * da588603a577a2b26bcf90fcd38653f7ec8a3a74 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31477) 
   * 64e30250dc0ad0d011b8d3d2fe2f15ce8e30906e Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31505) 
   * c1619577228a3fde9684f2c85965d6d1f76addbf Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31518) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "909c155557a856976df8b5be1729553873ecbd4b",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30114",
       "triggerID" : "909c155557a856976df8b5be1729553873ecbd4b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30369",
       "triggerID" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30469",
       "triggerID" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30474",
       "triggerID" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31339",
       "triggerID" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31375",
       "triggerID" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31405",
       "triggerID" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "triggerType" : "PUSH"
     }, {
       "hash" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31463",
       "triggerID" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0b52c13271c485e5a6776a1aca81c753d0d4bbc4",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31466",
       "triggerID" : "0b52c13271c485e5a6776a1aca81c753d0d4bbc4",
       "triggerType" : "PUSH"
     }, {
       "hash" : "da588603a577a2b26bcf90fcd38653f7ec8a3a74",
       "status" : "CANCELED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31477",
       "triggerID" : "da588603a577a2b26bcf90fcd38653f7ec8a3a74",
       "triggerType" : "PUSH"
     }, {
       "hash" : "64e30250dc0ad0d011b8d3d2fe2f15ce8e30906e",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31505",
       "triggerID" : "64e30250dc0ad0d011b8d3d2fe2f15ce8e30906e",
       "triggerType" : "PUSH"
     }, {
       "hash" : "c1619577228a3fde9684f2c85965d6d1f76addbf",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31518",
       "triggerID" : "c1619577228a3fde9684f2c85965d6d1f76addbf",
       "triggerType" : "PUSH"
     }, {
       "hash" : "1076a64c9f916fe9d8a23d38aafbd1f359b038d9",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "1076a64c9f916fe9d8a23d38aafbd1f359b038d9",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * da588603a577a2b26bcf90fcd38653f7ec8a3a74 Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31477) 
   * 64e30250dc0ad0d011b8d3d2fe2f15ce8e30906e Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31505) 
   * c1619577228a3fde9684f2c85965d6d1f76addbf Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31518) 
   * 1076a64c9f916fe9d8a23d38aafbd1f359b038d9 UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "909c155557a856976df8b5be1729553873ecbd4b",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30114",
       "triggerID" : "909c155557a856976df8b5be1729553873ecbd4b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30369",
       "triggerID" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30469",
       "triggerID" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30474",
       "triggerID" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31339",
       "triggerID" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31375",
       "triggerID" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31405",
       "triggerID" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "triggerType" : "PUSH"
     }, {
       "hash" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31463",
       "triggerID" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0b52c13271c485e5a6776a1aca81c753d0d4bbc4",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31466",
       "triggerID" : "0b52c13271c485e5a6776a1aca81c753d0d4bbc4",
       "triggerType" : "PUSH"
     }, {
       "hash" : "da588603a577a2b26bcf90fcd38653f7ec8a3a74",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31477",
       "triggerID" : "da588603a577a2b26bcf90fcd38653f7ec8a3a74",
       "triggerType" : "PUSH"
     }, {
       "hash" : "64e30250dc0ad0d011b8d3d2fe2f15ce8e30906e",
       "status" : "CANCELED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31505",
       "triggerID" : "64e30250dc0ad0d011b8d3d2fe2f15ce8e30906e",
       "triggerType" : "PUSH"
     }, {
       "hash" : "c1619577228a3fde9684f2c85965d6d1f76addbf",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31518",
       "triggerID" : "c1619577228a3fde9684f2c85965d6d1f76addbf",
       "triggerType" : "PUSH"
     }, {
       "hash" : "1076a64c9f916fe9d8a23d38aafbd1f359b038d9",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31529",
       "triggerID" : "1076a64c9f916fe9d8a23d38aafbd1f359b038d9",
       "triggerType" : "PUSH"
     }, {
       "hash" : "ebca9a1e955205c53ea919b863c9550642bc73db",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31536",
       "triggerID" : "ebca9a1e955205c53ea919b863c9550642bc73db",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 64e30250dc0ad0d011b8d3d2fe2f15ce8e30906e Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31505) 
   * c1619577228a3fde9684f2c85965d6d1f76addbf Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31518) 
   * 1076a64c9f916fe9d8a23d38aafbd1f359b038d9 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31529) 
   * ebca9a1e955205c53ea919b863c9550642bc73db Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31536) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "909c155557a856976df8b5be1729553873ecbd4b",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30114",
       "triggerID" : "909c155557a856976df8b5be1729553873ecbd4b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30369",
       "triggerID" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30469",
       "triggerID" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30474",
       "triggerID" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31339",
       "triggerID" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31375",
       "triggerID" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31405",
       "triggerID" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "triggerType" : "PUSH"
     }, {
       "hash" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31463",
       "triggerID" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0b52c13271c485e5a6776a1aca81c753d0d4bbc4",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31466",
       "triggerID" : "0b52c13271c485e5a6776a1aca81c753d0d4bbc4",
       "triggerType" : "PUSH"
     }, {
       "hash" : "da588603a577a2b26bcf90fcd38653f7ec8a3a74",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31477",
       "triggerID" : "da588603a577a2b26bcf90fcd38653f7ec8a3a74",
       "triggerType" : "PUSH"
     }, {
       "hash" : "64e30250dc0ad0d011b8d3d2fe2f15ce8e30906e",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31505",
       "triggerID" : "64e30250dc0ad0d011b8d3d2fe2f15ce8e30906e",
       "triggerType" : "PUSH"
     }, {
       "hash" : "c1619577228a3fde9684f2c85965d6d1f76addbf",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31518",
       "triggerID" : "c1619577228a3fde9684f2c85965d6d1f76addbf",
       "triggerType" : "PUSH"
     }, {
       "hash" : "1076a64c9f916fe9d8a23d38aafbd1f359b038d9",
       "status" : "CANCELED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31529",
       "triggerID" : "1076a64c9f916fe9d8a23d38aafbd1f359b038d9",
       "triggerType" : "PUSH"
     }, {
       "hash" : "ebca9a1e955205c53ea919b863c9550642bc73db",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31536",
       "triggerID" : "ebca9a1e955205c53ea919b863c9550642bc73db",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 1076a64c9f916fe9d8a23d38aafbd1f359b038d9 Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31529) 
   * ebca9a1e955205c53ea919b863c9550642bc73db Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31536) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "909c155557a856976df8b5be1729553873ecbd4b",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30114",
       "triggerID" : "909c155557a856976df8b5be1729553873ecbd4b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30369",
       "triggerID" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30469",
       "triggerID" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30474",
       "triggerID" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31339",
       "triggerID" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31375",
       "triggerID" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31405",
       "triggerID" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "triggerType" : "PUSH"
     }, {
       "hash" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31463",
       "triggerID" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0b52c13271c485e5a6776a1aca81c753d0d4bbc4",
       "status" : "CANCELED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31466",
       "triggerID" : "0b52c13271c485e5a6776a1aca81c753d0d4bbc4",
       "triggerType" : "PUSH"
     }, {
       "hash" : "da588603a577a2b26bcf90fcd38653f7ec8a3a74",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31477",
       "triggerID" : "da588603a577a2b26bcf90fcd38653f7ec8a3a74",
       "triggerType" : "PUSH"
     }, {
       "hash" : "64e30250dc0ad0d011b8d3d2fe2f15ce8e30906e",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31505",
       "triggerID" : "64e30250dc0ad0d011b8d3d2fe2f15ce8e30906e",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 0b52c13271c485e5a6776a1aca81c753d0d4bbc4 Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31466) 
   * da588603a577a2b26bcf90fcd38653f7ec8a3a74 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31477) 
   * 64e30250dc0ad0d011b8d3d2fe2f15ce8e30906e Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31505) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "909c155557a856976df8b5be1729553873ecbd4b",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30114",
       "triggerID" : "909c155557a856976df8b5be1729553873ecbd4b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30369",
       "triggerID" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30469",
       "triggerID" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30474",
       "triggerID" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31339",
       "triggerID" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31375",
       "triggerID" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31405",
       "triggerID" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "triggerType" : "PUSH"
     }, {
       "hash" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * bc9871b19a43fd0b99e1b53336534d59612a119e Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31405) 
   * 35d869286d16c6d306c9059cf5d3af339934c229 UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "909c155557a856976df8b5be1729553873ecbd4b",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30114",
       "triggerID" : "909c155557a856976df8b5be1729553873ecbd4b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30369",
       "triggerID" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30469",
       "triggerID" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30474",
       "triggerID" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31339",
       "triggerID" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31375",
       "triggerID" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31405",
       "triggerID" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "triggerType" : "PUSH"
     }, {
       "hash" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31463",
       "triggerID" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0b52c13271c485e5a6776a1aca81c753d0d4bbc4",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "0b52c13271c485e5a6776a1aca81c753d0d4bbc4",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * bc9871b19a43fd0b99e1b53336534d59612a119e Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31405) 
   * 35d869286d16c6d306c9059cf5d3af339934c229 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31463) 
   * 0b52c13271c485e5a6776a1aca81c753d0d4bbc4 UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "909c155557a856976df8b5be1729553873ecbd4b",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30114",
       "triggerID" : "909c155557a856976df8b5be1729553873ecbd4b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30369",
       "triggerID" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30469",
       "triggerID" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30474",
       "triggerID" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31339",
       "triggerID" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31375",
       "triggerID" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31405",
       "triggerID" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "triggerType" : "PUSH"
     }, {
       "hash" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31463",
       "triggerID" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * bc9871b19a43fd0b99e1b53336534d59612a119e Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31405) 
   * 35d869286d16c6d306c9059cf5d3af339934c229 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31463) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] leonardBang closed pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
leonardBang closed pull request #18496:
URL: https://github.com/apache/flink/pull/18496


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "909c155557a856976df8b5be1729553873ecbd4b",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30114",
       "triggerID" : "909c155557a856976df8b5be1729553873ecbd4b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30369",
       "triggerID" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30469",
       "triggerID" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30474",
       "triggerID" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31339",
       "triggerID" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31375",
       "triggerID" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 0034fb25f7fbbbcf302fb18626d7983f32732ca5 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31339) 
   * b8513c81bd9bc1e30efa4ea1fae35d30fd33472c Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31375) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "909c155557a856976df8b5be1729553873ecbd4b",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30114",
       "triggerID" : "909c155557a856976df8b5be1729553873ecbd4b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30369",
       "triggerID" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30469",
       "triggerID" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30474",
       "triggerID" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31339",
       "triggerID" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31375",
       "triggerID" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31405",
       "triggerID" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "triggerType" : "PUSH"
     }, {
       "hash" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31463",
       "triggerID" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * bc9871b19a43fd0b99e1b53336534d59612a119e Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31405) 
   * 35d869286d16c6d306c9059cf5d3af339934c229 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31463) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] ruanhang1993 commented on a change in pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
ruanhang1993 commented on a change in pull request #18496:
URL: https://github.com/apache/flink/pull/18496#discussion_r806639211



##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/external/sink/DataStreamSinkV1ExternalContext.java
##########
@@ -0,0 +1,39 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.external.sink;
+
+import org.apache.flink.annotation.Experimental;
+import org.apache.flink.api.connector.sink.Sink;
+
+/**
+ * External context for DataStream sinks that use the version 1 Sink API.

Review comment:
       No, this PR only adds the tests for the Kafka connector.

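       For reference, a connector opts into these cases by extending the suite and supplying its own
       external context through the test framework's JUnit annotations. A minimal sketch is shown below;
       the annotations and environment classes (`@TestEnv`, `@TestExternalSystem`, `@TestSemantics`,
       `@TestContext`, `MiniClusterTestEnvironment`, `DefaultContainerizedExternalSystem`) are existing
       parts of the connector testing framework, while `MyKafkaSinkE2ECase` and
       `MyKafkaSinkExternalContextFactory` are hypothetical names, not the Kafka test classes added in
       this PR (imports omitted):

```java
// Sketch only: how a connector module could hook into SinkTestSuiteBase.
// The test-class and factory names here are hypothetical.
public class MyKafkaSinkE2ECase extends SinkTestSuiteBase<String> {

    // Flink environment that the suite submits the test jobs to.
    @TestEnv
    MiniClusterTestEnvironment flinkEnv = new MiniClusterTestEnvironment();

    // External system under test, here a containerized Kafka cluster.
    @TestExternalSystem
    DefaultContainerizedExternalSystem<KafkaContainer> kafka =
            DefaultContainerizedExternalSystem.builder()
                    .fromContainer(
                            new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:7.2.2")))
                    .build();

    // Delivery semantics the test cases are parameterized with.
    @TestSemantics
    CheckpointingMode[] semantics = new CheckpointingMode[] {CheckpointingMode.EXACTLY_ONCE};

    // Factory producing the DataStreamSinkExternalContext instances used by the suite (hypothetical).
    @TestContext
    MyKafkaSinkExternalContextFactory contexts =
            new MyKafkaSinkExternalContextFactory(kafka.getContainer());
}
```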



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "909c155557a856976df8b5be1729553873ecbd4b",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30114",
       "triggerID" : "909c155557a856976df8b5be1729553873ecbd4b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30369",
       "triggerID" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30469",
       "triggerID" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30474",
       "triggerID" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31339",
       "triggerID" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31375",
       "triggerID" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31405",
       "triggerID" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "triggerType" : "PUSH"
     }, {
       "hash" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31463",
       "triggerID" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0b52c13271c485e5a6776a1aca81c753d0d4bbc4",
       "status" : "CANCELED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31466",
       "triggerID" : "0b52c13271c485e5a6776a1aca81c753d0d4bbc4",
       "triggerType" : "PUSH"
     }, {
       "hash" : "da588603a577a2b26bcf90fcd38653f7ec8a3a74",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31477",
       "triggerID" : "da588603a577a2b26bcf90fcd38653f7ec8a3a74",
       "triggerType" : "PUSH"
     }, {
       "hash" : "64e30250dc0ad0d011b8d3d2fe2f15ce8e30906e",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31505",
       "triggerID" : "64e30250dc0ad0d011b8d3d2fe2f15ce8e30906e",
       "triggerType" : "PUSH"
     }, {
       "hash" : "48dd16592a335d4298e0aa08b9bb89d4cc72994d",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "48dd16592a335d4298e0aa08b9bb89d4cc72994d",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 0b52c13271c485e5a6776a1aca81c753d0d4bbc4 Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31466) 
   * da588603a577a2b26bcf90fcd38653f7ec8a3a74 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31477) 
   * 64e30250dc0ad0d011b8d3d2fe2f15ce8e30906e Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31505) 
   * 48dd16592a335d4298e0aa08b9bb89d4cc72994d UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] leonardBang commented on a change in pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
leonardBang commented on a change in pull request #18496:
URL: https://github.com/apache/flink/pull/18496#discussion_r806729035



##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/testsuites/SinkTestSuiteBase.java
##########
@@ -0,0 +1,629 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.testsuites;
+
+import org.apache.flink.annotation.Experimental;
+import org.apache.flink.api.common.JobID;
+import org.apache.flink.api.common.JobStatus;
+import org.apache.flink.api.common.eventtime.WatermarkStrategy;
+import org.apache.flink.api.common.restartstrategy.RestartStrategies;
+import org.apache.flink.api.common.time.Deadline;
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.api.connector.sink2.Sink;
+import org.apache.flink.api.connector.source.Boundedness;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.connector.testframe.environment.TestEnvironment;
+import org.apache.flink.connector.testframe.environment.TestEnvironmentSettings;
+import org.apache.flink.connector.testframe.external.ExternalSystemDataReader;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkExternalContext;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkV1ExternalContext;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkV2ExternalContext;
+import org.apache.flink.connector.testframe.external.sink.TestingSinkSettings;
+import org.apache.flink.connector.testframe.junit.extensions.ConnectorTestingExtension;
+import org.apache.flink.connector.testframe.junit.extensions.TestCaseInvocationContextProvider;
+import org.apache.flink.connector.testframe.source.FromElementsSource;
+import org.apache.flink.connector.testframe.utils.CollectIteratorAssertions;
+import org.apache.flink.connector.testframe.utils.MetricQuerier;
+import org.apache.flink.core.execution.JobClient;
+import org.apache.flink.core.execution.SavepointFormatType;
+import org.apache.flink.runtime.metrics.MetricNames;
+import org.apache.flink.streaming.api.CheckpointingMode;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.datastream.DataStreamSink;
+import org.apache.flink.streaming.api.datastream.DataStreamSource;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.operators.collect.CollectResultIterator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperatorFactory;
+import org.apache.flink.streaming.api.operators.collect.CollectStreamSink;
+import org.apache.flink.util.TestLoggerExtension;
+
+import org.apache.commons.math3.util.Precision;
+import org.junit.jupiter.api.DisplayName;
+import org.junit.jupiter.api.TestInstance;
+import org.junit.jupiter.api.TestTemplate;
+import org.junit.jupiter.api.extension.ExtendWith;
+import org.opentest4j.TestAbortedException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.time.Duration;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+import java.util.UUID;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.ThreadLocalRandom;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+import static org.apache.flink.connector.testframe.utils.ConnectorTestConstants.DEFAULT_COLLECT_DATA_TIMEOUT;
+import static org.apache.flink.connector.testframe.utils.ConnectorTestConstants.DEFAULT_JOB_STATUS_CHANGE_TIMEOUT;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.terminateJob;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForAllTaskRunning;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForJobStatus;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition;
+import static org.apache.flink.streaming.api.CheckpointingMode.AT_LEAST_ONCE;
+import static org.apache.flink.streaming.api.CheckpointingMode.EXACTLY_ONCE;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+import static org.assertj.core.api.AssertionsForClassTypes.assertThat;
+
+/**
+ * Base class for sink test suite.
+ *
+ * <p>All cases should have descriptive JavaDoc, including:
+ *
+ * <ul>
+ *   <li>What's the purpose of this case
+ *   <li>Simple description of how this case works
+ *   <li>Condition to fulfill in order to pass this case
+ *   <li>Requirement of running this case
+ * </ul>
+ */
+@ExtendWith({
+    ConnectorTestingExtension.class,
+    TestLoggerExtension.class,
+    TestCaseInvocationContextProvider.class
+})
+@TestInstance(TestInstance.Lifecycle.PER_CLASS)
+@Experimental
+public abstract class SinkTestSuiteBase<T extends Comparable<T>> {
+    private static final Logger LOG = LoggerFactory.getLogger(SinkTestSuiteBase.class);
+
+    // ----------------------------- Basic test cases ---------------------------------
+
+    /**
+     * Test connector data stream sink.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data,
+     * and write it to the sink with a Flink job.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink needs to equal the
+     * number of generated test records, and the records in the sink are compared with the test
+     * data according to the configured delivery semantic. There is no requirement on record order.
+     */
+    @TestTemplate
+    @DisplayName("Test data stream sink")
+    public void testBasicSink(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        TestingSinkSettings sinkSettings = getTestingSinkSettings(semantic);
+        final List<T> testRecords = generateTestData(sinkSettings, externalContext);
+
+        // Build and execute Flink job
+        StreamExecutionEnvironment execEnv =
+                testEnv.createExecutionEnvironment(
+                        TestEnvironmentSettings.builder()
+                                .setConnectorJarPaths(externalContext.getConnectorJarPaths())
+                                .build());
+        execEnv.enableCheckpointing(50);
+        DataStream<T> dataStream =
+                execEnv.fromCollection(testRecords)
+                        .name("sourceInSinkTest")
+                        .setParallelism(1)
+                        .returns(externalContext.getProducedType());
+        tryCreateSink(dataStream, externalContext, sinkSettings)
+                .setParallelism(1)
+                .name("sinkInSinkTest");
+        final JobClient jobClient = execEnv.executeAsync("DataStream Sink Test");
+
+        waitForJobStatus(
+                jobClient,
+                Collections.singletonList(JobStatus.FINISHED),
+                Deadline.fromNow(DEFAULT_JOB_STATUS_CHANGE_TIMEOUT));
+
+        // Check test result
+        checkResultWithSemantic(
+                externalContext.createSinkDataReader(sinkSettings), testRecords, semantic);
+    }
+
+    /**
+     * Test connector sink restart from a completed savepoint with the same parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data,
+     * and first write half of the records to the sink with a Flink job running at parallelism 2.
+     * The job is then stopped with a savepoint and restarted from that savepoint with the same
+     * parallelism. Once the restarted job is running, the other half is written to the sink and
+     * the result is compared.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink needs to equal the
+     * number of generated test records, and the records in the sink are compared with the test
+     * data according to the configured delivery semantic. There is no requirement on record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting from a savepoint")
+    public void testStartFromSavepoint(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 2, 2);
+    }
+
+    /**
+     * Test connector sink restart from a completed savepoint with a higher parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data,
+     * and first write half of the records to the sink with a Flink job running at parallelism 2.
+     * The job is then stopped with a savepoint and restarted from that savepoint with a higher
+     * parallelism of 4. Once the restarted job is running, the other half is written to the sink
+     * and the result is compared.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink needs to equal the
+     * number of generated test records, and the records in the sink are compared with the test
+     * data according to the configured delivery semantic. There is no requirement on record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting with a higher parallelism")
+    public void testScaleUp(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 2, 4);
+    }
+
+    /**
+     * Test connector sink restart from a completed savepoint with a lower parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data,
+     * and first write half of the records to the sink with a Flink job running at parallelism 4.
+     * The job is then stopped with a savepoint and restarted from that savepoint with a lower
+     * parallelism of 2. Once the restarted job is running, the other half is written to the sink
+     * and the result is compared.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink needs to equal the
+     * number of generated test records, and the records in the sink are compared with the test
+     * data according to the configured delivery semantic. There is no requirement on record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting with a lower parallelism")
+    public void testScaleDown(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 4, 2);
+    }
+
+    private void restartFromSavepoint(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic,
+            final int beforeParallelism,
+            final int afterParallelism)
+            throws Exception {
+        // Step 1: Preparation
+        TestingSinkSettings sinkSettings = getTestingSinkSettings(semantic);
+        final StreamExecutionEnvironment execEnv =
+                testEnv.createExecutionEnvironment(
+                        TestEnvironmentSettings.builder()
+                                .setConnectorJarPaths(externalContext.getConnectorJarPaths())
+                                .build());
+        execEnv.setRestartStrategy(RestartStrategies.noRestart());
+
+        // Step 2: Generate test data
+        final List<T> testRecords = generateTestData(sinkSettings, externalContext);
+
+        // Step 3: Build and execute Flink job
+        int numBeforeSuccess = testRecords.size() / 2;
+        DataStreamSource<T> source =
+                execEnv.fromSource(
+                                new FromElementsSource<>(
+                                        Boundedness.CONTINUOUS_UNBOUNDED,
+                                        testRecords,
+                                        numBeforeSuccess),
+                                WatermarkStrategy.noWatermarks(),
+                                "beforeRestartSource")
+                        .setParallelism(1);
+
+        DataStream<T> dataStream = source.returns(externalContext.getProducedType());
+        tryCreateSink(dataStream, externalContext, sinkSettings)
+                .name("Sink restart test")
+                .setParallelism(beforeParallelism);
+
+        /**
+         * The job should stop after consuming a specified number of records. In order to know when
+         * that number of records has been consumed, a collect sink needs to be watched.
+         */
+        CollectResultIterator<T> iterator = addCollectSink(source);
+        final JobClient jobClient = execEnv.executeAsync("Restart Test");
+        iterator.setJobClient(jobClient);
+
+        // Step 4: Wait for the expected result and stop Flink job with a savepoint
+        String savepointDir;
+        try {
+            final MetricQuerier queryRestClient = new MetricQuerier(new Configuration());

Review comment:
       makes sense to me @ruanhang1993 

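       For readers following the thread: once the collect sink has seen the expected number of records,
       the stop-with-savepoint step in this test reduces to a call along the lines of the sketch below.
       `JobClient#stopWithSavepoint(boolean, String, SavepointFormatType)` is an existing Flink API;
       `getCheckpointUri()` on the test environment and the timeout value are assumptions for
       illustration, not necessarily the PR's exact code:

```java
// Sketch, not necessarily the PR's exact code: stop the running job with a savepoint and
// remember the savepoint path so the job can be resumed with a different parallelism.
// The first argument controls whether event time is advanced to the end before stopping.
String savepointPath =
        jobClient
                .stopWithSavepoint(true, testEnv.getCheckpointUri(), SavepointFormatType.CANONICAL)
                .get(30, TimeUnit.SECONDS);
```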



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] ruanhang1993 commented on a change in pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
ruanhang1993 commented on a change in pull request #18496:
URL: https://github.com/apache/flink/pull/18496#discussion_r806639211



##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/external/sink/DataStreamSinkV1ExternalContext.java
##########
@@ -0,0 +1,39 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.external.sink;
+
+import org.apache.flink.annotation.Experimental;
+import org.apache.flink.api.connector.sink.Sink;
+
+/**
+ * External context for DataStream sinks that use the version 1 Sink API.

Review comment:
       No, this PR only adds the tests for the Kafka connector.

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/source/FromLimitedElementsSourceReader.java
##########
@@ -0,0 +1,96 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.source;
+
+import org.apache.flink.api.connector.source.Boundedness;
+import org.apache.flink.api.connector.source.ReaderOutput;
+import org.apache.flink.api.connector.source.SourceReader;
+import org.apache.flink.api.connector.source.SourceReaderContext;
+import org.apache.flink.connector.testframe.source.split.FromElementsSplit;
+import org.apache.flink.core.io.InputStatus;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.Arrays;
+import java.util.List;
+
+import static org.apache.flink.core.io.InputStatus.MORE_AVAILABLE;
+
+/**
+ * A {@link SourceReader} implementation that reads data from a list. This source reader stops
+ * reading at the given position and waits until a checkpoint or savepoint is triggered.
+ *
+ * <p>This source reader is used when {@link FromElementsSource} creates readers with a fixed
+ * position.
+ */
+public class FromLimitedElementsSourceReader<T> extends FromElementsSourceReader<T> {

Review comment:
       fixed

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/testsuites/SinkTestSuiteBase.java
##########
@@ -0,0 +1,629 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.testsuites;
+
+import org.apache.flink.annotation.Experimental;
+import org.apache.flink.api.common.JobID;
+import org.apache.flink.api.common.JobStatus;
+import org.apache.flink.api.common.eventtime.WatermarkStrategy;
+import org.apache.flink.api.common.restartstrategy.RestartStrategies;
+import org.apache.flink.api.common.time.Deadline;
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.api.connector.sink2.Sink;
+import org.apache.flink.api.connector.source.Boundedness;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.connector.testframe.environment.TestEnvironment;
+import org.apache.flink.connector.testframe.environment.TestEnvironmentSettings;
+import org.apache.flink.connector.testframe.external.ExternalSystemDataReader;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkExternalContext;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkV1ExternalContext;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkV2ExternalContext;
+import org.apache.flink.connector.testframe.external.sink.TestingSinkSettings;
+import org.apache.flink.connector.testframe.junit.extensions.ConnectorTestingExtension;
+import org.apache.flink.connector.testframe.junit.extensions.TestCaseInvocationContextProvider;
+import org.apache.flink.connector.testframe.source.FromElementsSource;
+import org.apache.flink.connector.testframe.utils.CollectIteratorAssertions;
+import org.apache.flink.connector.testframe.utils.MetricQuerier;
+import org.apache.flink.core.execution.JobClient;
+import org.apache.flink.core.execution.SavepointFormatType;
+import org.apache.flink.runtime.metrics.MetricNames;
+import org.apache.flink.streaming.api.CheckpointingMode;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.datastream.DataStreamSink;
+import org.apache.flink.streaming.api.datastream.DataStreamSource;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.operators.collect.CollectResultIterator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperatorFactory;
+import org.apache.flink.streaming.api.operators.collect.CollectStreamSink;
+import org.apache.flink.util.TestLoggerExtension;
+
+import org.apache.commons.math3.util.Precision;
+import org.junit.jupiter.api.DisplayName;
+import org.junit.jupiter.api.TestInstance;
+import org.junit.jupiter.api.TestTemplate;
+import org.junit.jupiter.api.extension.ExtendWith;
+import org.opentest4j.TestAbortedException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.time.Duration;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+import java.util.UUID;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.ThreadLocalRandom;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+import static org.apache.flink.connector.testframe.utils.ConnectorTestConstants.DEFAULT_COLLECT_DATA_TIMEOUT;
+import static org.apache.flink.connector.testframe.utils.ConnectorTestConstants.DEFAULT_JOB_STATUS_CHANGE_TIMEOUT;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.terminateJob;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForAllTaskRunning;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForJobStatus;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition;
+import static org.apache.flink.streaming.api.CheckpointingMode.AT_LEAST_ONCE;
+import static org.apache.flink.streaming.api.CheckpointingMode.EXACTLY_ONCE;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+import static org.assertj.core.api.AssertionsForClassTypes.assertThat;
+
+/**
+ * Base class for sink test suite.
+ *
+ * <p>All cases should have descriptive JavaDoc, including:
+ *
+ * <ul>
+ *   <li>What's the purpose of this case
+ *   <li>Simple description of how this case works
+ *   <li>Condition to fulfill in order to pass this case
+ *   <li>Requirement of running this case
+ * </ul>
+ */
+@ExtendWith({
+    ConnectorTestingExtension.class,
+    TestLoggerExtension.class,
+    TestCaseInvocationContextProvider.class
+})
+@TestInstance(TestInstance.Lifecycle.PER_CLASS)
+@Experimental
+public abstract class SinkTestSuiteBase<T extends Comparable<T>> {
+    private static final Logger LOG = LoggerFactory.getLogger(SinkTestSuiteBase.class);
+
+    // ----------------------------- Basic test cases ---------------------------------
+
+    /**
+     * Test connector data stream sink.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data,
+     * and write it to the sink with a Flink job.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink needs to equal the
+     * number of generated test records, and the records in the sink are compared with the test
+     * data according to the configured delivery semantic. There is no requirement on record order.
+     */
+    @TestTemplate
+    @DisplayName("Test data stream sink")
+    public void testBasicSink(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        TestingSinkSettings sinkSettings = getTestingSinkSettings(semantic);
+        final List<T> testRecords = generateTestData(sinkSettings, externalContext);
+
+        // Build and execute Flink job
+        StreamExecutionEnvironment execEnv =
+                testEnv.createExecutionEnvironment(
+                        TestEnvironmentSettings.builder()
+                                .setConnectorJarPaths(externalContext.getConnectorJarPaths())
+                                .build());
+        execEnv.enableCheckpointing(50);
+        DataStream<T> dataStream =
+                execEnv.fromCollection(testRecords)
+                        .name("sourceInSinkTest")
+                        .setParallelism(1)
+                        .returns(externalContext.getProducedType());
+        tryCreateSink(dataStream, externalContext, sinkSettings)
+                .setParallelism(1)
+                .name("sinkInSinkTest");
+        final JobClient jobClient = execEnv.executeAsync("DataStream Sink Test");
+
+        waitForJobStatus(
+                jobClient,
+                Collections.singletonList(JobStatus.FINISHED),
+                Deadline.fromNow(DEFAULT_JOB_STATUS_CHANGE_TIMEOUT));
+
+        // Check test result
+        checkResultWithSemantic(
+                externalContext.createSinkDataReader(sinkSettings), testRecords, semantic);
+    }
+
+    /**
+     * Test connector sink restart from a completed savepoint with the same parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data,
+     * and first write half of the records to the sink with a Flink job running at parallelism 2.
+     * The job is then stopped with a savepoint and restarted from that savepoint with the same
+     * parallelism. Once the restarted job is running, the other half is written to the sink and
+     * the result is compared.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink needs to equal the
+     * number of generated test records, and the records in the sink are compared with the test
+     * data according to the configured delivery semantic. There is no requirement on record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting from a savepoint")
+    public void testStartFromSavepoint(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 2, 2);
+    }
+
+    /**
+     * Test connector sink restart from a completed savepoint with a higher parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data,
+     * and first write half of the records to the sink with a Flink job running at parallelism 2.
+     * The job is then stopped with a savepoint and restarted from that savepoint with a higher
+     * parallelism of 4. Once the restarted job is running, the other half is written to the sink
+     * and the result is compared.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink needs to equal the
+     * number of generated test records, and the records in the sink are compared with the test
+     * data according to the configured delivery semantic. There is no requirement on record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting with a higher parallelism")
+    public void testScaleUp(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 2, 4);
+    }
+
+    /**
+     * Test connector sink restart from a completed savepoint with a lower parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data,
+     * and first write half of the records to the sink with a Flink job running at parallelism 4.
+     * The job is then stopped with a savepoint and restarted from that savepoint with a lower
+     * parallelism of 2. Once the restarted job is running, the other half is written to the sink
+     * and the result is compared.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink needs to equal the
+     * number of generated test records, and the records in the sink are compared with the test
+     * data according to the configured delivery semantic. There is no requirement on record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting with a lower parallelism")
+    public void testScaleDown(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 4, 2);
+    }
+
+    private void restartFromSavepoint(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic,
+            final int beforeParallelism,
+            final int afterParallelism)
+            throws Exception {
+        // Step 1: Preparation
+        TestingSinkSettings sinkSettings = getTestingSinkSettings(semantic);
+        final StreamExecutionEnvironment execEnv =
+                testEnv.createExecutionEnvironment(
+                        TestEnvironmentSettings.builder()
+                                .setConnectorJarPaths(externalContext.getConnectorJarPaths())
+                                .build());
+        execEnv.setRestartStrategy(RestartStrategies.noRestart());
+
+        // Step 2: Generate test data
+        final List<T> testRecords = generateTestData(sinkSettings, externalContext);
+
+        // Step 3: Build and execute Flink job
+        int numBeforeSuccess = testRecords.size() / 2;
+        DataStreamSource<T> source =
+                execEnv.fromSource(
+                                new FromElementsSource<>(
+                                        Boundedness.CONTINUOUS_UNBOUNDED,
+                                        testRecords,
+                                        numBeforeSuccess),
+                                WatermarkStrategy.noWatermarks(),
+                                "beforeRestartSource")
+                        .setParallelism(1);
+
+        DataStream<T> dataStream = source.returns(externalContext.getProducedType());
+        tryCreateSink(dataStream, externalContext, sinkSettings)
+                .name("Sink restart test")
+                .setParallelism(beforeParallelism);
+
+        /**
+         * The job should stop after consuming a specified number of records. In order to know when
+         * that number of records has been consumed, a collect sink needs to be watched.
+         */
+        CollectResultIterator<T> iterator = addCollectSink(source);
+        final JobClient jobClient = execEnv.executeAsync("Restart Test");
+        iterator.setJobClient(jobClient);
+
+        // Step 4: Wait for the expected result and stop Flink job with a savepoint
+        String savepointDir;
+        try {
+            final MetricQuerier queryRestClient = new MetricQuerier(new Configuration());

Review comment:
       fixed

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/source/FromElementsSource.java
##########
@@ -0,0 +1,104 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.source;
+
+import org.apache.flink.api.connector.source.Boundedness;
+import org.apache.flink.api.connector.source.Source;
+import org.apache.flink.api.connector.source.SourceReader;
+import org.apache.flink.api.connector.source.SourceReaderContext;
+import org.apache.flink.api.connector.source.SplitEnumerator;
+import org.apache.flink.api.connector.source.SplitEnumeratorContext;
+import org.apache.flink.connector.testframe.source.enumerator.NoOpEnumState;
+import org.apache.flink.connector.testframe.source.enumerator.NoOpEnumStateSerializer;
+import org.apache.flink.connector.testframe.source.enumerator.NoOpEnumerator;
+import org.apache.flink.connector.testframe.source.split.FromElementsSplit;
+import org.apache.flink.connector.testframe.source.split.FromElementsSplitSerializer;
+import org.apache.flink.core.io.SimpleVersionedSerializer;
+import org.apache.flink.util.Preconditions;
+
+import java.util.List;
+
+/**
+ * A {@link Source} implementation that reads data from a list and stops reading at a fixed
+ * position. The source then waits until a checkpoint or savepoint is triggered, which makes it
+ * useful for connector tests.
+ *
+ * <p>Note: The parallelism of this source must be 1.
+ */
+public class FromElementsSource<OUT> implements Source<OUT, FromElementsSplit, NoOpEnumState> {

Review comment:
       This will need another PR.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] ruanhang1993 commented on a change in pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
ruanhang1993 commented on a change in pull request #18496:
URL: https://github.com/apache/flink/pull/18496#discussion_r806806902



##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/source/FromElementsSource.java
##########
@@ -0,0 +1,104 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.source;
+
+import org.apache.flink.api.connector.source.Boundedness;
+import org.apache.flink.api.connector.source.Source;
+import org.apache.flink.api.connector.source.SourceReader;
+import org.apache.flink.api.connector.source.SourceReaderContext;
+import org.apache.flink.api.connector.source.SplitEnumerator;
+import org.apache.flink.api.connector.source.SplitEnumeratorContext;
+import org.apache.flink.connector.testframe.source.enumerator.NoOpEnumState;
+import org.apache.flink.connector.testframe.source.enumerator.NoOpEnumStateSerializer;
+import org.apache.flink.connector.testframe.source.enumerator.NoOpEnumerator;
+import org.apache.flink.connector.testframe.source.split.FromElementsSplit;
+import org.apache.flink.connector.testframe.source.split.FromElementsSplitSerializer;
+import org.apache.flink.core.io.SimpleVersionedSerializer;
+import org.apache.flink.util.Preconditions;
+
+import java.util.List;
+
+/**
+ * A {@link Source} implementation that reads data from a list and stops reading at a fixed
+ * position. The source then waits until a checkpoint or savepoint is triggered, which makes it
+ * useful for connector tests.
+ *
+ * <p>Note: The parallelism of this source must be 1.
+ */
+public class FromElementsSource<OUT> implements Source<OUT, FromElementsSplit, NoOpEnumState> {

Review comment:
       This will need another PR.

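       Side note for readers: the "parallelism must be 1" requirement called out in the javadoc is the
       kind of constraint that is usually enforced with a fail-fast check when the enumerator or reader
       is created. Below is a minimal sketch of such a check using only existing Flink APIs
       (`SplitEnumeratorContext#currentParallelism`, `Preconditions#checkArgument`); whether
       `FromElementsSource` does exactly this is not visible in this thread:

```java
// Sketch only: fail fast if the source is accidentally run with parallelism > 1.
static void checkSingleParallelism(SplitEnumeratorContext<?> enumContext) {
    Preconditions.checkArgument(
            enumContext.currentParallelism() == 1,
            "This source must run with parallelism 1, but the current parallelism is %s.",
            enumContext.currentParallelism());
}
```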



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "909c155557a856976df8b5be1729553873ecbd4b",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30114",
       "triggerID" : "909c155557a856976df8b5be1729553873ecbd4b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30369",
       "triggerID" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30469",
       "triggerID" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "status" : "SUCCESS",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30474",
       "triggerID" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * d5e64bbb6debad7940d7ca05729ce57628127225 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30474) 
   * 0034fb25f7fbbbcf302fb18626d7983f32732ca5 UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "909c155557a856976df8b5be1729553873ecbd4b",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30114",
       "triggerID" : "909c155557a856976df8b5be1729553873ecbd4b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30369",
       "triggerID" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30469",
       "triggerID" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30474",
       "triggerID" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31339",
       "triggerID" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31375",
       "triggerID" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31405",
       "triggerID" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * bc9871b19a43fd0b99e1b53336534d59612a119e Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31405) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "909c155557a856976df8b5be1729553873ecbd4b",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30114",
       "triggerID" : "909c155557a856976df8b5be1729553873ecbd4b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30369",
       "triggerID" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30469",
       "triggerID" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30474",
       "triggerID" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31339",
       "triggerID" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31375",
       "triggerID" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31405",
       "triggerID" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "triggerType" : "PUSH"
     }, {
       "hash" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31463",
       "triggerID" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0b52c13271c485e5a6776a1aca81c753d0d4bbc4",
       "status" : "CANCELED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31466",
       "triggerID" : "0b52c13271c485e5a6776a1aca81c753d0d4bbc4",
       "triggerType" : "PUSH"
     }, {
       "hash" : "da588603a577a2b26bcf90fcd38653f7ec8a3a74",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31477",
       "triggerID" : "da588603a577a2b26bcf90fcd38653f7ec8a3a74",
       "triggerType" : "PUSH"
     }, {
       "hash" : "64e30250dc0ad0d011b8d3d2fe2f15ce8e30906e",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31505",
       "triggerID" : "64e30250dc0ad0d011b8d3d2fe2f15ce8e30906e",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 0b52c13271c485e5a6776a1aca81c753d0d4bbc4 Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31466) 
   * da588603a577a2b26bcf90fcd38653f7ec8a3a74 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31477) 
   * 64e30250dc0ad0d011b8d3d2fe2f15ce8e30906e Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31505) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "909c155557a856976df8b5be1729553873ecbd4b",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30114",
       "triggerID" : "909c155557a856976df8b5be1729553873ecbd4b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30369",
       "triggerID" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30469",
       "triggerID" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30474",
       "triggerID" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31339",
       "triggerID" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31375",
       "triggerID" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31405",
       "triggerID" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "triggerType" : "PUSH"
     }, {
       "hash" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31463",
       "triggerID" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0b52c13271c485e5a6776a1aca81c753d0d4bbc4",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31466",
       "triggerID" : "0b52c13271c485e5a6776a1aca81c753d0d4bbc4",
       "triggerType" : "PUSH"
     }, {
       "hash" : "da588603a577a2b26bcf90fcd38653f7ec8a3a74",
       "status" : "CANCELED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31477",
       "triggerID" : "da588603a577a2b26bcf90fcd38653f7ec8a3a74",
       "triggerType" : "PUSH"
     }, {
       "hash" : "64e30250dc0ad0d011b8d3d2fe2f15ce8e30906e",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31505",
       "triggerID" : "64e30250dc0ad0d011b8d3d2fe2f15ce8e30906e",
       "triggerType" : "PUSH"
     }, {
       "hash" : "c1619577228a3fde9684f2c85965d6d1f76addbf",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31518",
       "triggerID" : "c1619577228a3fde9684f2c85965d6d1f76addbf",
       "triggerType" : "PUSH"
     }, {
       "hash" : "1076a64c9f916fe9d8a23d38aafbd1f359b038d9",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31529",
       "triggerID" : "1076a64c9f916fe9d8a23d38aafbd1f359b038d9",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * da588603a577a2b26bcf90fcd38653f7ec8a3a74 Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31477) 
   * 64e30250dc0ad0d011b8d3d2fe2f15ce8e30906e Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31505) 
   * c1619577228a3fde9684f2c85965d6d1f76addbf Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31518) 
   * 1076a64c9f916fe9d8a23d38aafbd1f359b038d9 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31529) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "909c155557a856976df8b5be1729553873ecbd4b",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30114",
       "triggerID" : "909c155557a856976df8b5be1729553873ecbd4b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30369",
       "triggerID" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30469",
       "triggerID" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30474",
       "triggerID" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31339",
       "triggerID" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31375",
       "triggerID" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31405",
       "triggerID" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "triggerType" : "PUSH"
     }, {
       "hash" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31463",
       "triggerID" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0b52c13271c485e5a6776a1aca81c753d0d4bbc4",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "0b52c13271c485e5a6776a1aca81c753d0d4bbc4",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * bc9871b19a43fd0b99e1b53336534d59612a119e Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31405) 
   * 35d869286d16c6d306c9059cf5d3af339934c229 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31463) 
   * 0b52c13271c485e5a6776a1aca81c753d0d4bbc4 UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "909c155557a856976df8b5be1729553873ecbd4b",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30114",
       "triggerID" : "909c155557a856976df8b5be1729553873ecbd4b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30369",
       "triggerID" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30469",
       "triggerID" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30474",
       "triggerID" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31339",
       "triggerID" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31375",
       "triggerID" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31405",
       "triggerID" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "triggerType" : "PUSH"
     }, {
       "hash" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31463",
       "triggerID" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * bc9871b19a43fd0b99e1b53336534d59612a119e Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31405) 
   * 35d869286d16c6d306c9059cf5d3af339934c229 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31463) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] leonardBang commented on a change in pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
leonardBang commented on a change in pull request #18496:
URL: https://github.com/apache/flink/pull/18496#discussion_r805352388



##########
File path: flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/connector/kafka/sink/testutils/KafkaDataReader.java
##########
@@ -0,0 +1,67 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.kafka.sink.testutils;
+
+import org.apache.flink.connector.testframe.external.ExternalSystemDataReader;
+
+import org.apache.kafka.clients.consumer.ConsumerRecord;
+import org.apache.kafka.clients.consumer.ConsumerRecords;
+import org.apache.kafka.clients.consumer.KafkaConsumer;
+import org.apache.kafka.common.TopicPartition;
+import org.apache.kafka.common.errors.WakeupException;
+
+import java.time.Duration;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.Iterator;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Properties;
+
+/** Kafka dataStream data reader. */
+public class KafkaDataReader implements ExternalSystemDataReader<String> {
+    private final KafkaConsumer<String, String> consumer;
+
+    public KafkaDataReader(Properties properties, Collection<TopicPartition> partitions) {
+        this.consumer = new KafkaConsumer<>(properties);
+        consumer.assign(partitions);
+        consumer.seekToBeginning(partitions);
+    }
+
+    @Override
+    public List<String> poll(Duration timeout) {
+        List<String> result = new LinkedList<>();
+        ConsumerRecords<String, String> consumerRecords;
+        try {
+            consumerRecords = consumer.poll(timeout);
+        } catch (WakeupException we) {
+            return Collections.emptyList();
+        }
+        Iterator<ConsumerRecord<String, String>> iterator = consumerRecords.iterator();
+        while (iterator.hasNext()) {
+            result.add(iterator.next().value());
+        }
+        return result;
+    }
+
+    @Override
+    public void close() throws Exception {
+        consumer.close();

Review comment:
       hint: check for null before releasing the resource
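
       A minimal sketch of the null-guarded close the hint points at (illustrative only; the `consumer` field name is taken from the diff above, and the final shape is up to the PR author):

```java
@Override
public void close() throws Exception {
    // Only close the consumer if it was actually created; this guards against a
    // partially constructed reader failing again during cleanup.
    if (consumer != null) {
        consumer.close();
    }
}
```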

##########
File path: flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/connector/kafka/sink/testutils/KafkaSinkExternalContext.java
##########
@@ -0,0 +1,271 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.kafka.sink.testutils;
+
+import org.apache.flink.api.common.serialization.SimpleStringSchema;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.connector.sink2.Sink;
+import org.apache.flink.connector.base.DeliveryGuarantee;
+import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
+import org.apache.flink.connector.kafka.sink.KafkaSink;
+import org.apache.flink.connector.kafka.sink.KafkaSinkBuilder;
+import org.apache.flink.connector.testframe.external.ExternalSystemDataReader;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkV2ExternalContext;
+import org.apache.flink.connector.testframe.external.sink.TestingSinkSettings;
+import org.apache.flink.streaming.api.CheckpointingMode;
+
+import org.apache.commons.lang3.exception.ExceptionUtils;
+import org.apache.kafka.clients.admin.AdminClient;
+import org.apache.kafka.clients.admin.AdminClientConfig;
+import org.apache.kafka.clients.admin.NewTopic;
+import org.apache.kafka.clients.admin.TopicDescription;
+import org.apache.kafka.clients.consumer.ConsumerConfig;
+import org.apache.kafka.clients.producer.ProducerConfig;
+import org.apache.kafka.common.TopicPartition;
+import org.apache.kafka.common.TopicPartitionInfo;
+import org.apache.kafka.common.errors.UnknownTopicOrPartitionException;
+import org.apache.kafka.common.serialization.StringDeserializer;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Properties;
+import java.util.Random;
+import java.util.Set;
+import java.util.concurrent.ThreadLocalRandom;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.flink.streaming.api.CheckpointingMode.EXACTLY_ONCE;
+
+/**
+ * A Kafka external context that will create only one topic and use partitions in that topic as
+ * source splits.
+ */
+public class KafkaSinkExternalContext implements DataStreamSinkV2ExternalContext<String> {
+
+    private static final Logger LOG = LoggerFactory.getLogger(KafkaSinkExternalContext.class);
+
+    private static final String TOPIC_NAME_PREFIX = "kafka-single-topic";
+    private static final int DEFAULT_TIMEOUT = 30;

Review comment:
       ```suggestion
       private static final long DEFAULT_TIMEOUT = 30L;
   ```

##########
File path: flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/connector/kafka/sink/testutils/KafkaSinkExternalContext.java
##########
@@ -0,0 +1,271 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.kafka.sink.testutils;
+
+import org.apache.flink.api.common.serialization.SimpleStringSchema;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.connector.sink2.Sink;
+import org.apache.flink.connector.base.DeliveryGuarantee;
+import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
+import org.apache.flink.connector.kafka.sink.KafkaSink;
+import org.apache.flink.connector.kafka.sink.KafkaSinkBuilder;
+import org.apache.flink.connector.testframe.external.ExternalSystemDataReader;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkV2ExternalContext;
+import org.apache.flink.connector.testframe.external.sink.TestingSinkSettings;
+import org.apache.flink.streaming.api.CheckpointingMode;
+
+import org.apache.commons.lang3.exception.ExceptionUtils;
+import org.apache.kafka.clients.admin.AdminClient;
+import org.apache.kafka.clients.admin.AdminClientConfig;
+import org.apache.kafka.clients.admin.NewTopic;
+import org.apache.kafka.clients.admin.TopicDescription;
+import org.apache.kafka.clients.consumer.ConsumerConfig;
+import org.apache.kafka.clients.producer.ProducerConfig;
+import org.apache.kafka.common.TopicPartition;
+import org.apache.kafka.common.TopicPartitionInfo;
+import org.apache.kafka.common.errors.UnknownTopicOrPartitionException;
+import org.apache.kafka.common.serialization.StringDeserializer;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Properties;
+import java.util.Random;
+import java.util.Set;
+import java.util.concurrent.ThreadLocalRandom;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.flink.streaming.api.CheckpointingMode.EXACTLY_ONCE;
+
+/**
+ * A Kafka external context that will create only one topic and use partitions in that topic as
+ * source splits.
+ */
+public class KafkaSinkExternalContext implements DataStreamSinkV2ExternalContext<String> {
+
+    private static final Logger LOG = LoggerFactory.getLogger(KafkaSinkExternalContext.class);
+
+    private static final String TOPIC_NAME_PREFIX = "kafka-single-topic";
+    private static final int DEFAULT_TIMEOUT = 30;
+    private static final int NUM_RECORDS_UPPER_BOUND = 500;
+    private static final int NUM_RECORDS_LOWER_BOUND = 100;
+
+    protected String bootstrapServers;
+    protected final String topicName;
+
+    private final List<ExternalSystemDataReader<String>> readers = new ArrayList<>();
+
+    protected int numSplits = 0;
+
+    private List<URL> connectorJarPaths;
+
+    protected final AdminClient kafkaAdminClient;
+
+    public KafkaSinkExternalContext(String bootstrapServers, List<URL> connectorJarPaths) {
+        this.bootstrapServers = bootstrapServers;
+        this.connectorJarPaths = connectorJarPaths;
+        this.topicName =
+                TOPIC_NAME_PREFIX + "-" + ThreadLocalRandom.current().nextLong(Long.MAX_VALUE);
+        kafkaAdminClient = createAdminClient();
+    }
+
+    protected void createTopic(String topicName, int numPartitions, short replicationFactor) {
+        LOG.debug(
+                "Creating new Kafka topic {} with {} partitions and {} replicas",
+                topicName,
+                numPartitions,
+                replicationFactor);
+        NewTopic newTopic = new NewTopic(topicName, numPartitions, replicationFactor);
+        try {
+            kafkaAdminClient
+                    .createTopics(Collections.singletonList(newTopic))
+                    .all()
+                    .get(DEFAULT_TIMEOUT, TimeUnit.SECONDS);
+        } catch (Exception e) {
+            throw new RuntimeException(String.format("Cannot create topic '%s'", topicName), e);
+        }
+    }
+
+    protected void deleteTopic(String topicName) {
+        LOG.debug("Deleting Kafka topic {}", topicName);
+        try {
+            kafkaAdminClient
+                    .deleteTopics(Collections.singletonList(topicName))
+                    .all()
+                    .get(DEFAULT_TIMEOUT, TimeUnit.SECONDS);
+        } catch (Exception e) {
+            if (ExceptionUtils.getRootCause(e) instanceof UnknownTopicOrPartitionException) {
+                throw new RuntimeException(String.format("Cannot delete topic '%s'", topicName), e);
+            }
+        }
+    }
+
+    private AdminClient createAdminClient() {
+        Properties config = new Properties();

Review comment:
       ```suggestion
           final Properties config = new Properties();
   ```

##########
File path: flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/connector/kafka/sink/testutils/KafkaSinkExternalContext.java
##########
@@ -0,0 +1,271 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.kafka.sink.testutils;
+
+import org.apache.flink.api.common.serialization.SimpleStringSchema;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.connector.sink2.Sink;
+import org.apache.flink.connector.base.DeliveryGuarantee;
+import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
+import org.apache.flink.connector.kafka.sink.KafkaSink;
+import org.apache.flink.connector.kafka.sink.KafkaSinkBuilder;
+import org.apache.flink.connector.testframe.external.ExternalSystemDataReader;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkV2ExternalContext;
+import org.apache.flink.connector.testframe.external.sink.TestingSinkSettings;
+import org.apache.flink.streaming.api.CheckpointingMode;
+
+import org.apache.commons.lang3.exception.ExceptionUtils;
+import org.apache.kafka.clients.admin.AdminClient;
+import org.apache.kafka.clients.admin.AdminClientConfig;
+import org.apache.kafka.clients.admin.NewTopic;
+import org.apache.kafka.clients.admin.TopicDescription;
+import org.apache.kafka.clients.consumer.ConsumerConfig;
+import org.apache.kafka.clients.producer.ProducerConfig;
+import org.apache.kafka.common.TopicPartition;
+import org.apache.kafka.common.TopicPartitionInfo;
+import org.apache.kafka.common.errors.UnknownTopicOrPartitionException;
+import org.apache.kafka.common.serialization.StringDeserializer;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Properties;
+import java.util.Random;
+import java.util.Set;
+import java.util.concurrent.ThreadLocalRandom;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.flink.streaming.api.CheckpointingMode.EXACTLY_ONCE;
+
+/**
+ * A Kafka external context that will create only one topic and use partitions in that topic as
+ * source splits.
+ */
+public class KafkaSinkExternalContext implements DataStreamSinkV2ExternalContext<String> {
+
+    private static final Logger LOG = LoggerFactory.getLogger(KafkaSinkExternalContext.class);
+
+    private static final String TOPIC_NAME_PREFIX = "kafka-single-topic";
+    private static final int DEFAULT_TIMEOUT = 30;
+    private static final int NUM_RECORDS_UPPER_BOUND = 500;
+    private static final int NUM_RECORDS_LOWER_BOUND = 100;
+
+    protected String bootstrapServers;
+    protected final String topicName;
+
+    private final List<ExternalSystemDataReader<String>> readers = new ArrayList<>();
+
+    protected int numSplits = 0;
+
+    private List<URL> connectorJarPaths;
+
+    protected final AdminClient kafkaAdminClient;
+
+    public KafkaSinkExternalContext(String bootstrapServers, List<URL> connectorJarPaths) {
+        this.bootstrapServers = bootstrapServers;
+        this.connectorJarPaths = connectorJarPaths;
+        this.topicName =
+                TOPIC_NAME_PREFIX + "-" + ThreadLocalRandom.current().nextLong(Long.MAX_VALUE);
+        kafkaAdminClient = createAdminClient();
+    }
+
+    protected void createTopic(String topicName, int numPartitions, short replicationFactor) {
+        LOG.debug(
+                "Creating new Kafka topic {} with {} partitions and {} replicas",
+                topicName,
+                numPartitions,
+                replicationFactor);
+        NewTopic newTopic = new NewTopic(topicName, numPartitions, replicationFactor);
+        try {
+            kafkaAdminClient
+                    .createTopics(Collections.singletonList(newTopic))
+                    .all()
+                    .get(DEFAULT_TIMEOUT, TimeUnit.SECONDS);
+        } catch (Exception e) {
+            throw new RuntimeException(String.format("Cannot create topic '%s'", topicName), e);
+        }
+    }
+
+    protected void deleteTopic(String topicName) {
+        LOG.debug("Deleting Kafka topic {}", topicName);
+        try {
+            kafkaAdminClient
+                    .deleteTopics(Collections.singletonList(topicName))
+                    .all()
+                    .get(DEFAULT_TIMEOUT, TimeUnit.SECONDS);
+        } catch (Exception e) {
+            if (ExceptionUtils.getRootCause(e) instanceof UnknownTopicOrPartitionException) {
+                throw new RuntimeException(String.format("Cannot delete topic '%s'", topicName), e);
+            }
+        }
+    }
+
+    private AdminClient createAdminClient() {
+        Properties config = new Properties();
+        config.setProperty(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
+        return AdminClient.create(config);
+    }
+
+    @Override
+    public Sink<String> createSink(TestingSinkSettings sinkSettings) {
+        if (!topicExists(topicName)) {
+            createTopic(topicName, 4, (short) 1);
+        }
+
+        KafkaSinkBuilder<String> builder = KafkaSink.builder();
+        Properties properties = new Properties();
+        properties.put(ProducerConfig.TRANSACTION_TIMEOUT_CONFIG, 900000);

Review comment:
       ```suggestion
           final Properties properties = new Properties();
           properties.put(ProducerConfig.TRANSACTION_TIMEOUT_CONFIG, DEFAULT_TRANSACTION_TIMEOUT_IN_MS);
   ```
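
       Note: `DEFAULT_TRANSACTION_TIMEOUT_IN_MS` is not declared anywhere in this diff, so the suggestion assumes a constant along these lines is added to the class (hypothetical declaration; the value mirrors the 900000 literal it replaces):

```java
// Hypothetical constant backing the suggestion above; 900_000 ms matches the original literal.
private static final int DEFAULT_TRANSACTION_TIMEOUT_IN_MS = 900_000;
```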

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/source/FromElementsSource.java
##########
@@ -0,0 +1,100 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.source;
+
+import org.apache.flink.api.connector.source.Boundedness;
+import org.apache.flink.api.connector.source.Source;
+import org.apache.flink.api.connector.source.SourceReader;
+import org.apache.flink.api.connector.source.SourceReaderContext;
+import org.apache.flink.api.connector.source.SplitEnumerator;
+import org.apache.flink.api.connector.source.SplitEnumeratorContext;
+import org.apache.flink.connector.testframe.source.enumerator.NoOpEnumState;
+import org.apache.flink.connector.testframe.source.enumerator.NoOpEnumStateSerializer;
+import org.apache.flink.connector.testframe.source.enumerator.NoOpEnumerator;
+import org.apache.flink.connector.testframe.source.split.FromElementsSplit;
+import org.apache.flink.connector.testframe.source.split.FromElementsSplitSerializer;
+import org.apache.flink.core.io.SimpleVersionedSerializer;
+
+import java.util.List;
+
+/**
+ * The source reads data from a list and stops reading at the fixed position. The source will wait
+ * until the checkpoint or savepoint triggers.
+ *
+ * <p>Note that this source must be of parallelism 1.
+ */
+public class FromElementsSource<OUT> implements Source<OUT, FromElementsSplit, NoOpEnumState> {
+    // Boundedness
+    private Boundedness boundedness;
+
+    private List<OUT> elements;
+
+    private Integer successNum;
+
+    public FromElementsSource(List<OUT> elements) {
+        this.elements = elements;
+    }
+
+    public FromElementsSource(Boundedness boundedness, List<OUT> elements, Integer successNum) {
+        this(elements);
+        if (successNum > elements.size()) {
+            throw new RuntimeException("SuccessNum must be larger than elements' size.");

Review comment:
       ```suggestion
    Preconditions.checkArgument(successNum <= elements.size(), String.format("The successNum must not be larger than the number of elements (%d), but the actual successNum is %d", elements.size(), successNum));
   ```
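
       In context, the adjusted constructor might look like the sketch below (an illustration only, assuming `org.apache.flink.util.Preconditions` is imported and that the guard keeps the direction of the original `if`, which rejects a successNum larger than the element count; the field assignments are assumed, since the rest of the constructor is not shown in the diff):

```java
public FromElementsSource(Boundedness boundedness, List<OUT> elements, Integer successNum) {
    this(elements);
    // successNum marks how many elements must be emitted before the source starts waiting
    // for a checkpoint or savepoint, so it cannot exceed the number of available elements.
    Preconditions.checkArgument(
            successNum <= elements.size(),
            "successNum (%s) must not be larger than the number of elements (%s).",
            successNum,
            elements.size());
    this.boundedness = boundedness;
    this.successNum = successNum;
}
```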

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/source/FromElementsSource.java
##########
@@ -0,0 +1,100 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.source;
+
+import org.apache.flink.api.connector.source.Boundedness;
+import org.apache.flink.api.connector.source.Source;
+import org.apache.flink.api.connector.source.SourceReader;
+import org.apache.flink.api.connector.source.SourceReaderContext;
+import org.apache.flink.api.connector.source.SplitEnumerator;
+import org.apache.flink.api.connector.source.SplitEnumeratorContext;
+import org.apache.flink.connector.testframe.source.enumerator.NoOpEnumState;
+import org.apache.flink.connector.testframe.source.enumerator.NoOpEnumStateSerializer;
+import org.apache.flink.connector.testframe.source.enumerator.NoOpEnumerator;
+import org.apache.flink.connector.testframe.source.split.FromElementsSplit;
+import org.apache.flink.connector.testframe.source.split.FromElementsSplitSerializer;
+import org.apache.flink.core.io.SimpleVersionedSerializer;
+
+import java.util.List;
+
+/**
+ * The source reads data from a list and stops reading at the fixed position. The source will wait
+ * until the checkpoint or savepoint triggers.

Review comment:
       ```suggestion
    * A {@link Source} implementation that reads data from a list and stops reading at a fixed position.
    * The source then waits until a checkpoint or savepoint is triggered; it is useful for connector tests.
   ```

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/source/split/FromElementsSplit.java
##########
@@ -0,0 +1,45 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.source.split;
+
+import org.apache.flink.api.connector.source.SourceSplit;
+
+/** The split of the list source. */

Review comment:
       ```suggestion
   /** The split of {@link FromElementsSource}. */
   ```

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/testsuites/SinkTestSuiteBase.java
##########
@@ -0,0 +1,629 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.testsuites;
+
+import org.apache.flink.annotation.Experimental;
+import org.apache.flink.api.common.JobID;
+import org.apache.flink.api.common.JobStatus;
+import org.apache.flink.api.common.eventtime.WatermarkStrategy;
+import org.apache.flink.api.common.restartstrategy.RestartStrategies;
+import org.apache.flink.api.common.time.Deadline;
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.api.connector.sink2.Sink;
+import org.apache.flink.api.connector.source.Boundedness;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.connector.testframe.environment.TestEnvironment;
+import org.apache.flink.connector.testframe.environment.TestEnvironmentSettings;
+import org.apache.flink.connector.testframe.external.ExternalSystemDataReader;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkExternalContext;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkV1ExternalContext;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkV2ExternalContext;
+import org.apache.flink.connector.testframe.external.sink.TestingSinkSettings;
+import org.apache.flink.connector.testframe.junit.extensions.ConnectorTestingExtension;
+import org.apache.flink.connector.testframe.junit.extensions.TestCaseInvocationContextProvider;
+import org.apache.flink.connector.testframe.source.FromElementsSource;
+import org.apache.flink.connector.testframe.utils.CollectIteratorAssertions;
+import org.apache.flink.connector.testframe.utils.MetricQuerier;
+import org.apache.flink.core.execution.JobClient;
+import org.apache.flink.core.execution.SavepointFormatType;
+import org.apache.flink.runtime.metrics.MetricNames;
+import org.apache.flink.streaming.api.CheckpointingMode;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.datastream.DataStreamSink;
+import org.apache.flink.streaming.api.datastream.DataStreamSource;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.operators.collect.CollectResultIterator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperatorFactory;
+import org.apache.flink.streaming.api.operators.collect.CollectStreamSink;
+import org.apache.flink.util.TestLoggerExtension;
+
+import org.apache.commons.math3.util.Precision;
+import org.junit.jupiter.api.DisplayName;
+import org.junit.jupiter.api.TestInstance;
+import org.junit.jupiter.api.TestTemplate;
+import org.junit.jupiter.api.extension.ExtendWith;
+import org.opentest4j.TestAbortedException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.time.Duration;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+import java.util.UUID;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.ThreadLocalRandom;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+import static org.apache.flink.connector.testframe.utils.ConnectorTestConstants.DEFAULT_COLLECT_DATA_TIMEOUT;
+import static org.apache.flink.connector.testframe.utils.ConnectorTestConstants.DEFAULT_JOB_STATUS_CHANGE_TIMEOUT;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.terminateJob;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForAllTaskRunning;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForJobStatus;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition;
+import static org.apache.flink.streaming.api.CheckpointingMode.AT_LEAST_ONCE;
+import static org.apache.flink.streaming.api.CheckpointingMode.EXACTLY_ONCE;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+import static org.assertj.core.api.AssertionsForClassTypes.assertThat;
+
+/**
+ * Base class for sink test suite.
+ *
+ * <p>All cases should have well-descriptive JavaDoc, including:
+ *
+ * <ul>
+ *   <li>What's the purpose of this case
+ *   <li>Simple description of how this case works
+ *   <li>Condition to fulfill in order to pass this case
+ *   <li>Requirement of running this case
+ * </ul>
+ */
+@ExtendWith({
+    ConnectorTestingExtension.class,
+    TestLoggerExtension.class,
+    TestCaseInvocationContextProvider.class
+})
+@TestInstance(TestInstance.Lifecycle.PER_CLASS)
+@Experimental
+public abstract class SinkTestSuiteBase<T extends Comparable<T>> {
+    private static final Logger LOG = LoggerFactory.getLogger(SinkTestSuiteBase.class);
+
+    // ----------------------------- Basic test cases ---------------------------------
+
+    /**
+     * Test connector data stream sink.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write them to this sink with a Flink job.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink needs to equal the
+     * number of generated test records, and the records in the sink are compared to the test data
+     * according to the configured delivery semantic. There is no requirement on record order.
+     */
+    @TestTemplate
+    @DisplayName("Test data stream sink")
+    public void testBasicSink(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        TestingSinkSettings sinkSettings = getTestingSinkSettings(semantic);
+        final List<T> testRecords = generateTestData(sinkSettings, externalContext);
+
+        // Build and execute Flink job
+        StreamExecutionEnvironment execEnv =
+                testEnv.createExecutionEnvironment(
+                        TestEnvironmentSettings.builder()
+                                .setConnectorJarPaths(externalContext.getConnectorJarPaths())
+                                .build());
+        execEnv.enableCheckpointing(50);
+        DataStream<T> dataStream =
+                execEnv.fromCollection(testRecords)
+                        .name("sourceInSinkTest")
+                        .setParallelism(1)
+                        .returns(externalContext.getProducedType());
+        tryCreateSink(dataStream, externalContext, sinkSettings)
+                .setParallelism(1)
+                .name("sinkInSinkTest");
+        final JobClient jobClient = execEnv.executeAsync("DataStream Sink Test");
+
+        waitForJobStatus(
+                jobClient,
+                Collections.singletonList(JobStatus.FINISHED),
+                Deadline.fromNow(DEFAULT_JOB_STATUS_CHANGE_TIMEOUT));
+
+        // Check test result
+        checkResultWithSemantic(
+                externalContext.createSinkDataReader(sinkSettings), testRecords, semantic);
+    }
+
+    /**
+     * Test connector sink restart from a completed savepoint with the same parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and first write half of them to this sink with a Flink job running at parallelism 2. It then
+     * stops the job and restarts the same job from the completed savepoint. After the job is
+     * running again, the other half is written to the sink and the result is compared.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink needs to equal the
+     * number of generated test records, and the records in the sink are compared to the test data
+     * according to the configured delivery semantic. There is no requirement on record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting from a savepoint")
+    public void testStartFromSavepoint(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 2, 2);
+    }
+
+    /**
+     * Test connector sink restart from a completed savepoint with a higher parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and first write half of them to this sink with a Flink job running at parallelism 2. It then
+     * stops the job and restarts the same job from the completed savepoint with a higher
+     * parallelism of 4. After the job is running again, the other half is written to the sink and
+     * the result is compared.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink needs to equal the
+     * number of generated test records, and the records in the sink are compared to the test data
+     * according to the configured delivery semantic. There is no requirement on record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting with a higher parallelism")
+    public void testScaleUp(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 2, 4);
+    }
+
+    /**
+     * Test connector sink restart from a completed savepoint with a lower parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and first write half of them to this sink with a Flink job running at parallelism 4. It then
+     * stops the job and restarts the same job from the completed savepoint with a lower
+     * parallelism of 2. After the job is running again, the other half is written to the sink and
+     * the result is compared.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink needs to equal the
+     * number of generated test records, and the records in the sink are compared to the test data
+     * according to the configured delivery semantic. There is no requirement on record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting with a lower parallelism")
+    public void testScaleDown(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 4, 2);
+    }
+
+    private void restartFromSavepoint(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic,
+            final int beforeParallelism,
+            final int afterParallelism)
+            throws Exception {
+        // Step 1: Preparation
+        TestingSinkSettings sinkSettings = getTestingSinkSettings(semantic);
+        final StreamExecutionEnvironment execEnv =
+                testEnv.createExecutionEnvironment(
+                        TestEnvironmentSettings.builder()
+                                .setConnectorJarPaths(externalContext.getConnectorJarPaths())
+                                .build());
+        execEnv.setRestartStrategy(RestartStrategies.noRestart());
+
+        // Step 2: Generate test data
+        final List<T> testRecords = generateTestData(sinkSettings, externalContext);
+
+        // Step 3: Build and execute Flink job
+        int numBeforeSuccess = testRecords.size() / 2;
+        DataStreamSource<T> source =
+                execEnv.fromSource(
+                                new FromElementsSource<>(
+                                        Boundedness.CONTINUOUS_UNBOUNDED,
+                                        testRecords,
+                                        numBeforeSuccess),
+                                WatermarkStrategy.noWatermarks(),
+                                "beforeRestartSource")
+                        .setParallelism(1);
+
+        DataStream<T> dataStream = source.returns(externalContext.getProducedType());
+        tryCreateSink(dataStream, externalContext, sinkSettings)
+                .name("Sink restart test")
+                .setParallelism(beforeParallelism);
+
+        /*
+         * The job should stop after consuming a specified number of records. In order to know when
+         * the specified number of records have been consumed, a collect sink needs to be watched.
+         */
+        CollectResultIterator<T> iterator = addCollectSink(source);
+        final JobClient jobClient = execEnv.executeAsync("Restart Test");
+        iterator.setJobClient(jobClient);
+
+        // Step 4: Wait for the expected result and stop Flink job with a savepoint
+        String savepointDir;
+        try {
+            final MetricQuerier queryRestClient = new MetricQuerier(new Configuration());
+            waitForAllTaskRunning(
+                    () ->
+                            queryRestClient.getJobDetails(
+                                    testEnv.getRestEndpoint(), jobClient.getJobID()),
+                    Deadline.fromNow(DEFAULT_JOB_STATUS_CHANGE_TIMEOUT));
+
+            waitExpectedSizeData(iterator, numBeforeSuccess);
+
+            savepointDir =
+                    jobClient
+                            .stopWithSavepoint(
+                                    true, testEnv.getCheckpointUri(), SavepointFormatType.CANONICAL)
+                            .get(30, TimeUnit.SECONDS);
+            waitForJobStatus(
+                    jobClient,
+                    Collections.singletonList(JobStatus.FINISHED),
+                    Deadline.fromNow(DEFAULT_JOB_STATUS_CHANGE_TIMEOUT));
+        } catch (Exception e) {
+            killJob(jobClient);
+            throw e;
+        }
+
+        List<T> target = testRecords.subList(0, numBeforeSuccess);
+        checkResultWithSemantic(
+                externalContext.createSinkDataReader(sinkSettings), target, semantic);
+
+        // Step 5: Restart the Flink job with the savepoint
+        final StreamExecutionEnvironment restartEnv =
+                testEnv.createExecutionEnvironment(
+                        TestEnvironmentSettings.builder()
+                                .setConnectorJarPaths(externalContext.getConnectorJarPaths())
+                                .setSavepointRestorePath(savepointDir)
+                                .build());
+        restartEnv.enableCheckpointing(50);
+
+        DataStreamSource<T> restartSource =
+                restartEnv
+                        .fromSource(
+                                new FromElementsSource<>(
+                                        Boundedness.CONTINUOUS_UNBOUNDED,
+                                        testRecords,
+                                        testRecords.size()),
+                                WatermarkStrategy.noWatermarks(),
+                                "restartSource")
+                        .setParallelism(1);
+
+        DataStream<T> sinkStream = restartSource.returns(externalContext.getProducedType());
+        tryCreateSink(sinkStream, externalContext, sinkSettings).setParallelism(afterParallelism);
+        addCollectSink(restartSource);
+        final JobClient restartJobClient = restartEnv.executeAsync("Restart Test");
+
+        try {
+            // Check the result
+            checkResultWithSemantic(
+                    externalContext.createSinkDataReader(sinkSettings), testRecords, semantic);
+        } finally {
+            killJob(restartJobClient);
+            iterator.close();
+        }
+    }
+
+    /**
+     * Test connector sink metrics.
+     *
+     * <p>This test will create a sink in the external system, generate test data and write them to
+     * the sink via a Flink job. Then read and compare the metrics.
+     *
+     * <p>Metrics currently tested: numRecordsOut
+     */
+    @TestTemplate
+    @DisplayName("Test sink metrics")
+    public void testMetrics(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        TestingSinkSettings sinkSettings = getTestingSinkSettings(semantic);
+        int parallelism = 1;
+        final List<T> testRecords = generateTestData(sinkSettings, externalContext);
+
+        // make sure to use different names when executed multiple times
+        String sinkName = "metricTestSink" + testRecords.hashCode();
+        final StreamExecutionEnvironment env =
+                testEnv.createExecutionEnvironment(
+                        TestEnvironmentSettings.builder()
+                                .setConnectorJarPaths(externalContext.getConnectorJarPaths())
+                                .build());
+        env.enableCheckpointing(50);
+
+        DataStreamSource<T> source =
+                env.fromSource(
+                                new FromElementsSource<>(
+                                        Boundedness.CONTINUOUS_UNBOUNDED,
+                                        testRecords,
+                                        testRecords.size()),
+                                WatermarkStrategy.noWatermarks(),
+                                "metricTestSource")
+                        .setParallelism(1);
+
+        DataStream<T> dataStream = source.returns(externalContext.getProducedType());
+        tryCreateSink(dataStream, externalContext, sinkSettings)
+                .name(sinkName)
+                .setParallelism(parallelism);
+        final JobClient jobClient = env.executeAsync("Metrics Test");
+        final MetricQuerier queryRestClient = new MetricQuerier(new Configuration());
+        try {
+            waitForAllTaskRunning(
+                    () ->
+                            queryRestClient.getJobDetails(
+                                    testEnv.getRestEndpoint(), jobClient.getJobID()),
+                    Deadline.fromNow(DEFAULT_JOB_STATUS_CHANGE_TIMEOUT));
+
+            waitUntilCondition(
+                    () -> {
+                        // test metrics
+                        try {
+                            return compareSinkMetrics(
+                                    queryRestClient,
+                                    testEnv,
+                                    externalContext,
+                                    jobClient.getJobID(),
+                                    sinkName,
+                                    testRecords.size());
+                        } catch (Exception e) {
+                            // skip this failed attempt and retry
+                            return false;
+                        }
+                    },
+                    Deadline.fromNow(DEFAULT_COLLECT_DATA_TIMEOUT));
+        } finally {
+            // Clean up
+            killJob(jobClient);
+        }
+    }
+
+    // ----------------------------- Helper Functions ---------------------------------
+
+    /**
+     * Generate a set of test records.
+     *
+     * @param testingSinkSettings sink settings
+     * @param externalContext External context
+     * @return Collection of generated test records
+     */
+    protected List<T> generateTestData(
+            TestingSinkSettings testingSinkSettings,
+            DataStreamSinkExternalContext<T> externalContext) {
+        return externalContext.generateTestData(
+                testingSinkSettings, ThreadLocalRandom.current().nextLong());
+    }
+
+    /**
+     * Poll records from the sink.
+     *
+     * @param result The list to which the polled records are appended
+     * @param reader The sink data reader
+     * @param expected The expected records, used to decide when to stop polling
+     * @param retryTimes The maximum number of poll attempts
+     * @param semantic The delivery semantic under test
+     * @return Collection of records in the Sink
+     */
+    private List<T> appendResultData(

Review comment:
       The method name is different from what the method's Javadoc describes.
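
       For example, the first Javadoc sentence could be reworded to describe what the helper actually does, or the helper could be renamed to something like `pollAndAppendResultData` (hypothetical name). A possible Javadoc wording:

       ```java
       /** Poll records from the sink and append them to the given result list. */
       ```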

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/testsuites/SinkTestSuiteBase.java
##########
@@ -0,0 +1,629 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.testsuites;
+
+import org.apache.flink.annotation.Experimental;
+import org.apache.flink.api.common.JobID;
+import org.apache.flink.api.common.JobStatus;
+import org.apache.flink.api.common.eventtime.WatermarkStrategy;
+import org.apache.flink.api.common.restartstrategy.RestartStrategies;
+import org.apache.flink.api.common.time.Deadline;
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.api.connector.sink2.Sink;
+import org.apache.flink.api.connector.source.Boundedness;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.connector.testframe.environment.TestEnvironment;
+import org.apache.flink.connector.testframe.environment.TestEnvironmentSettings;
+import org.apache.flink.connector.testframe.external.ExternalSystemDataReader;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkExternalContext;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkV1ExternalContext;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkV2ExternalContext;
+import org.apache.flink.connector.testframe.external.sink.TestingSinkSettings;
+import org.apache.flink.connector.testframe.junit.extensions.ConnectorTestingExtension;
+import org.apache.flink.connector.testframe.junit.extensions.TestCaseInvocationContextProvider;
+import org.apache.flink.connector.testframe.source.FromElementsSource;
+import org.apache.flink.connector.testframe.utils.CollectIteratorAssertions;
+import org.apache.flink.connector.testframe.utils.MetricQuerier;
+import org.apache.flink.core.execution.JobClient;
+import org.apache.flink.core.execution.SavepointFormatType;
+import org.apache.flink.runtime.metrics.MetricNames;
+import org.apache.flink.streaming.api.CheckpointingMode;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.datastream.DataStreamSink;
+import org.apache.flink.streaming.api.datastream.DataStreamSource;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.operators.collect.CollectResultIterator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperatorFactory;
+import org.apache.flink.streaming.api.operators.collect.CollectStreamSink;
+import org.apache.flink.util.TestLoggerExtension;
+
+import org.apache.commons.math3.util.Precision;
+import org.junit.jupiter.api.DisplayName;
+import org.junit.jupiter.api.TestInstance;
+import org.junit.jupiter.api.TestTemplate;
+import org.junit.jupiter.api.extension.ExtendWith;
+import org.opentest4j.TestAbortedException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.time.Duration;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+import java.util.UUID;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.ThreadLocalRandom;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+import static org.apache.flink.connector.testframe.utils.ConnectorTestConstants.DEFAULT_COLLECT_DATA_TIMEOUT;
+import static org.apache.flink.connector.testframe.utils.ConnectorTestConstants.DEFAULT_JOB_STATUS_CHANGE_TIMEOUT;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.terminateJob;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForAllTaskRunning;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForJobStatus;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition;
+import static org.apache.flink.streaming.api.CheckpointingMode.AT_LEAST_ONCE;
+import static org.apache.flink.streaming.api.CheckpointingMode.EXACTLY_ONCE;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+import static org.assertj.core.api.AssertionsForClassTypes.assertThat;
+
+/**
+ * Base class for sink test suite.
+ *
+ * <p>All cases should have well-descriptive JavaDoc, including:
+ *
+ * <ul>
+ *   <li>What's the purpose of this case
+ *   <li>Simple description of how this case works
+ *   <li>Condition to fulfill in order to pass this case
+ *   <li>Requirement of running this case
+ * </ul>
+ */
+@ExtendWith({
+    ConnectorTestingExtension.class,
+    TestLoggerExtension.class,
+    TestCaseInvocationContextProvider.class
+})
+@TestInstance(TestInstance.Lifecycle.PER_CLASS)
+@Experimental
+public abstract class SinkTestSuiteBase<T extends Comparable<T>> {
+    private static final Logger LOG = LoggerFactory.getLogger(SinkTestSuiteBase.class);
+
+    // ----------------------------- Basic test cases ---------------------------------
+
+    /**
+     * Test connector data stream sink.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write them to this sink by the Flink Job.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink needs to equal the
+     * number of generated test records, and the records in the sink will be compared with the test
+     * data according to the configured semantic. There's no requirement for record order.
+     */
+    @TestTemplate
+    @DisplayName("Test data stream sink")
+    public void testBasicSink(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        TestingSinkSettings sinkSettings = getTestingSinkSettings(semantic);
+        final List<T> testRecords = generateTestData(sinkSettings, externalContext);
+
+        // Build and execute Flink job
+        StreamExecutionEnvironment execEnv =
+                testEnv.createExecutionEnvironment(
+                        TestEnvironmentSettings.builder()
+                                .setConnectorJarPaths(externalContext.getConnectorJarPaths())
+                                .build());
+        execEnv.enableCheckpointing(50);
+        DataStream<T> dataStream =
+                execEnv.fromCollection(testRecords)
+                        .name("sourceInSinkTest")
+                        .setParallelism(1)
+                        .returns(externalContext.getProducedType());
+        tryCreateSink(dataStream, externalContext, sinkSettings)
+                .setParallelism(1)
+                .name("sinkInSinkTest");
+        final JobClient jobClient = execEnv.executeAsync("DataStream Sink Test");
+
+        waitForJobStatus(
+                jobClient,
+                Collections.singletonList(JobStatus.FINISHED),
+                Deadline.fromNow(DEFAULT_JOB_STATUS_CHANGE_TIMEOUT));
+
+        // Check test result
+        checkResultWithSemantic(
+                externalContext.createSinkDataReader(sinkSettings), testRecords, semantic);
+    }
+
+    /**
+     * Test connector sink restart from a completed savepoint with the same parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write half of them to this sink by the Flink job with parallelism 2 at first. Then stop
+     * the job and restart the same job from the completed savepoint. After the restarted job is
+     * running, write the other half to the sink and compare the result.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink needs to equal the
+     * number of generated test records, and the records in the sink will be compared with the test
+     * data according to the configured semantic. There's no requirement for record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting from a savepoint")
+    public void testStartFromSavepoint(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 2, 2);
+    }
+
+    /**
+     * Test connector sink restart from a completed savepoint with a higher parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write half of them to this sink by the Flink job with parallelism 2 at first. Then stop
+     * the job and restart the same job from the completed savepoint with a higher parallelism 4.
+     * After the restarted job is running, write the other half to the sink and compare the result.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink needs to equal the
+     * number of generated test records, and the records in the sink will be compared with the test
+     * data according to the configured semantic. There's no requirement for record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting with a higher parallelism")
+    public void testScaleUp(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 2, 4);
+    }
+
+    /**
+     * Test connector sink restart from a completed savepoint with a lower parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write half of them to this sink by the Flink job with parallelism 4 at first. Then stop
+     * the job and restart the same job from the completed savepoint with a lower parallelism 2.
+     * After the restarted job is running, write the other half to the sink and compare the result.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink needs to equal the
+     * number of generated test records, and the records in the sink will be compared with the test
+     * data according to the configured semantic. There's no requirement for record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting with a lower parallelism")
+    public void testScaleDown(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 4, 2);
+    }
+
+    private void restartFromSavepoint(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic,
+            final int beforeParallelism,
+            final int afterParallelism)
+            throws Exception {
+        // Step 1: Preparation
+        TestingSinkSettings sinkSettings = getTestingSinkSettings(semantic);
+        final StreamExecutionEnvironment execEnv =
+                testEnv.createExecutionEnvironment(
+                        TestEnvironmentSettings.builder()
+                                .setConnectorJarPaths(externalContext.getConnectorJarPaths())
+                                .build());
+        execEnv.setRestartStrategy(RestartStrategies.noRestart());
+
+        // Step 2: Generate test data
+        final List<T> testRecords = generateTestData(sinkSettings, externalContext);
+
+        // Step 3: Build and execute Flink job
+        int numBeforeSuccess = testRecords.size() / 2;
+        DataStreamSource<T> source =
+                execEnv.fromSource(
+                                new FromElementsSource<>(
+                                        Boundedness.CONTINUOUS_UNBOUNDED,
+                                        testRecords,
+                                        numBeforeSuccess),
+                                WatermarkStrategy.noWatermarks(),
+                                "beforeRestartSource")
+                        .setParallelism(1);
+
+        DataStream<T> dataStream = source.returns(externalContext.getProducedType());
+        tryCreateSink(dataStream, externalContext, sinkSettings)
+                .name("Sink restart test")
+                .setParallelism(beforeParallelism);
+
+        /*
+         * The job should stop after consuming a specified number of records. In order to know when
+         * the specified number of records have been consumed, a collect sink needs to be watched.
+         */
+        CollectResultIterator<T> iterator = addCollectSink(source);
+        final JobClient jobClient = execEnv.executeAsync("Restart Test");
+        iterator.setJobClient(jobClient);
+
+        // Step 4: Wait for the expected result and stop Flink job with a savepoint
+        String savepointDir;
+        try {
+            final MetricQuerier queryRestClient = new MetricQuerier(new Configuration());
+            waitForAllTaskRunning(
+                    () ->
+                            queryRestClient.getJobDetails(
+                                    testEnv.getRestEndpoint(), jobClient.getJobID()),
+                    Deadline.fromNow(DEFAULT_JOB_STATUS_CHANGE_TIMEOUT));
+
+            waitExpectedSizeData(iterator, numBeforeSuccess);
+
+            savepointDir =
+                    jobClient
+                            .stopWithSavepoint(
+                                    true, testEnv.getCheckpointUri(), SavepointFormatType.CANONICAL)
+                            .get(30, TimeUnit.SECONDS);
+            waitForJobStatus(
+                    jobClient,
+                    Collections.singletonList(JobStatus.FINISHED),
+                    Deadline.fromNow(DEFAULT_JOB_STATUS_CHANGE_TIMEOUT));
+        } catch (Exception e) {
+            killJob(jobClient);
+            throw e;
+        }
+
+        List<T> target = testRecords.subList(0, numBeforeSuccess);
+        checkResultWithSemantic(
+                externalContext.createSinkDataReader(sinkSettings), target, semantic);
+
+        // Step 5: Restart the Flink job with the savepoint
+        final StreamExecutionEnvironment restartEnv =
+                testEnv.createExecutionEnvironment(
+                        TestEnvironmentSettings.builder()
+                                .setConnectorJarPaths(externalContext.getConnectorJarPaths())
+                                .setSavepointRestorePath(savepointDir)
+                                .build());
+        restartEnv.enableCheckpointing(50);
+
+        DataStreamSource<T> restartSource =
+                restartEnv
+                        .fromSource(
+                                new FromElementsSource<>(
+                                        Boundedness.CONTINUOUS_UNBOUNDED,
+                                        testRecords,
+                                        testRecords.size()),
+                                WatermarkStrategy.noWatermarks(),
+                                "restartSource")
+                        .setParallelism(1);
+
+        DataStream<T> sinkStream = restartSource.returns(externalContext.getProducedType());
+        tryCreateSink(sinkStream, externalContext, sinkSettings).setParallelism(afterParallelism);
+        addCollectSink(restartSource);
+        final JobClient restartJobClient = restartEnv.executeAsync("Restart Test");
+
+        try {
+            // Check the result
+            checkResultWithSemantic(
+                    externalContext.createSinkDataReader(sinkSettings), testRecords, semantic);
+        } finally {
+            killJob(restartJobClient);
+            iterator.close();
+        }
+    }
+
+    /**
+     * Test connector sink metrics.
+     *
+     * <p>This test will create a sink in the external system, generate test data and write them to
+     * the sink via a Flink job. Then read and compare the metrics.
+     *
+     * <p>Metrics currently tested: numRecordsOut
+     */
+    @TestTemplate
+    @DisplayName("Test sink metrics")
+    public void testMetrics(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        TestingSinkSettings sinkSettings = getTestingSinkSettings(semantic);
+        int parallelism = 1;
+        final List<T> testRecords = generateTestData(sinkSettings, externalContext);
+
+        // make sure to use different names when executed multiple times
+        String sinkName = "metricTestSink" + testRecords.hashCode();
+        final StreamExecutionEnvironment env =
+                testEnv.createExecutionEnvironment(
+                        TestEnvironmentSettings.builder()
+                                .setConnectorJarPaths(externalContext.getConnectorJarPaths())
+                                .build());
+        env.enableCheckpointing(50);
+
+        DataStreamSource<T> source =
+                env.fromSource(
+                                new FromElementsSource<>(
+                                        Boundedness.CONTINUOUS_UNBOUNDED,
+                                        testRecords,
+                                        testRecords.size()),
+                                WatermarkStrategy.noWatermarks(),
+                                "metricTestSource")
+                        .setParallelism(1);
+
+        DataStream<T> dataStream = source.returns(externalContext.getProducedType());
+        tryCreateSink(dataStream, externalContext, sinkSettings)
+                .name(sinkName)
+                .setParallelism(parallelism);
+        final JobClient jobClient = env.executeAsync("Metrics Test");
+        final MetricQuerier queryRestClient = new MetricQuerier(new Configuration());
+        try {
+            waitForAllTaskRunning(
+                    () ->
+                            queryRestClient.getJobDetails(
+                                    testEnv.getRestEndpoint(), jobClient.getJobID()),
+                    Deadline.fromNow(DEFAULT_JOB_STATUS_CHANGE_TIMEOUT));
+
+            waitUntilCondition(
+                    () -> {
+                        // test metrics
+                        try {
+                            return compareSinkMetrics(
+                                    queryRestClient,
+                                    testEnv,
+                                    externalContext,
+                                    jobClient.getJobID(),
+                                    sinkName,
+                                    testRecords.size());
+                        } catch (Exception e) {
+                            // skip this failed attempt and retry
+                            return false;
+                        }
+                    },
+                    Deadline.fromNow(DEFAULT_COLLECT_DATA_TIMEOUT));
+        } finally {
+            // Clean up
+            killJob(jobClient);
+        }
+    }
+
+    // ----------------------------- Helper Functions ---------------------------------
+
+    /**
+     * Generate a set of test records.
+     *
+     * @param testingSinkSettings sink settings
+     * @param externalContext External context
+     * @return Collection of generated test records
+     */
+    protected List<T> generateTestData(
+            TestingSinkSettings testingSinkSettings,
+            DataStreamSinkExternalContext<T> externalContext) {
+        return externalContext.generateTestData(
+                testingSinkSettings, ThreadLocalRandom.current().nextLong());
+    }
+
+    /**
+     * Poll records from the sink.
+     *
+     * @param result The list to which the polled records are appended
+     * @param reader The sink data reader
+     * @param expected The expected records, used to decide when to stop polling
+     * @param retryTimes The maximum number of poll attempts
+     * @param semantic The delivery semantic under test
+     * @return Collection of records in the Sink
+     */
+    private List<T> appendResultData(
+            List<T> result,
+            ExternalSystemDataReader<T> reader,
+            List<T> expected,
+            int retryTimes,
+            CheckpointingMode semantic) {
+        long timeoutMs = 1000L;
+        int retryIndex = 0;
+
+        while (retryIndex++ < retryTimes
+                && !checkGetEnoughRecordsWithSemantic(expected, result, semantic)) {
+            result.addAll(reader.poll(Duration.ofMillis(timeoutMs)));
+        }
+        return result;
+    }
+
+    /**
+     * Check whether the polling should stop.
+     *
+     * @param expected The expected list which help to stop polling
+     * @param result The records that have been read
+     * @param semantic The semantic
+     * @return Whether the polling should stop
+     */
+    private boolean checkGetEnoughRecordsWithSemantic(
+            List<T> expected, List<T> result, CheckpointingMode semantic) {
+        checkNotNull(expected);
+        checkNotNull(result);
+        if (EXACTLY_ONCE.equals(semantic)) {
+            return expected.size() <= result.size();
+        } else if (AT_LEAST_ONCE.equals(semantic)) {
+            Set<Integer> matchedIndex = new HashSet<>();
+            for (T record : expected) {
+                int before = matchedIndex.size();
+                for (int i = 0; i < result.size(); i++) {
+                    if (matchedIndex.contains(i)) {
+                        continue;
+                    }
+                    if (record.equals(result.get(i))) {
+                        matchedIndex.add(i);
+                        break;
+                    }
+                }
+                // the record was not found in the result
+                if (before == matchedIndex.size()) {
+                    return false;
+                }
+            }
+            return true;
+        }
+        throw new IllegalStateException(
+                String.format("The %s delivery guarantee is not supported by this test.", semantic.name()));
+    }
+
+    /**
+     * Compare the test data with the result.

Review comment:
       ```suggestion
        * Compare the test data with actual data in given semantic.
   ```

##########
File path: flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/connector/kafka/sink/testutils/KafkaSinkExternalContext.java
##########
@@ -0,0 +1,271 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.kafka.sink.testutils;
+
+import org.apache.flink.api.common.serialization.SimpleStringSchema;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.connector.sink2.Sink;
+import org.apache.flink.connector.base.DeliveryGuarantee;
+import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
+import org.apache.flink.connector.kafka.sink.KafkaSink;
+import org.apache.flink.connector.kafka.sink.KafkaSinkBuilder;
+import org.apache.flink.connector.testframe.external.ExternalSystemDataReader;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkV2ExternalContext;
+import org.apache.flink.connector.testframe.external.sink.TestingSinkSettings;
+import org.apache.flink.streaming.api.CheckpointingMode;
+
+import org.apache.commons.lang3.exception.ExceptionUtils;
+import org.apache.kafka.clients.admin.AdminClient;
+import org.apache.kafka.clients.admin.AdminClientConfig;
+import org.apache.kafka.clients.admin.NewTopic;
+import org.apache.kafka.clients.admin.TopicDescription;
+import org.apache.kafka.clients.consumer.ConsumerConfig;
+import org.apache.kafka.clients.producer.ProducerConfig;
+import org.apache.kafka.common.TopicPartition;
+import org.apache.kafka.common.TopicPartitionInfo;
+import org.apache.kafka.common.errors.UnknownTopicOrPartitionException;
+import org.apache.kafka.common.serialization.StringDeserializer;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Properties;
+import java.util.Random;
+import java.util.Set;
+import java.util.concurrent.ThreadLocalRandom;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.flink.streaming.api.CheckpointingMode.EXACTLY_ONCE;
+
+/**
+ * A Kafka external context that will create only one topic and use partitions in that topic as
+ * source splits.
+ */
+public class KafkaSinkExternalContext implements DataStreamSinkV2ExternalContext<String> {
+
+    private static final Logger LOG = LoggerFactory.getLogger(KafkaSinkExternalContext.class);
+
+    private static final String TOPIC_NAME_PREFIX = "kafka-single-topic";
+    private static final int DEFAULT_TIMEOUT = 30;
+    private static final int NUM_RECORDS_UPPER_BOUND = 500;
+    private static final int NUM_RECORDS_LOWER_BOUND = 100;
+
+    protected String bootstrapServers;
+    protected final String topicName;
+
+    private final List<ExternalSystemDataReader<String>> readers = new ArrayList<>();
+
+    protected int numSplits = 0;
+
+    private List<URL> connectorJarPaths;
+
+    protected final AdminClient kafkaAdminClient;
+
+    public KafkaSinkExternalContext(String bootstrapServers, List<URL> connectorJarPaths) {
+        this.bootstrapServers = bootstrapServers;
+        this.connectorJarPaths = connectorJarPaths;
+        this.topicName =
+                TOPIC_NAME_PREFIX + "-" + ThreadLocalRandom.current().nextLong(Long.MAX_VALUE);
+        kafkaAdminClient = createAdminClient();
+    }
+
+    protected void createTopic(String topicName, int numPartitions, short replicationFactor) {
+        LOG.debug(
+                "Creating new Kafka topic {} with {} partitions and {} replicas",
+                topicName,
+                numPartitions,
+                replicationFactor);
+        NewTopic newTopic = new NewTopic(topicName, numPartitions, replicationFactor);
+        try {
+            kafkaAdminClient
+                    .createTopics(Collections.singletonList(newTopic))
+                    .all()
+                    .get(DEFAULT_TIMEOUT, TimeUnit.SECONDS);
+        } catch (Exception e) {
+            throw new RuntimeException(String.format("Cannot create topic '%s'", topicName), e);
+        }
+    }
+
+    protected void deleteTopic(String topicName) {
+        LOG.debug("Deleting Kafka topic {}", topicName);
+        try {
+            kafkaAdminClient
+                    .deleteTopics(Collections.singletonList(topicName))
+                    .all()
+                    .get(DEFAULT_TIMEOUT, TimeUnit.SECONDS);
+        } catch (Exception e) {
+            if (ExceptionUtils.getRootCause(e) instanceof UnknownTopicOrPartitionException) {
+                throw new RuntimeException(String.format("Cannot delete topic '%s'", topicName), e);

Review comment:
       ```suggestion
                   throw new RuntimeException(String.format("Cannot delete unknown Kafka topic '%s'", topicName), e);
   ```

##########
File path: flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/connector/kafka/sink/testutils/KafkaSinkExternalContextFactory.java
##########
@@ -0,0 +1,53 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.kafka.sink.testutils;
+
+import org.apache.flink.connector.testframe.external.ExternalContextFactory;
+
+import org.testcontainers.containers.KafkaContainer;
+
+import java.net.URL;
+import java.util.List;
+import java.util.stream.Collectors;
+
+/** Kafka table sink external context factory. */

Review comment:
       typo: Kafka table ?
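
       Maybe something like this, assuming the class is the context factory for the DataStream sink tests:

       ```suggestion
       /** Kafka sink external context factory. */
       ```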

##########
File path: flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/connector/kafka/sink/KafkaSinkITCase.java
##########
@@ -161,6 +177,49 @@ public void tearDown() throws ExecutionException, InterruptedException, TimeoutE
         deleteTestTopic(topic);
     }
 
+    /** Integration test based on connector testing framework. */
+    @Nested
+    class IntegrationTests extends SinkTestSuiteBase<String> {
+        // Defines test environment on Flink MiniCluster
+        @SuppressWarnings("unused")
+        @TestEnv
+        MiniClusterTestEnvironment flink = new MiniClusterTestEnvironment();
+
+        // Defines external system
+        @TestExternalSystem
+        DefaultContainerizedExternalSystem<KafkaContainer> kafka =
+                DefaultContainerizedExternalSystem.builder()
+                        .fromContainer(
+                                new KafkaContainer(
+                                        DockerImageName.parse(DockerImageVersions.KAFKA)))
+                        .build();
+
+        @SuppressWarnings("unused")
+        @TestSemantics
+        CheckpointingMode[] semantics =
+                new CheckpointingMode[] {
+                    CheckpointingMode.EXACTLY_ONCE, CheckpointingMode.AT_LEAST_ONCE
+                };
+
+        @SuppressWarnings("unused")
+        @TestContext
+        KafkaSinkExternalContextFactory sinkContext =
+                new KafkaSinkExternalContextFactory(kafka.getContainer(), Collections.emptyList());
+
+        /**
+         * Disable the metric test because of the metric
+         * bug(https://issues.apache.org/jira/browse/FLINK-26126).
+         */
+        @Disabled

Review comment:
       ```suggestion
       @Disabled("Skip metric test until FLINK-26126 fixed")
   ```

##########
File path: flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/connector/kafka/sink/testutils/KafkaSinkExternalContext.java
##########
@@ -0,0 +1,271 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.kafka.sink.testutils;
+
+import org.apache.flink.api.common.serialization.SimpleStringSchema;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.connector.sink2.Sink;
+import org.apache.flink.connector.base.DeliveryGuarantee;
+import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
+import org.apache.flink.connector.kafka.sink.KafkaSink;
+import org.apache.flink.connector.kafka.sink.KafkaSinkBuilder;
+import org.apache.flink.connector.testframe.external.ExternalSystemDataReader;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkV2ExternalContext;
+import org.apache.flink.connector.testframe.external.sink.TestingSinkSettings;
+import org.apache.flink.streaming.api.CheckpointingMode;
+
+import org.apache.commons.lang3.exception.ExceptionUtils;
+import org.apache.kafka.clients.admin.AdminClient;
+import org.apache.kafka.clients.admin.AdminClientConfig;
+import org.apache.kafka.clients.admin.NewTopic;
+import org.apache.kafka.clients.admin.TopicDescription;
+import org.apache.kafka.clients.consumer.ConsumerConfig;
+import org.apache.kafka.clients.producer.ProducerConfig;
+import org.apache.kafka.common.TopicPartition;
+import org.apache.kafka.common.TopicPartitionInfo;
+import org.apache.kafka.common.errors.UnknownTopicOrPartitionException;
+import org.apache.kafka.common.serialization.StringDeserializer;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Properties;
+import java.util.Random;
+import java.util.Set;
+import java.util.concurrent.ThreadLocalRandom;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.flink.streaming.api.CheckpointingMode.EXACTLY_ONCE;
+
+/**
+ * A Kafka external context that will create only one topic and use partitions in that topic as
+ * source splits.
+ */
+public class KafkaSinkExternalContext implements DataStreamSinkV2ExternalContext<String> {
+
+    private static final Logger LOG = LoggerFactory.getLogger(KafkaSinkExternalContext.class);
+
+    private static final String TOPIC_NAME_PREFIX = "kafka-single-topic";
+    private static final int DEFAULT_TIMEOUT = 30;
+    private static final int NUM_RECORDS_UPPER_BOUND = 500;
+    private static final int NUM_RECORDS_LOWER_BOUND = 100;
+
+    protected String bootstrapServers;
+    protected final String topicName;
+
+    private final List<ExternalSystemDataReader<String>> readers = new ArrayList<>();
+
+    protected int numSplits = 0;
+
+    private List<URL> connectorJarPaths;
+
+    protected final AdminClient kafkaAdminClient;
+
+    public KafkaSinkExternalContext(String bootstrapServers, List<URL> connectorJarPaths) {
+        this.bootstrapServers = bootstrapServers;
+        this.connectorJarPaths = connectorJarPaths;
+        this.topicName =
+                TOPIC_NAME_PREFIX + "-" + ThreadLocalRandom.current().nextLong(Long.MAX_VALUE);
+        kafkaAdminClient = createAdminClient();
+    }
+
+    protected void createTopic(String topicName, int numPartitions, short replicationFactor) {
+        LOG.debug(
+                "Creating new Kafka topic {} with {} partitions and {} replicas",
+                topicName,
+                numPartitions,
+                replicationFactor);
+        NewTopic newTopic = new NewTopic(topicName, numPartitions, replicationFactor);
+        try {
+            kafkaAdminClient
+                    .createTopics(Collections.singletonList(newTopic))
+                    .all()
+                    .get(DEFAULT_TIMEOUT, TimeUnit.SECONDS);
+        } catch (Exception e) {
+            throw new RuntimeException(String.format("Cannot create topic '%s'", topicName), e);
+        }
+    }
+
+    protected void deleteTopic(String topicName) {
+        LOG.debug("Deleting Kafka topic {}", topicName);
+        try {
+            kafkaAdminClient
+                    .deleteTopics(Collections.singletonList(topicName))
+                    .all()
+                    .get(DEFAULT_TIMEOUT, TimeUnit.SECONDS);
+        } catch (Exception e) {
+            if (ExceptionUtils.getRootCause(e) instanceof UnknownTopicOrPartitionException) {
+                throw new RuntimeException(String.format("Cannot delete topic '%s'", topicName), e);
+            }
+        }
+    }

Review comment:
       Do we need these methods to be `protected`?
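
       If no subclass needs to override them, they could probably be `private`, e.g. (sketch only, bodies unchanged):

       ```java
           private void createTopic(String topicName, int numPartitions, short replicationFactor) {
               // ... unchanged body ...
           }

           private void deleteTopic(String topicName) {
               // ... unchanged body ...
           }
       ```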

##########
File path: flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/connector/kafka/sink/testutils/KafkaSinkExternalContext.java
##########
@@ -0,0 +1,271 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.kafka.sink.testutils;
+
+import org.apache.flink.api.common.serialization.SimpleStringSchema;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.connector.sink2.Sink;
+import org.apache.flink.connector.base.DeliveryGuarantee;
+import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
+import org.apache.flink.connector.kafka.sink.KafkaSink;
+import org.apache.flink.connector.kafka.sink.KafkaSinkBuilder;
+import org.apache.flink.connector.testframe.external.ExternalSystemDataReader;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkV2ExternalContext;
+import org.apache.flink.connector.testframe.external.sink.TestingSinkSettings;
+import org.apache.flink.streaming.api.CheckpointingMode;
+
+import org.apache.commons.lang3.exception.ExceptionUtils;
+import org.apache.kafka.clients.admin.AdminClient;
+import org.apache.kafka.clients.admin.AdminClientConfig;
+import org.apache.kafka.clients.admin.NewTopic;
+import org.apache.kafka.clients.admin.TopicDescription;
+import org.apache.kafka.clients.consumer.ConsumerConfig;
+import org.apache.kafka.clients.producer.ProducerConfig;
+import org.apache.kafka.common.TopicPartition;
+import org.apache.kafka.common.TopicPartitionInfo;
+import org.apache.kafka.common.errors.UnknownTopicOrPartitionException;
+import org.apache.kafka.common.serialization.StringDeserializer;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Properties;
+import java.util.Random;
+import java.util.Set;
+import java.util.concurrent.ThreadLocalRandom;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.flink.streaming.api.CheckpointingMode.EXACTLY_ONCE;
+
+/**
+ * A Kafka external context that will create only one topic and use partitions in that topic as
+ * source splits.
+ */
+public class KafkaSinkExternalContext implements DataStreamSinkV2ExternalContext<String> {
+
+    private static final Logger LOG = LoggerFactory.getLogger(KafkaSinkExternalContext.class);
+
+    private static final String TOPIC_NAME_PREFIX = "kafka-single-topic";
+    private static final int DEFAULT_TIMEOUT = 30;
+    private static final int NUM_RECORDS_UPPER_BOUND = 500;
+    private static final int NUM_RECORDS_LOWER_BOUND = 100;
+
+    protected String bootstrapServers;
+    protected final String topicName;
+
+    private final List<ExternalSystemDataReader<String>> readers = new ArrayList<>();
+
+    protected int numSplits = 0;
+
+    private List<URL> connectorJarPaths;
+
+    protected final AdminClient kafkaAdminClient;
+
+    public KafkaSinkExternalContext(String bootstrapServers, List<URL> connectorJarPaths) {
+        this.bootstrapServers = bootstrapServers;
+        this.connectorJarPaths = connectorJarPaths;
+        this.topicName =
+                TOPIC_NAME_PREFIX + "-" + ThreadLocalRandom.current().nextLong(Long.MAX_VALUE);
+        kafkaAdminClient = createAdminClient();
+    }
+
+    protected void createTopic(String topicName, int numPartitions, short replicationFactor) {
+        LOG.debug(
+                "Creating new Kafka topic {} with {} partitions and {} replicas",
+                topicName,
+                numPartitions,
+                replicationFactor);
+        NewTopic newTopic = new NewTopic(topicName, numPartitions, replicationFactor);
+        try {
+            kafkaAdminClient
+                    .createTopics(Collections.singletonList(newTopic))
+                    .all()
+                    .get(DEFAULT_TIMEOUT, TimeUnit.SECONDS);
+        } catch (Exception e) {
+            throw new RuntimeException(String.format("Cannot create topic '%s'", topicName), e);
+        }
+    }
+
+    protected void deleteTopic(String topicName) {
+        LOG.debug("Deleting Kafka topic {}", topicName);
+        try {
+            kafkaAdminClient
+                    .deleteTopics(Collections.singletonList(topicName))
+                    .all()
+                    .get(DEFAULT_TIMEOUT, TimeUnit.SECONDS);
+        } catch (Exception e) {
+            if (ExceptionUtils.getRootCause(e) instanceof UnknownTopicOrPartitionException) {
+                throw new RuntimeException(String.format("Cannot delete topic '%s'", topicName), e);
+            }
+        }
+    }
+
+    private AdminClient createAdminClient() {
+        Properties config = new Properties();
+        config.setProperty(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
+        return AdminClient.create(config);
+    }
+
+    @Override
+    public Sink<String> createSink(TestingSinkSettings sinkSettings) {
+        if (!topicExists(topicName)) {
+            createTopic(topicName, 4, (short) 1);
+        }
+
+        KafkaSinkBuilder<String> builder = KafkaSink.builder();
+        Properties properties = new Properties();
+        properties.put(ProducerConfig.TRANSACTION_TIMEOUT_CONFIG, 900000);
+        builder.setBootstrapServers(bootstrapServers)
+                .setDeliverGuarantee(toDeliveryGuarantee(sinkSettings.getCheckpointingMode()))
+                .setTransactionalIdPrefix("testingFramework")
+                .setKafkaProducerConfig(properties)
+                .setRecordSerializer(
+                        KafkaRecordSerializationSchema.builder()
+                                .setTopic(topicName)
+                                .setValueSerializationSchema(new SimpleStringSchema())
+                                .build());
+        return builder.build();
+    }
+
+    @Override
+    public ExternalSystemDataReader<String> createSinkDataReader(TestingSinkSettings sinkSettings) {
+        LOG.info("Fetching descriptions for topic: {}", topicName);

Review comment:
       I didn't get the meaning of this log message.
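
       Maybe a wording along these lines would be clearer (just a suggestion):

       ```suggestion
               LOG.info("Fetching partition metadata of topic {} to create a sink data reader", topicName);
       ```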

##########
File path: flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/connector/kafka/sink/testutils/KafkaSinkExternalContext.java
##########
@@ -0,0 +1,271 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.kafka.sink.testutils;
+
+import org.apache.flink.api.common.serialization.SimpleStringSchema;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.connector.sink2.Sink;
+import org.apache.flink.connector.base.DeliveryGuarantee;
+import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
+import org.apache.flink.connector.kafka.sink.KafkaSink;
+import org.apache.flink.connector.kafka.sink.KafkaSinkBuilder;
+import org.apache.flink.connector.testframe.external.ExternalSystemDataReader;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkV2ExternalContext;
+import org.apache.flink.connector.testframe.external.sink.TestingSinkSettings;
+import org.apache.flink.streaming.api.CheckpointingMode;
+
+import org.apache.commons.lang3.exception.ExceptionUtils;
+import org.apache.kafka.clients.admin.AdminClient;
+import org.apache.kafka.clients.admin.AdminClientConfig;
+import org.apache.kafka.clients.admin.NewTopic;
+import org.apache.kafka.clients.admin.TopicDescription;
+import org.apache.kafka.clients.consumer.ConsumerConfig;
+import org.apache.kafka.clients.producer.ProducerConfig;
+import org.apache.kafka.common.TopicPartition;
+import org.apache.kafka.common.TopicPartitionInfo;
+import org.apache.kafka.common.errors.UnknownTopicOrPartitionException;
+import org.apache.kafka.common.serialization.StringDeserializer;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Properties;
+import java.util.Random;
+import java.util.Set;
+import java.util.concurrent.ThreadLocalRandom;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.flink.streaming.api.CheckpointingMode.EXACTLY_ONCE;
+
+/**
+ * A Kafka external context that will create only one topic and use partitions in that topic as
+ * source splits.
+ */
+public class KafkaSinkExternalContext implements DataStreamSinkV2ExternalContext<String> {
+
+    private static final Logger LOG = LoggerFactory.getLogger(KafkaSinkExternalContext.class);
+
+    private static final String TOPIC_NAME_PREFIX = "kafka-single-topic";
+    private static final int DEFAULT_TIMEOUT = 30;
+    private static final int NUM_RECORDS_UPPER_BOUND = 500;
+    private static final int NUM_RECORDS_LOWER_BOUND = 100;
+
+    protected String bootstrapServers;
+    protected final String topicName;
+
+    private final List<ExternalSystemDataReader<String>> readers = new ArrayList<>();
+
+    protected int numSplits = 0;
+
+    private List<URL> connectorJarPaths;
+
+    protected final AdminClient kafkaAdminClient;
+
+    public KafkaSinkExternalContext(String bootstrapServers, List<URL> connectorJarPaths) {
+        this.bootstrapServers = bootstrapServers;
+        this.connectorJarPaths = connectorJarPaths;
+        this.topicName =
+                TOPIC_NAME_PREFIX + "-" + ThreadLocalRandom.current().nextLong(Long.MAX_VALUE);
+        kafkaAdminClient = createAdminClient();
+    }
+
+    protected void createTopic(String topicName, int numPartitions, short replicationFactor) {
+        LOG.debug(
+                "Creating new Kafka topic {} with {} partitions and {} replicas",
+                topicName,
+                numPartitions,
+                replicationFactor);
+        NewTopic newTopic = new NewTopic(topicName, numPartitions, replicationFactor);
+        try {
+            kafkaAdminClient
+                    .createTopics(Collections.singletonList(newTopic))
+                    .all()
+                    .get(DEFAULT_TIMEOUT, TimeUnit.SECONDS);
+        } catch (Exception e) {
+            throw new RuntimeException(String.format("Cannot create topic '%s'", topicName), e);
+        }
+    }
+
+    protected void deleteTopic(String topicName) {
+        LOG.debug("Deleting Kafka topic {}", topicName);
+        try {
+            kafkaAdminClient
+                    .deleteTopics(Collections.singletonList(topicName))
+                    .all()
+                    .get(DEFAULT_TIMEOUT, TimeUnit.SECONDS);
+        } catch (Exception e) {
+            if (ExceptionUtils.getRootCause(e) instanceof UnknownTopicOrPartitionException) {
+                throw new RuntimeException(String.format("Cannot delete topic '%s'", topicName), e);
+            }
+        }
+    }
+
+    private AdminClient createAdminClient() {
+        Properties config = new Properties();
+        config.setProperty(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
+        return AdminClient.create(config);
+    }
+
+    @Override
+    public Sink<String> createSink(TestingSinkSettings sinkSettings) {
+        if (!topicExists(topicName)) {
+            createTopic(topicName, 4, (short) 1);
+        }
+
+        KafkaSinkBuilder<String> builder = KafkaSink.builder();
+        Properties properties = new Properties();
+        properties.put(ProducerConfig.TRANSACTION_TIMEOUT_CONFIG, 900000);
+        builder.setBootstrapServers(bootstrapServers)
+                .setDeliverGuarantee(toDeliveryGuarantee(sinkSettings.getCheckpointingMode()))
+                .setTransactionalIdPrefix("testingFramework")
+                .setKafkaProducerConfig(properties)
+                .setRecordSerializer(
+                        KafkaRecordSerializationSchema.builder()
+                                .setTopic(topicName)
+                                .setValueSerializationSchema(new SimpleStringSchema())
+                                .build());
+        return builder.build();
+    }
+
+    @Override
+    public ExternalSystemDataReader<String> createSinkDataReader(TestingSinkSettings sinkSettings) {
+        LOG.info("Fetching descriptions for topic: {}", topicName);
+        final Map<String, TopicDescription> topicMetadata =
+                getTopicMetadata(Arrays.asList(topicName));
+
+        Set<TopicPartition> subscribedPartitions = new HashSet<>();
+        for (TopicDescription topic : topicMetadata.values()) {
+            for (TopicPartitionInfo partition : topic.partitions()) {
+                subscribedPartitions.add(new TopicPartition(topic.name(), partition.partition()));
+            }
+        }
+
+        Properties properties = new Properties();
+        properties.setProperty(
+                ConsumerConfig.GROUP_ID_CONFIG,
+                "flink-kafka-test" + subscribedPartitions.hashCode());
+        properties.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
+        properties.setProperty(
+                ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
+                StringDeserializer.class.getCanonicalName());
+        properties.setProperty(
+                ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
+                StringDeserializer.class.getCanonicalName());
+        if (EXACTLY_ONCE.equals(sinkSettings.getCheckpointingMode())) {
+            // default is read_uncommitted
+            properties.setProperty(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");
+        }
+        properties.setProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
+        readers.add(new KafkaDataReader(properties, subscribedPartitions));
+        return readers.get(readers.size() - 1);
+    }
+
+    @Override
+    public List<String> generateTestData(TestingSinkSettings sinkSettings, long seed) {
+        Random random = new Random(seed);
+        List<String> randomStringRecords = new ArrayList<>();
+        int recordNum =
+                random.nextInt(NUM_RECORDS_UPPER_BOUND - NUM_RECORDS_LOWER_BOUND)
+                        + NUM_RECORDS_LOWER_BOUND;
+        for (int i = 0; i < recordNum; i++) {
+            int stringLength = random.nextInt(50) + 1;
+            randomStringRecords.add(generateRandomString(stringLength, random));
+        }
+        return randomStringRecords;
+    }
+
+    private String generateRandomString(int length, Random random) {
+        String alphaNumericString =
+                "ABCDEFGHIJKLMNOPQRSTUVWXYZ" + "abcdefghijklmnopqrstuvwxyz" + "0123456789";
+        StringBuilder sb = new StringBuilder();
+        for (int i = 0; i < length; ++i) {
+            sb.append(alphaNumericString.charAt(random.nextInt(alphaNumericString.length())));
+        }
+        return sb.toString();
+    }
+
+    protected Map<String, TopicDescription> getTopicMetadata(List<String> topics) {
+        try {
+            return kafkaAdminClient.describeTopics(topics).all().get();
+        } catch (Exception e) {
+            throw new RuntimeException(
+                    String.format("Failed to get metadata for topics %s.", topics), e);
+        }
+    }
+
+    private boolean topicExists(String topic) {
+        try {
+            kafkaAdminClient.describeTopics(Arrays.asList(topic)).all().get();
+            return true;
+        } catch (Exception e) {
+            return false;
+        }
+    }
+
+    @Override
+    public void close() {
+        if (numSplits != 0) {
+            deleteTopic(topicName);
+        }
+        readers.forEach(

Review comment:
       Please check for null before cleaning up / releasing the resources in `close()`.
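       A minimal sketch of the null-safe cleanup this asks for (field names taken from the diff above; the exact shape is of course up to the author):

```java
@Override
public void close() {
    if (numSplits != 0) {
        deleteTopic(topicName);
    }
    try {
        // Guard against partially initialized state before releasing the readers.
        if (readers != null) {
            for (ExternalSystemDataReader<String> reader : readers) {
                if (reader != null) {
                    try {
                        reader.close();
                    } catch (Exception e) {
                        throw new RuntimeException("Cannot close reader", e);
                    }
                }
            }
            readers.clear();
        }
    } finally {
        // Always release the admin client, even if closing a reader failed.
        if (kafkaAdminClient != null) {
            kafkaAdminClient.close();
        }
    }
}
```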

##########
File path: flink-end-to-end-tests/flink-end-to-end-tests-common-kafka/src/test/java/org/apache/flink/tests/util/kafka/KafkaSinkE2ECase.java
##########
@@ -0,0 +1,101 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.tests.util.kafka;
+
+import org.apache.flink.connector.kafka.sink.testutils.KafkaSinkExternalContextFactory;
+import org.apache.flink.connector.testframe.environment.TestEnvironment;
+import org.apache.flink.connector.testframe.external.DefaultContainerizedExternalSystem;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkExternalContext;
+import org.apache.flink.connector.testframe.junit.annotations.TestContext;
+import org.apache.flink.connector.testframe.junit.annotations.TestEnv;
+import org.apache.flink.connector.testframe.junit.annotations.TestExternalSystem;
+import org.apache.flink.connector.testframe.junit.annotations.TestSemantics;
+import org.apache.flink.connector.testframe.testsuites.SinkTestSuiteBase;
+import org.apache.flink.streaming.api.CheckpointingMode;
+import org.apache.flink.tests.util.TestUtils;
+import org.apache.flink.tests.util.flink.FlinkContainerTestEnvironment;
+import org.apache.flink.util.DockerImageVersions;
+
+import org.junit.jupiter.api.Disabled;
+import org.junit.jupiter.api.TestTemplate;
+import org.testcontainers.containers.KafkaContainer;
+import org.testcontainers.utility.DockerImageName;
+
+import java.util.Arrays;
+
+/** Kafka sink E2E test based on connector testing framework. */
+@SuppressWarnings("unused")
+public class KafkaSinkE2ECase extends SinkTestSuiteBase<String> {
+    private static final String KAFKA_HOSTNAME = "kafka";
+
+    @TestSemantics
+    CheckpointingMode[] semantics =
+            new CheckpointingMode[] {
+                CheckpointingMode.EXACTLY_ONCE, CheckpointingMode.AT_LEAST_ONCE
+            };
+
+    // Defines TestEnvironment
+    @TestEnv FlinkContainerTestEnvironment flink = new FlinkContainerTestEnvironment(1, 6);
+
+    // Defines ConnectorExternalSystem
+    @TestExternalSystem
+    DefaultContainerizedExternalSystem<KafkaContainer> kafka =
+            DefaultContainerizedExternalSystem.builder()
+                    .fromContainer(
+                            new KafkaContainer(DockerImageName.parse(DockerImageVersions.KAFKA))
+                                    .withNetworkAliases(KAFKA_HOSTNAME))
+                    .bindWithFlinkContainer(flink.getFlinkContainers().getJobManager())
+                    .build();
+
+    // Defines 2 External context Factories, so test cases will be invoked twice using these two
+    // kinds of external contexts.
+    @SuppressWarnings("unused")

Review comment:
       Redundant annotation; it duplicates the class-level `@SuppressWarnings("unused")` on line 43.

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/external/sink/DataStreamSinkV1ExternalContext.java
##########
@@ -0,0 +1,39 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.external.sink;
+
+import org.apache.flink.annotation.Experimental;
+import org.apache.flink.api.connector.sink.Sink;
+
+/**
+ * External context for DataStream sinks which is sink version 1.

Review comment:
       ```suggestion
    * External context for DataStream sinks whose version is V1.
   ```

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/external/sink/DataStreamSinkV2ExternalContext.java
##########
@@ -0,0 +1,38 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.external.sink;
+
+import org.apache.flink.annotation.Experimental;
+import org.apache.flink.api.connector.sink2.Sink;
+
+/**
+ * External context for DataStream sinks which is sink version 2.

Review comment:
       ```suggestion
    * External context for DataStream sinks whose version is V2.
   ```

##########
File path: flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/connector/kafka/sink/testutils/KafkaSinkExternalContext.java
##########
@@ -0,0 +1,271 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.kafka.sink.testutils;
+
+import org.apache.flink.api.common.serialization.SimpleStringSchema;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.connector.sink2.Sink;
+import org.apache.flink.connector.base.DeliveryGuarantee;
+import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
+import org.apache.flink.connector.kafka.sink.KafkaSink;
+import org.apache.flink.connector.kafka.sink.KafkaSinkBuilder;
+import org.apache.flink.connector.testframe.external.ExternalSystemDataReader;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkV2ExternalContext;
+import org.apache.flink.connector.testframe.external.sink.TestingSinkSettings;
+import org.apache.flink.streaming.api.CheckpointingMode;
+
+import org.apache.commons.lang3.exception.ExceptionUtils;
+import org.apache.kafka.clients.admin.AdminClient;
+import org.apache.kafka.clients.admin.AdminClientConfig;
+import org.apache.kafka.clients.admin.NewTopic;
+import org.apache.kafka.clients.admin.TopicDescription;
+import org.apache.kafka.clients.consumer.ConsumerConfig;
+import org.apache.kafka.clients.producer.ProducerConfig;
+import org.apache.kafka.common.TopicPartition;
+import org.apache.kafka.common.TopicPartitionInfo;
+import org.apache.kafka.common.errors.UnknownTopicOrPartitionException;
+import org.apache.kafka.common.serialization.StringDeserializer;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Properties;
+import java.util.Random;
+import java.util.Set;
+import java.util.concurrent.ThreadLocalRandom;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.flink.streaming.api.CheckpointingMode.EXACTLY_ONCE;
+
+/**
+ * A Kafka external context that will create only one topic and use partitions in that topic as
+ * source splits.
+ */
+public class KafkaSinkExternalContext implements DataStreamSinkV2ExternalContext<String> {
+
+    private static final Logger LOG = LoggerFactory.getLogger(KafkaSinkExternalContext.class);
+
+    private static final String TOPIC_NAME_PREFIX = "kafka-single-topic";
+    private static final int DEFAULT_TIMEOUT = 30;
+    private static final int NUM_RECORDS_UPPER_BOUND = 500;
+    private static final int NUM_RECORDS_LOWER_BOUND = 100;
+
+    protected String bootstrapServers;
+    protected final String topicName;
+
+    private final List<ExternalSystemDataReader<String>> readers = new ArrayList<>();
+
+    protected int numSplits = 0;
+
+    private List<URL> connectorJarPaths;
+
+    protected final AdminClient kafkaAdminClient;
+
+    public KafkaSinkExternalContext(String bootstrapServers, List<URL> connectorJarPaths) {
+        this.bootstrapServers = bootstrapServers;
+        this.connectorJarPaths = connectorJarPaths;
+        this.topicName =
+                TOPIC_NAME_PREFIX + "-" + ThreadLocalRandom.current().nextLong(Long.MAX_VALUE);
+        kafkaAdminClient = createAdminClient();
+    }
+
+    protected void createTopic(String topicName, int numPartitions, short replicationFactor) {
+        LOG.debug(
+                "Creating new Kafka topic {} with {} partitions and {} replicas",
+                topicName,
+                numPartitions,
+                replicationFactor);
+        NewTopic newTopic = new NewTopic(topicName, numPartitions, replicationFactor);
+        try {
+            kafkaAdminClient
+                    .createTopics(Collections.singletonList(newTopic))
+                    .all()
+                    .get(DEFAULT_TIMEOUT, TimeUnit.SECONDS);
+        } catch (Exception e) {
+            throw new RuntimeException(String.format("Cannot create topic '%s'", topicName), e);
+        }
+    }
+
+    protected void deleteTopic(String topicName) {
+        LOG.debug("Deleting Kafka topic {}", topicName);
+        try {
+            kafkaAdminClient
+                    .deleteTopics(Collections.singletonList(topicName))
+                    .all()
+                    .get(DEFAULT_TIMEOUT, TimeUnit.SECONDS);
+        } catch (Exception e) {
+            if (ExceptionUtils.getRootCause(e) instanceof UnknownTopicOrPartitionException) {
+                throw new RuntimeException(String.format("Cannot delete topic '%s'", topicName), e);
+            }
+        }
+    }
+
+    private AdminClient createAdminClient() {
+        Properties config = new Properties();
+        config.setProperty(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
+        return AdminClient.create(config);
+    }
+
+    @Override
+    public Sink<String> createSink(TestingSinkSettings sinkSettings) {
+        if (!topicExists(topicName)) {
+            createTopic(topicName, 4, (short) 1);
+        }
+
+        KafkaSinkBuilder<String> builder = KafkaSink.builder();
+        Properties properties = new Properties();
+        properties.put(ProducerConfig.TRANSACTION_TIMEOUT_CONFIG, 900000);
+        builder.setBootstrapServers(bootstrapServers)
+                .setDeliverGuarantee(toDeliveryGuarantee(sinkSettings.getCheckpointingMode()))
+                .setTransactionalIdPrefix("testingFramework")
+                .setKafkaProducerConfig(properties)
+                .setRecordSerializer(
+                        KafkaRecordSerializationSchema.builder()
+                                .setTopic(topicName)
+                                .setValueSerializationSchema(new SimpleStringSchema())
+                                .build());
+        return builder.build();
+    }
+
+    @Override
+    public ExternalSystemDataReader<String> createSinkDataReader(TestingSinkSettings sinkSettings) {
+        LOG.info("Fetching descriptions for topic: {}", topicName);
+        final Map<String, TopicDescription> topicMetadata =
+                getTopicMetadata(Arrays.asList(topicName));
+
+        Set<TopicPartition> subscribedPartitions = new HashSet<>();
+        for (TopicDescription topic : topicMetadata.values()) {
+            for (TopicPartitionInfo partition : topic.partitions()) {
+                subscribedPartitions.add(new TopicPartition(topic.name(), partition.partition()));
+            }
+        }
+
+        Properties properties = new Properties();
+        properties.setProperty(
+                ConsumerConfig.GROUP_ID_CONFIG,
+                "flink-kafka-test" + subscribedPartitions.hashCode());
+        properties.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
+        properties.setProperty(
+                ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
+                StringDeserializer.class.getCanonicalName());
+        properties.setProperty(
+                ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
+                StringDeserializer.class.getCanonicalName());
+        if (EXACTLY_ONCE.equals(sinkSettings.getCheckpointingMode())) {
+            // default is read_uncommitted
+            properties.setProperty(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");
+        }
+        properties.setProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
+        readers.add(new KafkaDataReader(properties, subscribedPartitions));
+        return readers.get(readers.size() - 1);
+    }
+
+    @Override
+    public List<String> generateTestData(TestingSinkSettings sinkSettings, long seed) {
+        Random random = new Random(seed);
+        List<String> randomStringRecords = new ArrayList<>();
+        int recordNum =
+                random.nextInt(NUM_RECORDS_UPPER_BOUND - NUM_RECORDS_LOWER_BOUND)
+                        + NUM_RECORDS_LOWER_BOUND;
+        for (int i = 0; i < recordNum; i++) {
+            int stringLength = random.nextInt(50) + 1;
+            randomStringRecords.add(generateRandomString(stringLength, random));
+        }
+        return randomStringRecords;
+    }
+
+    private String generateRandomString(int length, Random random) {
+        String alphaNumericString =
+                "ABCDEFGHIJKLMNOPQRSTUVWXYZ" + "abcdefghijklmnopqrstuvwxyz" + "0123456789";
+        StringBuilder sb = new StringBuilder();
+        for (int i = 0; i < length; ++i) {
+            sb.append(alphaNumericString.charAt(random.nextInt(alphaNumericString.length())));
+        }
+        return sb.toString();

Review comment:
       The test data alphabet `alphaNumericString` and the magic number `50` could be extracted into constants, each with a one-line comment.
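       A rough sketch of the suggested constant extraction (the constant names here are only placeholders):

```java
/** Alphabet used to build the random test records. */
private static final String ALPHA_NUMERIC_STRING =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZ" + "abcdefghijklmnopqrstuvwxyz" + "0123456789";

/** Maximum length of a generated test record (records are 1..MAX_RECORD_LENGTH chars long). */
private static final int MAX_RECORD_LENGTH = 50;

// in generateTestData(...):
//     int stringLength = random.nextInt(MAX_RECORD_LENGTH) + 1;

private String generateRandomString(int length, Random random) {
    StringBuilder sb = new StringBuilder(length);
    for (int i = 0; i < length; ++i) {
        sb.append(ALPHA_NUMERIC_STRING.charAt(random.nextInt(ALPHA_NUMERIC_STRING.length())));
    }
    return sb.toString();
}
```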

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/source/FromElementsSource.java
##########
@@ -0,0 +1,100 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.source;
+
+import org.apache.flink.api.connector.source.Boundedness;
+import org.apache.flink.api.connector.source.Source;
+import org.apache.flink.api.connector.source.SourceReader;
+import org.apache.flink.api.connector.source.SourceReaderContext;
+import org.apache.flink.api.connector.source.SplitEnumerator;
+import org.apache.flink.api.connector.source.SplitEnumeratorContext;
+import org.apache.flink.connector.testframe.source.enumerator.NoOpEnumState;
+import org.apache.flink.connector.testframe.source.enumerator.NoOpEnumStateSerializer;
+import org.apache.flink.connector.testframe.source.enumerator.NoOpEnumerator;
+import org.apache.flink.connector.testframe.source.split.FromElementsSplit;
+import org.apache.flink.connector.testframe.source.split.FromElementsSplitSerializer;
+import org.apache.flink.core.io.SimpleVersionedSerializer;
+
+import java.util.List;
+
+/**
+ * The source reads data from a list and stops reading at the fixed position. The source will wait
+ * until the checkpoint or savepoint triggers.
+ *
+ * <p>Note that this source must be of parallelism 1.
+ */
+public class FromElementsSource<OUT> implements Source<OUT, FromElementsSplit, NoOpEnumState> {
+    // Boundedness

Review comment:
       Redundant comment; it adds no information beyond the field name, please remove it.

##########
File path: flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/connector/kafka/sink/testutils/KafkaSinkExternalContext.java
##########
@@ -0,0 +1,271 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.kafka.sink.testutils;
+
+import org.apache.flink.api.common.serialization.SimpleStringSchema;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.connector.sink2.Sink;
+import org.apache.flink.connector.base.DeliveryGuarantee;
+import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
+import org.apache.flink.connector.kafka.sink.KafkaSink;
+import org.apache.flink.connector.kafka.sink.KafkaSinkBuilder;
+import org.apache.flink.connector.testframe.external.ExternalSystemDataReader;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkV2ExternalContext;
+import org.apache.flink.connector.testframe.external.sink.TestingSinkSettings;
+import org.apache.flink.streaming.api.CheckpointingMode;
+
+import org.apache.commons.lang3.exception.ExceptionUtils;
+import org.apache.kafka.clients.admin.AdminClient;
+import org.apache.kafka.clients.admin.AdminClientConfig;
+import org.apache.kafka.clients.admin.NewTopic;
+import org.apache.kafka.clients.admin.TopicDescription;
+import org.apache.kafka.clients.consumer.ConsumerConfig;
+import org.apache.kafka.clients.producer.ProducerConfig;
+import org.apache.kafka.common.TopicPartition;
+import org.apache.kafka.common.TopicPartitionInfo;
+import org.apache.kafka.common.errors.UnknownTopicOrPartitionException;
+import org.apache.kafka.common.serialization.StringDeserializer;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Properties;
+import java.util.Random;
+import java.util.Set;
+import java.util.concurrent.ThreadLocalRandom;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.flink.streaming.api.CheckpointingMode.EXACTLY_ONCE;
+
+/**
+ * A Kafka external context that will create only one topic and use partitions in that topic as
+ * source splits.

Review comment:
       `as source splits.` ? This Javadoc looks copied from the source external context; it should be reworded for the sink context.
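       One possible rewording, assuming the intent of the class as shown in this diff (a single topic that the sink under test writes to and that is read back for verification):

```java
/**
 * A Kafka external context that creates a single topic, lets the sink under test write to it, and
 * reads the records back from its partitions for verification.
 */
```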

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/source/FromElementsSource.java
##########
@@ -0,0 +1,100 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.source;
+
+import org.apache.flink.api.connector.source.Boundedness;
+import org.apache.flink.api.connector.source.Source;
+import org.apache.flink.api.connector.source.SourceReader;
+import org.apache.flink.api.connector.source.SourceReaderContext;
+import org.apache.flink.api.connector.source.SplitEnumerator;
+import org.apache.flink.api.connector.source.SplitEnumeratorContext;
+import org.apache.flink.connector.testframe.source.enumerator.NoOpEnumState;
+import org.apache.flink.connector.testframe.source.enumerator.NoOpEnumStateSerializer;
+import org.apache.flink.connector.testframe.source.enumerator.NoOpEnumerator;
+import org.apache.flink.connector.testframe.source.split.FromElementsSplit;
+import org.apache.flink.connector.testframe.source.split.FromElementsSplitSerializer;
+import org.apache.flink.core.io.SimpleVersionedSerializer;
+
+import java.util.List;
+
+/**
+ * The source reads data from a list and stops reading at the fixed position. The source will wait
+ * until the checkpoint or savepoint triggers.
+ *
+ * <p>Note that this source must be of parallelism 1.

Review comment:
       ```suggestion
    * <p>Note: The parallelism of this source must be 1.
   ```

##########
File path: flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/connector/kafka/sink/testutils/KafkaSinkExternalContextFactory.java
##########
@@ -0,0 +1,53 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.kafka.sink.testutils;
+
+import org.apache.flink.connector.testframe.external.ExternalContextFactory;
+
+import org.testcontainers.containers.KafkaContainer;
+
+import java.net.URL;
+import java.util.List;
+import java.util.stream.Collectors;
+
+/** Kafka table sink external context factory. */
+public class KafkaSinkExternalContextFactory
+        implements ExternalContextFactory<KafkaSinkExternalContext> {
+
+    private final KafkaContainer kafkaContainer;
+    private final List<URL> connectorJars;
+
+    public KafkaSinkExternalContextFactory(KafkaContainer kafkaContainer, List<URL> connectorJars) {
+        this.kafkaContainer = kafkaContainer;
+        this.connectorJars = connectorJars;
+    }
+
+    protected String getBootstrapServer() {

Review comment:
       No need for `protected` here; `private` is sufficient.

##########
File path: flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/connector/kafka/sink/testutils/KafkaSinkExternalContext.java
##########
@@ -0,0 +1,271 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.kafka.sink.testutils;
+
+import org.apache.flink.api.common.serialization.SimpleStringSchema;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.connector.sink2.Sink;
+import org.apache.flink.connector.base.DeliveryGuarantee;
+import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
+import org.apache.flink.connector.kafka.sink.KafkaSink;
+import org.apache.flink.connector.kafka.sink.KafkaSinkBuilder;
+import org.apache.flink.connector.testframe.external.ExternalSystemDataReader;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkV2ExternalContext;
+import org.apache.flink.connector.testframe.external.sink.TestingSinkSettings;
+import org.apache.flink.streaming.api.CheckpointingMode;
+
+import org.apache.commons.lang3.exception.ExceptionUtils;
+import org.apache.kafka.clients.admin.AdminClient;
+import org.apache.kafka.clients.admin.AdminClientConfig;
+import org.apache.kafka.clients.admin.NewTopic;
+import org.apache.kafka.clients.admin.TopicDescription;
+import org.apache.kafka.clients.consumer.ConsumerConfig;
+import org.apache.kafka.clients.producer.ProducerConfig;
+import org.apache.kafka.common.TopicPartition;
+import org.apache.kafka.common.TopicPartitionInfo;
+import org.apache.kafka.common.errors.UnknownTopicOrPartitionException;
+import org.apache.kafka.common.serialization.StringDeserializer;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Properties;
+import java.util.Random;
+import java.util.Set;
+import java.util.concurrent.ThreadLocalRandom;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.flink.streaming.api.CheckpointingMode.EXACTLY_ONCE;
+
+/**
+ * A Kafka external context that will create only one topic and use partitions in that topic as
+ * source splits.
+ */
+public class KafkaSinkExternalContext implements DataStreamSinkV2ExternalContext<String> {
+
+    private static final Logger LOG = LoggerFactory.getLogger(KafkaSinkExternalContext.class);
+
+    private static final String TOPIC_NAME_PREFIX = "kafka-single-topic";
+    private static final int DEFAULT_TIMEOUT = 30;
+    private static final int NUM_RECORDS_UPPER_BOUND = 500;
+    private static final int NUM_RECORDS_LOWER_BOUND = 100;
+
+    protected String bootstrapServers;
+    protected final String topicName;
+
+    private final List<ExternalSystemDataReader<String>> readers = new ArrayList<>();
+
+    protected int numSplits = 0;
+
+    private List<URL> connectorJarPaths;
+
+    protected final AdminClient kafkaAdminClient;
+
+    public KafkaSinkExternalContext(String bootstrapServers, List<URL> connectorJarPaths) {
+        this.bootstrapServers = bootstrapServers;
+        this.connectorJarPaths = connectorJarPaths;
+        this.topicName =
+                TOPIC_NAME_PREFIX + "-" + ThreadLocalRandom.current().nextLong(Long.MAX_VALUE);
+        kafkaAdminClient = createAdminClient();
+    }
+
+    protected void createTopic(String topicName, int numPartitions, short replicationFactor) {
+        LOG.debug(
+                "Creating new Kafka topic {} with {} partitions and {} replicas",
+                topicName,
+                numPartitions,
+                replicationFactor);
+        NewTopic newTopic = new NewTopic(topicName, numPartitions, replicationFactor);
+        try {
+            kafkaAdminClient
+                    .createTopics(Collections.singletonList(newTopic))
+                    .all()
+                    .get(DEFAULT_TIMEOUT, TimeUnit.SECONDS);
+        } catch (Exception e) {
+            throw new RuntimeException(String.format("Cannot create topic '%s'", topicName), e);
+        }
+    }
+
+    protected void deleteTopic(String topicName) {
+        LOG.debug("Deleting Kafka topic {}", topicName);
+        try {
+            kafkaAdminClient
+                    .deleteTopics(Collections.singletonList(topicName))
+                    .all()
+                    .get(DEFAULT_TIMEOUT, TimeUnit.SECONDS);
+        } catch (Exception e) {
+            if (ExceptionUtils.getRootCause(e) instanceof UnknownTopicOrPartitionException) {
+                throw new RuntimeException(String.format("Cannot delete topic '%s'", topicName), e);
+            }
+        }
+    }
+
+    private AdminClient createAdminClient() {
+        Properties config = new Properties();
+        config.setProperty(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
+        return AdminClient.create(config);
+    }
+
+    @Override
+    public Sink<String> createSink(TestingSinkSettings sinkSettings) {
+        if (!topicExists(topicName)) {
+            createTopic(topicName, 4, (short) 1);
+        }
+
+        KafkaSinkBuilder<String> builder = KafkaSink.builder();
+        Properties properties = new Properties();
+        properties.put(ProducerConfig.TRANSACTION_TIMEOUT_CONFIG, 900000);
+        builder.setBootstrapServers(bootstrapServers)
+                .setDeliverGuarantee(toDeliveryGuarantee(sinkSettings.getCheckpointingMode()))
+                .setTransactionalIdPrefix("testingFramework")
+                .setKafkaProducerConfig(properties)
+                .setRecordSerializer(
+                        KafkaRecordSerializationSchema.builder()
+                                .setTopic(topicName)
+                                .setValueSerializationSchema(new SimpleStringSchema())
+                                .build());
+        return builder.build();
+    }
+
+    @Override
+    public ExternalSystemDataReader<String> createSinkDataReader(TestingSinkSettings sinkSettings) {
+        LOG.info("Fetching descriptions for topic: {}", topicName);
+        final Map<String, TopicDescription> topicMetadata =
+                getTopicMetadata(Arrays.asList(topicName));
+
+        Set<TopicPartition> subscribedPartitions = new HashSet<>();
+        for (TopicDescription topic : topicMetadata.values()) {
+            for (TopicPartitionInfo partition : topic.partitions()) {
+                subscribedPartitions.add(new TopicPartition(topic.name(), partition.partition()));
+            }
+        }
+
+        Properties properties = new Properties();
+        properties.setProperty(
+                ConsumerConfig.GROUP_ID_CONFIG,
+                "flink-kafka-test" + subscribedPartitions.hashCode());
+        properties.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
+        properties.setProperty(
+                ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
+                StringDeserializer.class.getCanonicalName());
+        properties.setProperty(
+                ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
+                StringDeserializer.class.getCanonicalName());
+        if (EXACTLY_ONCE.equals(sinkSettings.getCheckpointingMode())) {
+            // default is read_uncommitted
+            properties.setProperty(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");
+        }
+        properties.setProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
+        readers.add(new KafkaDataReader(properties, subscribedPartitions));
+        return readers.get(readers.size() - 1);
+    }
+
+    @Override
+    public List<String> generateTestData(TestingSinkSettings sinkSettings, long seed) {
+        Random random = new Random(seed);
+        List<String> randomStringRecords = new ArrayList<>();
+        int recordNum =
+                random.nextInt(NUM_RECORDS_UPPER_BOUND - NUM_RECORDS_LOWER_BOUND)
+                        + NUM_RECORDS_LOWER_BOUND;
+        for (int i = 0; i < recordNum; i++) {
+            int stringLength = random.nextInt(50) + 1;
+            randomStringRecords.add(generateRandomString(stringLength, random));
+        }
+        return randomStringRecords;
+    }
+
+    private String generateRandomString(int length, Random random) {
+        String alphaNumericString =
+                "ABCDEFGHIJKLMNOPQRSTUVWXYZ" + "abcdefghijklmnopqrstuvwxyz" + "0123456789";
+        StringBuilder sb = new StringBuilder();
+        for (int i = 0; i < length; ++i) {
+            sb.append(alphaNumericString.charAt(random.nextInt(alphaNumericString.length())));
+        }
+        return sb.toString();
+    }
+
+    protected Map<String, TopicDescription> getTopicMetadata(List<String> topics) {
+        try {
+            return kafkaAdminClient.describeTopics(topics).all().get();
+        } catch (Exception e) {
+            throw new RuntimeException(
+                    String.format("Failed to get metadata for topics %s.", topics), e);
+        }
+    }
+
+    private boolean topicExists(String topic) {
+        try {
+            kafkaAdminClient.describeTopics(Arrays.asList(topic)).all().get();
+            return true;
+        } catch (Exception e) {
+            return false;
+        }
+    }
+
+    @Override
+    public void close() {
+        if (numSplits != 0) {
+            deleteTopic(topicName);
+        }
+        readers.forEach(
+                reader -> {
+                    try {
+                        reader.close();
+                    } catch (Exception e) {
+                        kafkaAdminClient.close();
+                        throw new RuntimeException("Cannot close split writer", e);
+                    }
+                });
+        readers.clear();
+        kafkaAdminClient.close();
+    }
+
+    @Override
+    public String toString() {
+        return "Single-topic Kafka";
+    }
+
+    @Override
+    public List<URL> getConnectorJarPaths() {
+        return connectorJarPaths;
+    }
+
+    @Override
+    public TypeInformation<String> getProducedType() {
+        return TypeInformation.of(String.class);
+    }
+
+    private DeliveryGuarantee toDeliveryGuarantee(CheckpointingMode checkpointingMode) {
+        switch (checkpointingMode) {
+            case EXACTLY_ONCE:
+                return DeliveryGuarantee.EXACTLY_ONCE;
+            case AT_LEAST_ONCE:
+                return DeliveryGuarantee.AT_LEAST_ONCE;
+            default:
+                throw new IllegalArgumentException(
+                        "Only exactly-once and al-least-once checkpointing mode are supported");

Review comment:
       ```suggestion
                   throw new IllegalArgumentException(
                           String.format("Only exactly-once and al-least-once checkpointing mode are supported, but actual is %s.", checkpointingMode));
   ```

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/source/FromElementsSource.java
##########
@@ -0,0 +1,100 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.source;
+
+import org.apache.flink.api.connector.source.Boundedness;
+import org.apache.flink.api.connector.source.Source;
+import org.apache.flink.api.connector.source.SourceReader;
+import org.apache.flink.api.connector.source.SourceReaderContext;
+import org.apache.flink.api.connector.source.SplitEnumerator;
+import org.apache.flink.api.connector.source.SplitEnumeratorContext;
+import org.apache.flink.connector.testframe.source.enumerator.NoOpEnumState;
+import org.apache.flink.connector.testframe.source.enumerator.NoOpEnumStateSerializer;
+import org.apache.flink.connector.testframe.source.enumerator.NoOpEnumerator;
+import org.apache.flink.connector.testframe.source.split.FromElementsSplit;
+import org.apache.flink.connector.testframe.source.split.FromElementsSplitSerializer;
+import org.apache.flink.core.io.SimpleVersionedSerializer;
+
+import java.util.List;
+
+/**
+ * The source reads data from a list and stops reading at the fixed position. The source will wait
+ * until the checkpoint or savepoint triggers.
+ *
+ * <p>Note that this source must be of parallelism 1.
+ */
+public class FromElementsSource<OUT> implements Source<OUT, FromElementsSplit, NoOpEnumState> {
+    // Boundedness
+    private Boundedness boundedness;
+
+    private List<OUT> elements;
+
+    private Integer successNum;
+
+    public FromElementsSource(List<OUT> elements) {
+        this.elements = elements;
+    }
+
+    public FromElementsSource(Boundedness boundedness, List<OUT> elements, Integer successNum) {

Review comment:
       How about renaming `successNum` to `emittedElementsNum`?

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/source/FromElementsSourceReader.java
##########
@@ -0,0 +1,101 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.source;
+
+import org.apache.flink.api.connector.source.ReaderOutput;
+import org.apache.flink.api.connector.source.SourceReader;
+import org.apache.flink.api.connector.source.SourceReaderContext;
+import org.apache.flink.connector.testframe.source.split.FromElementsSplit;
+import org.apache.flink.core.io.InputStatus;
+import org.apache.flink.metrics.Counter;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.Arrays;
+import java.util.List;
+import java.util.concurrent.CompletableFuture;
+
+import static org.apache.flink.core.io.InputStatus.MORE_AVAILABLE;
+
+/** The source reader for collections of elements. */
+public class FromElementsSourceReader<T> implements SourceReader<T, FromElementsSplit> {
+    private static final Logger LOG = LoggerFactory.getLogger(FromElementsSourceReader.class);
+
+    protected volatile int numElementsEmitted;
+    protected volatile boolean isRunning = true;
+
+    /** The context of this source reader. */
+    protected SourceReaderContext context;
+
+    protected List<T> elements;
+    protected Counter numRecordInCounter;
+
+    public FromElementsSourceReader(List<T> elements, SourceReaderContext context) {
+        this.context = context;
+        this.numElementsEmitted = 0;
+        this.elements = elements;
+        this.numRecordInCounter = context.metricGroup().getIOMetricGroup().getNumRecordsInCounter();
+    }
+
+    @Override
+    public void start() {}
+
+    @Override
+    public InputStatus pollNext(ReaderOutput<T> output) throws Exception {
+        if (isRunning && numElementsEmitted < elements.size()) {
+            output.collect(elements.get(numElementsEmitted));
+            numElementsEmitted++;
+            numRecordInCounter.inc();
+            return MORE_AVAILABLE;
+        }
+        return InputStatus.END_OF_INPUT;
+    }
+
+    @Override
+    public List<FromElementsSplit> snapshotState(long checkpointId) {
+        return Arrays.asList(new FromElementsSplit(numElementsEmitted));
+    }
+
+    @Override
+    public CompletableFuture<Void> isAvailable() {
+        CompletableFuture<Void> future = new CompletableFuture<>();
+        future.complete(null);
+        return future;

Review comment:
       This could simply be `return CompletableFuture.completedFuture(null);`

##########
File path: flink-end-to-end-tests/flink-end-to-end-tests-common-kafka/src/test/java/org/apache/flink/tests/util/kafka/KafkaSinkE2ECase.java
##########
@@ -0,0 +1,101 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.tests.util.kafka;
+
+import org.apache.flink.connector.kafka.sink.testutils.KafkaSinkExternalContextFactory;
+import org.apache.flink.connector.testframe.environment.TestEnvironment;
+import org.apache.flink.connector.testframe.external.DefaultContainerizedExternalSystem;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkExternalContext;
+import org.apache.flink.connector.testframe.junit.annotations.TestContext;
+import org.apache.flink.connector.testframe.junit.annotations.TestEnv;
+import org.apache.flink.connector.testframe.junit.annotations.TestExternalSystem;
+import org.apache.flink.connector.testframe.junit.annotations.TestSemantics;
+import org.apache.flink.connector.testframe.testsuites.SinkTestSuiteBase;
+import org.apache.flink.streaming.api.CheckpointingMode;
+import org.apache.flink.tests.util.TestUtils;
+import org.apache.flink.tests.util.flink.FlinkContainerTestEnvironment;
+import org.apache.flink.util.DockerImageVersions;
+
+import org.junit.jupiter.api.Disabled;
+import org.junit.jupiter.api.TestTemplate;
+import org.testcontainers.containers.KafkaContainer;
+import org.testcontainers.utility.DockerImageName;
+
+import java.util.Arrays;
+
+/** Kafka sink E2E test based on connector testing framework. */
+@SuppressWarnings("unused")
+public class KafkaSinkE2ECase extends SinkTestSuiteBase<String> {
+    private static final String KAFKA_HOSTNAME = "kafka";
+
+    @TestSemantics
+    CheckpointingMode[] semantics =
+            new CheckpointingMode[] {
+                CheckpointingMode.EXACTLY_ONCE, CheckpointingMode.AT_LEAST_ONCE
+            };
+
+    // Defines TestEnvironment
+    @TestEnv FlinkContainerTestEnvironment flink = new FlinkContainerTestEnvironment(1, 6);
+
+    // Defines ConnectorExternalSystem
+    @TestExternalSystem
+    DefaultContainerizedExternalSystem<KafkaContainer> kafka =
+            DefaultContainerizedExternalSystem.builder()
+                    .fromContainer(
+                            new KafkaContainer(DockerImageName.parse(DockerImageVersions.KAFKA))
+                                    .withNetworkAliases(KAFKA_HOSTNAME))
+                    .bindWithFlinkContainer(flink.getFlinkContainers().getJobManager())
+                    .build();
+
+    // Defines 2 External context Factories, so test cases will be invoked twice using these two
+    // kinds of external contexts.
+    @SuppressWarnings("unused")
+    @TestContext
+    KafkaSinkExternalContextFactory contextFactory =
+            new KafkaSinkExternalContextFactory(
+                    kafka.getContainer(),
+                    Arrays.asList(
+                            TestUtils.getResource("kafka-connector.jar")
+                                    .toAbsolutePath()
+                                    .toUri()
+                                    .toURL(),
+                            TestUtils.getResource("kafka-clients.jar")
+                                    .toAbsolutePath()
+                                    .toUri()
+                                    .toURL(),
+                            TestUtils.getResource("flink-connector-testing.jar")
+                                    .toAbsolutePath()
+                                    .toUri()
+                                    .toURL()));
+
+    public KafkaSinkE2ECase() throws Exception {}
+
+    /**
+     * Disable the metric test because of the metric
+     * bug(https://issues.apache.org/jira/browse/FLINK-26126).
+     */
+    @Disabled

Review comment:
       ```suggestion
         @Disabled("Skip metric test until FLINK-26126 fixed")
   ```

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/source/FromElementsSourceReader.java
##########
@@ -0,0 +1,101 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.source;
+
+import org.apache.flink.api.connector.source.ReaderOutput;
+import org.apache.flink.api.connector.source.SourceReader;
+import org.apache.flink.api.connector.source.SourceReaderContext;
+import org.apache.flink.connector.testframe.source.split.FromElementsSplit;
+import org.apache.flink.core.io.InputStatus;
+import org.apache.flink.metrics.Counter;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.Arrays;
+import java.util.List;
+import java.util.concurrent.CompletableFuture;
+
+import static org.apache.flink.core.io.InputStatus.MORE_AVAILABLE;
+
+/** The source reader for collections of elements. */
+public class FromElementsSourceReader<T> implements SourceReader<T, FromElementsSplit> {
+    private static final Logger LOG = LoggerFactory.getLogger(FromElementsSourceReader.class);
+
+    protected volatile int numElementsEmitted;
+    protected volatile boolean isRunning = true;
+
+    /** The context of this source reader. */
+    protected SourceReaderContext context;
+
+    protected List<T> elements;
+    protected Counter numRecordInCounter;
+
+    public FromElementsSourceReader(List<T> elements, SourceReaderContext context) {
+        this.context = context;
+        this.numElementsEmitted = 0;
+        this.elements = elements;
+        this.numRecordInCounter = context.metricGroup().getIOMetricGroup().getNumRecordsInCounter();
+    }
+
+    @Override
+    public void start() {}
+
+    @Override
+    public InputStatus pollNext(ReaderOutput<T> output) throws Exception {
+        if (isRunning && numElementsEmitted < elements.size()) {
+            output.collect(elements.get(numElementsEmitted));
+            numElementsEmitted++;
+            numRecordInCounter.inc();
+            return MORE_AVAILABLE;
+        }
+        return InputStatus.END_OF_INPUT;
+    }
+
+    @Override
+    public List<FromElementsSplit> snapshotState(long checkpointId) {
+        return Arrays.asList(new FromElementsSplit(numElementsEmitted));
+    }
+
+    @Override
+    public CompletableFuture<Void> isAvailable() {
+        CompletableFuture<Void> future = new CompletableFuture<>();
+        future.complete(null);
+        return future;
+    }
+
+    @Override
+    public void addSplits(List<FromElementsSplit> splits) {
+        numElementsEmitted = splits.get(0).getEmitNum();
+        LOG.info("ListSourceReader restores from {}.", numElementsEmitted);
+    }
+
+    @Override
+    public void notifyNoMoreSplits() {}
+
+    @Override
+    public void close() throws Exception {
+        isRunning = false;
+    }
+
+    @Override
+    public void notifyCheckpointComplete(long checkpointId) throws Exception {
+        LOG.info("{} checkpoint finished.", checkpointId);

Review comment:
       minor:
   ```suggestion
           LOG.info("checkpoint {} finished.", checkpointId);
   ```

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/source/FromElementsSourceReader.java
##########
@@ -0,0 +1,101 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.source;
+
+import org.apache.flink.api.connector.source.ReaderOutput;
+import org.apache.flink.api.connector.source.SourceReader;
+import org.apache.flink.api.connector.source.SourceReaderContext;
+import org.apache.flink.connector.testframe.source.split.FromElementsSplit;
+import org.apache.flink.core.io.InputStatus;
+import org.apache.flink.metrics.Counter;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.Arrays;
+import java.util.List;
+import java.util.concurrent.CompletableFuture;
+
+import static org.apache.flink.core.io.InputStatus.MORE_AVAILABLE;
+
+/** The source reader for collections of elements. */
+public class FromElementsSourceReader<T> implements SourceReader<T, FromElementsSplit> {
+    private static final Logger LOG = LoggerFactory.getLogger(FromElementsSourceReader.class);
+
+    protected volatile int numElementsEmitted;
+    protected volatile boolean isRunning = true;
+
+    /** The context of this source reader. */
+    protected SourceReaderContext context;
+
+    protected List<T> elements;
+    protected Counter numRecordInCounter;
+
+    public FromElementsSourceReader(List<T> elements, SourceReaderContext context) {
+        this.context = context;
+        this.numElementsEmitted = 0;
+        this.elements = elements;
+        this.numRecordInCounter = context.metricGroup().getIOMetricGroup().getNumRecordsInCounter();
+    }
+
+    @Override
+    public void start() {}
+
+    @Override
+    public InputStatus pollNext(ReaderOutput<T> output) throws Exception {
+        if (isRunning && numElementsEmitted < elements.size()) {
+            output.collect(elements.get(numElementsEmitted));
+            numElementsEmitted++;
+            numRecordInCounter.inc();
+            return MORE_AVAILABLE;
+        }
+        return InputStatus.END_OF_INPUT;
+    }
+
+    @Override
+    public List<FromElementsSplit> snapshotState(long checkpointId) {
+        return Arrays.asList(new FromElementsSplit(numElementsEmitted));
+    }
+
+    @Override
+    public CompletableFuture<Void> isAvailable() {
+        CompletableFuture<Void> future = new CompletableFuture<>();
+        future.complete(null);
+        return future;
+    }
+
+    @Override
+    public void addSplits(List<FromElementsSplit> splits) {
+        numElementsEmitted = splits.get(0).getEmitNum();
+        LOG.info("ListSourceReader restores from {}.", numElementsEmitted);

Review comment:
       ```suggestion
           LOG.info("FromElementsSourceReader restores from {}.", numElementsEmitted);
   ```

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/source/split/FromElementsSplitSerializer.java
##########
@@ -0,0 +1,54 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.source.split;
+
+import org.apache.flink.core.io.SimpleVersionedSerializer;
+
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.io.DataInputStream;
+import java.io.DataOutputStream;
+import java.io.IOException;
+
+/** The split serializer for the list source. */

Review comment:
       ```suggestion
   /** The split serializer for the {@link FromElementsSource}. */
   ```
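
The hunk above is cut off right after this Javadoc, so as a point of reference, here is a minimal sketch of what a `SimpleVersionedSerializer` for such a single-counter split usually boils down to. It assumes `FromElementsSplit` has an `int` constructor and a `getEmitNum()` getter (both appear elsewhere in this PR) and lives in the same package; it is illustrative, not the PR's actual implementation:

```java
import org.apache.flink.core.io.SimpleVersionedSerializer;

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

/** Illustrative sketch: the only split state is the emit counter. */
public class FromElementsSplitSerializerSketch
        implements SimpleVersionedSerializer<FromElementsSplit> {

    @Override
    public int getVersion() {
        return 0;
    }

    @Override
    public byte[] serialize(FromElementsSplit split) throws IOException {
        try (ByteArrayOutputStream baos = new ByteArrayOutputStream();
                DataOutputStream out = new DataOutputStream(baos)) {
            // Persist only the number of elements emitted so far.
            out.writeInt(split.getEmitNum());
            out.flush();
            return baos.toByteArray();
        }
    }

    @Override
    public FromElementsSplit deserialize(int version, byte[] serialized) throws IOException {
        try (ByteArrayInputStream bais = new ByteArrayInputStream(serialized);
                DataInputStream in = new DataInputStream(bais)) {
            return new FromElementsSplit(in.readInt());
        }
    }
}
```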

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/source/FromElementsSourceReader.java
##########
@@ -0,0 +1,101 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.source;
+
+import org.apache.flink.api.connector.source.ReaderOutput;
+import org.apache.flink.api.connector.source.SourceReader;
+import org.apache.flink.api.connector.source.SourceReaderContext;
+import org.apache.flink.connector.testframe.source.split.FromElementsSplit;
+import org.apache.flink.core.io.InputStatus;
+import org.apache.flink.metrics.Counter;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.Arrays;
+import java.util.List;
+import java.util.concurrent.CompletableFuture;
+
+import static org.apache.flink.core.io.InputStatus.MORE_AVAILABLE;
+
+/** The source reader for collections of elements. */
+public class FromElementsSourceReader<T> implements SourceReader<T, FromElementsSplit> {
+    private static final Logger LOG = LoggerFactory.getLogger(FromElementsSourceReader.class);
+
+    protected volatile int numElementsEmitted;
+    protected volatile boolean isRunning = true;
+
+    /** The context of this source reader. */
+    protected SourceReaderContext context;
+
+    protected List<T> elements;
+    protected Counter numRecordInCounter;
+
+    public FromElementsSourceReader(List<T> elements, SourceReaderContext context) {
+        this.context = context;
+        this.numElementsEmitted = 0;
+        this.elements = elements;
+        this.numRecordInCounter = context.metricGroup().getIOMetricGroup().getNumRecordsInCounter();
+    }
+
+    @Override
+    public void start() {}
+
+    @Override
+    public InputStatus pollNext(ReaderOutput<T> output) throws Exception {
+        if (isRunning && numElementsEmitted < elements.size()) {

Review comment:
       `emittedNum` ?

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/source/FromElementsSourceReaderWithSuccessNum.java
##########
@@ -0,0 +1,89 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.source;
+
+import org.apache.flink.api.connector.source.Boundedness;
+import org.apache.flink.api.connector.source.ReaderOutput;
+import org.apache.flink.api.connector.source.SourceReaderContext;
+import org.apache.flink.connector.testframe.source.split.FromElementsSplit;
+import org.apache.flink.core.io.InputStatus;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.Arrays;
+import java.util.List;
+
+import static org.apache.flink.core.io.InputStatus.MORE_AVAILABLE;
+
+/** The reader reads data from a list. */

Review comment:
       Add a note explaining how this reader differs from `FromElementsSourceReader`. How about the name `FromLimitedElementsSourceReader` with an `int limitedNum` field?
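
To make the suggestion concrete, here is a purely hypothetical sketch of the proposed `FromLimitedElementsSourceReader`, built on top of the `FromElementsSourceReader` shown earlier; it only spells out the reviewer's `limitedNum` idea and is not the code in the PR:

```java
import org.apache.flink.api.connector.source.ReaderOutput;
import org.apache.flink.api.connector.source.SourceReaderContext;
import org.apache.flink.core.io.InputStatus;

import java.util.List;

/** Hypothetical sketch: emits at most {@code limitedNum} elements, then signals end of input. */
public class FromLimitedElementsSourceReader<T> extends FromElementsSourceReader<T> {

    private final int limitedNum;

    public FromLimitedElementsSourceReader(
            int limitedNum, List<T> elements, SourceReaderContext context) {
        super(elements, context);
        this.limitedNum = limitedNum;
    }

    @Override
    public InputStatus pollNext(ReaderOutput<T> output) throws Exception {
        // Stop once the configured limit is reached, even if more elements are available.
        if (numElementsEmitted >= limitedNum) {
            return InputStatus.END_OF_INPUT;
        }
        return super.pollNext(output);
    }
}
```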

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/testsuites/SinkTestSuiteBase.java
##########
@@ -0,0 +1,629 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.testsuites;
+
+import org.apache.flink.annotation.Experimental;
+import org.apache.flink.api.common.JobID;
+import org.apache.flink.api.common.JobStatus;
+import org.apache.flink.api.common.eventtime.WatermarkStrategy;
+import org.apache.flink.api.common.restartstrategy.RestartStrategies;
+import org.apache.flink.api.common.time.Deadline;
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.api.connector.sink2.Sink;
+import org.apache.flink.api.connector.source.Boundedness;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.connector.testframe.environment.TestEnvironment;
+import org.apache.flink.connector.testframe.environment.TestEnvironmentSettings;
+import org.apache.flink.connector.testframe.external.ExternalSystemDataReader;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkExternalContext;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkV1ExternalContext;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkV2ExternalContext;
+import org.apache.flink.connector.testframe.external.sink.TestingSinkSettings;
+import org.apache.flink.connector.testframe.junit.extensions.ConnectorTestingExtension;
+import org.apache.flink.connector.testframe.junit.extensions.TestCaseInvocationContextProvider;
+import org.apache.flink.connector.testframe.source.FromElementsSource;
+import org.apache.flink.connector.testframe.utils.CollectIteratorAssertions;
+import org.apache.flink.connector.testframe.utils.MetricQuerier;
+import org.apache.flink.core.execution.JobClient;
+import org.apache.flink.core.execution.SavepointFormatType;
+import org.apache.flink.runtime.metrics.MetricNames;
+import org.apache.flink.streaming.api.CheckpointingMode;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.datastream.DataStreamSink;
+import org.apache.flink.streaming.api.datastream.DataStreamSource;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.operators.collect.CollectResultIterator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperatorFactory;
+import org.apache.flink.streaming.api.operators.collect.CollectStreamSink;
+import org.apache.flink.util.TestLoggerExtension;
+
+import org.apache.commons.math3.util.Precision;
+import org.junit.jupiter.api.DisplayName;
+import org.junit.jupiter.api.TestInstance;
+import org.junit.jupiter.api.TestTemplate;
+import org.junit.jupiter.api.extension.ExtendWith;
+import org.opentest4j.TestAbortedException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.time.Duration;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+import java.util.UUID;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.ThreadLocalRandom;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+import static org.apache.flink.connector.testframe.utils.ConnectorTestConstants.DEFAULT_COLLECT_DATA_TIMEOUT;
+import static org.apache.flink.connector.testframe.utils.ConnectorTestConstants.DEFAULT_JOB_STATUS_CHANGE_TIMEOUT;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.terminateJob;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForAllTaskRunning;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForJobStatus;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition;
+import static org.apache.flink.streaming.api.CheckpointingMode.AT_LEAST_ONCE;
+import static org.apache.flink.streaming.api.CheckpointingMode.EXACTLY_ONCE;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+import static org.assertj.core.api.AssertionsForClassTypes.assertThat;
+
+/**
+ * Base class for sink test suite.
+ *
+ * <p>All cases should have well-descriptive JavaDoc, including:
+ *
+ * <ul>
+ *   <li>What's the purpose of this case
+ *   <li>Simple description of how this case works
+ *   <li>Condition to fulfill in order to pass this case
+ *   <li>Requirement of running this case
+ * </ul>
+ */
+@ExtendWith({
+    ConnectorTestingExtension.class,
+    TestLoggerExtension.class,
+    TestCaseInvocationContextProvider.class
+})
+@TestInstance(TestInstance.Lifecycle.PER_CLASS)
+@Experimental
+public abstract class SinkTestSuiteBase<T extends Comparable<T>> {
+    private static final Logger LOG = LoggerFactory.getLogger(SinkTestSuiteBase.class);
+
+    // ----------------------------- Basic test cases ---------------------------------
+
+    /**
+     * Test connector data stream sink.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write them to this sink by the Flink Job.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink need to be equals to
+     * the generated test data. And the records in the sink will be compared to the test data by the
+     * different semantic. There's no requirement for record order.
+     */
+    @TestTemplate
+    @DisplayName("Test data stream sink")
+    public void testBasicSink(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        TestingSinkSettings sinkSettings = getTestingSinkSettings(semantic);
+        final List<T> testRecords = generateTestData(sinkSettings, externalContext);
+
+        // Build and execute Flink job
+        StreamExecutionEnvironment execEnv =
+                testEnv.createExecutionEnvironment(
+                        TestEnvironmentSettings.builder()
+                                .setConnectorJarPaths(externalContext.getConnectorJarPaths())
+                                .build());
+        execEnv.enableCheckpointing(50);
+        DataStream<T> dataStream =
+                execEnv.fromCollection(testRecords)
+                        .name("sourceInSinkTest")
+                        .setParallelism(1)
+                        .returns(externalContext.getProducedType());
+        tryCreateSink(dataStream, externalContext, sinkSettings)
+                .setParallelism(1)
+                .name("sinkInSinkTest");
+        final JobClient jobClient = execEnv.executeAsync("DataStream Sink Test");
+
+        waitForJobStatus(
+                jobClient,
+                Collections.singletonList(JobStatus.FINISHED),
+                Deadline.fromNow(DEFAULT_JOB_STATUS_CHANGE_TIMEOUT));
+
+        // Check test result
+        checkResultWithSemantic(
+                externalContext.createSinkDataReader(sinkSettings), testRecords, semantic);
+    }
+
+    /**
+     * Test connector source restart from a completed savepoint with the same parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write a half part of them to this sink by the Flink Job with parallelism 2 at first. Then
+     * stop the job, restart the same job from the completed savepoint. After the job has been
+     * running, write the other part to the sink and compare the result.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink need to be equals to
+     * the generated test data. And the records in the sink will be compared to the test data by the
+     * different semantic. There's no requirement for record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting from a savepoint")
+    public void testStartFromSavepoint(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 2, 2);
+    }
+
+    /**
+     * Test connector source restart from a completed savepoint with a higher parallelism.

Review comment:
       ```suggestion
        * Test connector sink restart from a completed savepoint with a higher parallelism.
   ```

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/testsuites/SinkTestSuiteBase.java
##########
@@ -0,0 +1,629 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.testsuites;
+
+import org.apache.flink.annotation.Experimental;
+import org.apache.flink.api.common.JobID;
+import org.apache.flink.api.common.JobStatus;
+import org.apache.flink.api.common.eventtime.WatermarkStrategy;
+import org.apache.flink.api.common.restartstrategy.RestartStrategies;
+import org.apache.flink.api.common.time.Deadline;
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.api.connector.sink2.Sink;
+import org.apache.flink.api.connector.source.Boundedness;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.connector.testframe.environment.TestEnvironment;
+import org.apache.flink.connector.testframe.environment.TestEnvironmentSettings;
+import org.apache.flink.connector.testframe.external.ExternalSystemDataReader;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkExternalContext;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkV1ExternalContext;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkV2ExternalContext;
+import org.apache.flink.connector.testframe.external.sink.TestingSinkSettings;
+import org.apache.flink.connector.testframe.junit.extensions.ConnectorTestingExtension;
+import org.apache.flink.connector.testframe.junit.extensions.TestCaseInvocationContextProvider;
+import org.apache.flink.connector.testframe.source.FromElementsSource;
+import org.apache.flink.connector.testframe.utils.CollectIteratorAssertions;
+import org.apache.flink.connector.testframe.utils.MetricQuerier;
+import org.apache.flink.core.execution.JobClient;
+import org.apache.flink.core.execution.SavepointFormatType;
+import org.apache.flink.runtime.metrics.MetricNames;
+import org.apache.flink.streaming.api.CheckpointingMode;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.datastream.DataStreamSink;
+import org.apache.flink.streaming.api.datastream.DataStreamSource;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.operators.collect.CollectResultIterator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperatorFactory;
+import org.apache.flink.streaming.api.operators.collect.CollectStreamSink;
+import org.apache.flink.util.TestLoggerExtension;
+
+import org.apache.commons.math3.util.Precision;
+import org.junit.jupiter.api.DisplayName;
+import org.junit.jupiter.api.TestInstance;
+import org.junit.jupiter.api.TestTemplate;
+import org.junit.jupiter.api.extension.ExtendWith;
+import org.opentest4j.TestAbortedException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.time.Duration;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+import java.util.UUID;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.ThreadLocalRandom;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+import static org.apache.flink.connector.testframe.utils.ConnectorTestConstants.DEFAULT_COLLECT_DATA_TIMEOUT;
+import static org.apache.flink.connector.testframe.utils.ConnectorTestConstants.DEFAULT_JOB_STATUS_CHANGE_TIMEOUT;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.terminateJob;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForAllTaskRunning;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForJobStatus;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition;
+import static org.apache.flink.streaming.api.CheckpointingMode.AT_LEAST_ONCE;
+import static org.apache.flink.streaming.api.CheckpointingMode.EXACTLY_ONCE;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+import static org.assertj.core.api.AssertionsForClassTypes.assertThat;
+
+/**
+ * Base class for sink test suite.
+ *
+ * <p>All cases should have well-descriptive JavaDoc, including:
+ *
+ * <ul>
+ *   <li>What's the purpose of this case
+ *   <li>Simple description of how this case works
+ *   <li>Condition to fulfill in order to pass this case
+ *   <li>Requirement of running this case
+ * </ul>
+ */
+@ExtendWith({
+    ConnectorTestingExtension.class,
+    TestLoggerExtension.class,
+    TestCaseInvocationContextProvider.class
+})
+@TestInstance(TestInstance.Lifecycle.PER_CLASS)
+@Experimental
+public abstract class SinkTestSuiteBase<T extends Comparable<T>> {
+    private static final Logger LOG = LoggerFactory.getLogger(SinkTestSuiteBase.class);
+
+    // ----------------------------- Basic test cases ---------------------------------
+
+    /**
+     * Test connector data stream sink.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write them to this sink by the Flink Job.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink need to be equals to
+     * the generated test data. And the records in the sink will be compared to the test data by the
+     * different semantic. There's no requirement for record order.
+     */

Review comment:
       ```suggestion
       /**
        * Test DataStream connector sink.
        *
        * <p>The following tests will create a sink in the external system, generate a collection of test data
        * and write them to this sink by the Flink Job.
        *
        * <p>In order to pass these tests, the number of records produced by Flink need to be equals to
        * the generated test data. And the records in the sink will be compared to the test data by the
        * different semantics. There's no requirement for records order.
        */
   ```

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/testsuites/SinkTestSuiteBase.java
##########
@@ -0,0 +1,629 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.testsuites;
+
+import org.apache.flink.annotation.Experimental;
+import org.apache.flink.api.common.JobID;
+import org.apache.flink.api.common.JobStatus;
+import org.apache.flink.api.common.eventtime.WatermarkStrategy;
+import org.apache.flink.api.common.restartstrategy.RestartStrategies;
+import org.apache.flink.api.common.time.Deadline;
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.api.connector.sink2.Sink;
+import org.apache.flink.api.connector.source.Boundedness;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.connector.testframe.environment.TestEnvironment;
+import org.apache.flink.connector.testframe.environment.TestEnvironmentSettings;
+import org.apache.flink.connector.testframe.external.ExternalSystemDataReader;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkExternalContext;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkV1ExternalContext;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkV2ExternalContext;
+import org.apache.flink.connector.testframe.external.sink.TestingSinkSettings;
+import org.apache.flink.connector.testframe.junit.extensions.ConnectorTestingExtension;
+import org.apache.flink.connector.testframe.junit.extensions.TestCaseInvocationContextProvider;
+import org.apache.flink.connector.testframe.source.FromElementsSource;
+import org.apache.flink.connector.testframe.utils.CollectIteratorAssertions;
+import org.apache.flink.connector.testframe.utils.MetricQuerier;
+import org.apache.flink.core.execution.JobClient;
+import org.apache.flink.core.execution.SavepointFormatType;
+import org.apache.flink.runtime.metrics.MetricNames;
+import org.apache.flink.streaming.api.CheckpointingMode;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.datastream.DataStreamSink;
+import org.apache.flink.streaming.api.datastream.DataStreamSource;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.operators.collect.CollectResultIterator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperatorFactory;
+import org.apache.flink.streaming.api.operators.collect.CollectStreamSink;
+import org.apache.flink.util.TestLoggerExtension;
+
+import org.apache.commons.math3.util.Precision;
+import org.junit.jupiter.api.DisplayName;
+import org.junit.jupiter.api.TestInstance;
+import org.junit.jupiter.api.TestTemplate;
+import org.junit.jupiter.api.extension.ExtendWith;
+import org.opentest4j.TestAbortedException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.time.Duration;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+import java.util.UUID;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.ThreadLocalRandom;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+import static org.apache.flink.connector.testframe.utils.ConnectorTestConstants.DEFAULT_COLLECT_DATA_TIMEOUT;
+import static org.apache.flink.connector.testframe.utils.ConnectorTestConstants.DEFAULT_JOB_STATUS_CHANGE_TIMEOUT;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.terminateJob;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForAllTaskRunning;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForJobStatus;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition;
+import static org.apache.flink.streaming.api.CheckpointingMode.AT_LEAST_ONCE;
+import static org.apache.flink.streaming.api.CheckpointingMode.EXACTLY_ONCE;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+import static org.assertj.core.api.AssertionsForClassTypes.assertThat;
+
+/**
+ * Base class for sink test suite.
+ *
+ * <p>All cases should have well-descriptive JavaDoc, including:
+ *
+ * <ul>
+ *   <li>What's the purpose of this case
+ *   <li>Simple description of how this case works
+ *   <li>Condition to fulfill in order to pass this case
+ *   <li>Requirement of running this case
+ * </ul>
+ */
+@ExtendWith({
+    ConnectorTestingExtension.class,
+    TestLoggerExtension.class,
+    TestCaseInvocationContextProvider.class
+})
+@TestInstance(TestInstance.Lifecycle.PER_CLASS)
+@Experimental
+public abstract class SinkTestSuiteBase<T extends Comparable<T>> {
+    private static final Logger LOG = LoggerFactory.getLogger(SinkTestSuiteBase.class);
+
+    // ----------------------------- Basic test cases ---------------------------------
+
+    /**
+     * Test connector data stream sink.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write them to this sink by the Flink Job.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink need to be equals to
+     * the generated test data. And the records in the sink will be compared to the test data by the
+     * different semantic. There's no requirement for record order.
+     */
+    @TestTemplate
+    @DisplayName("Test data stream sink")
+    public void testBasicSink(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        TestingSinkSettings sinkSettings = getTestingSinkSettings(semantic);
+        final List<T> testRecords = generateTestData(sinkSettings, externalContext);
+
+        // Build and execute Flink job
+        StreamExecutionEnvironment execEnv =
+                testEnv.createExecutionEnvironment(
+                        TestEnvironmentSettings.builder()
+                                .setConnectorJarPaths(externalContext.getConnectorJarPaths())
+                                .build());
+        execEnv.enableCheckpointing(50);
+        DataStream<T> dataStream =
+                execEnv.fromCollection(testRecords)
+                        .name("sourceInSinkTest")
+                        .setParallelism(1)
+                        .returns(externalContext.getProducedType());
+        tryCreateSink(dataStream, externalContext, sinkSettings)
+                .setParallelism(1)
+                .name("sinkInSinkTest");
+        final JobClient jobClient = execEnv.executeAsync("DataStream Sink Test");
+
+        waitForJobStatus(
+                jobClient,
+                Collections.singletonList(JobStatus.FINISHED),
+                Deadline.fromNow(DEFAULT_JOB_STATUS_CHANGE_TIMEOUT));
+
+        // Check test result
+        checkResultWithSemantic(
+                externalContext.createSinkDataReader(sinkSettings), testRecords, semantic);
+    }
+
+    /**
+     * Test connector source restart from a completed savepoint with the same parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write a half part of them to this sink by the Flink Job with parallelism 2 at first. Then
+     * stop the job, restart the same job from the completed savepoint. After the job has been
+     * running, write the other part to the sink and compare the result.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink need to be equals to
+     * the generated test data. And the records in the sink will be compared to the test data by the
+     * different semantic. There's no requirement for record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting from a savepoint")
+    public void testStartFromSavepoint(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 2, 2);
+    }
+
+    /**
+     * Test connector source restart from a completed savepoint with a higher parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write a half part of them to this sink by the Flink Job with parallelism 2 at first. Then
+     * stop the job, restart the same job from the completed savepoint with a higher parallelism 4.
+     * After the job has been running, write the other part to the sink and compare the result.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink need to be equals to
+     * the generated test data. And the records in the sink will be compared to the test data by the
+     * different semantic. There's no requirement for record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting with a higher parallelism")
+    public void testScaleUp(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 2, 4);
+    }
+
+    /**
+     * Test connector source restart from a completed savepoint with a lower parallelism.

Review comment:
       ```suggestion
        * Test connector sink restart from a completed savepoint with a lower parallelism.
   ```

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/external/sink/DataStreamSinkV1ExternalContext.java
##########
@@ -0,0 +1,39 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.external.sink;
+
+import org.apache.flink.annotation.Experimental;
+import org.apache.flink.api.connector.sink.Sink;
+
+/**
+ * External context for DataStream sinks which is sink version 1.

Review comment:
       BTW, do we have any tests for the v1 sink?

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/testsuites/SinkTestSuiteBase.java
##########
@@ -0,0 +1,629 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.testsuites;
+
+import org.apache.flink.annotation.Experimental;
+import org.apache.flink.api.common.JobID;
+import org.apache.flink.api.common.JobStatus;
+import org.apache.flink.api.common.eventtime.WatermarkStrategy;
+import org.apache.flink.api.common.restartstrategy.RestartStrategies;
+import org.apache.flink.api.common.time.Deadline;
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.api.connector.sink2.Sink;
+import org.apache.flink.api.connector.source.Boundedness;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.connector.testframe.environment.TestEnvironment;
+import org.apache.flink.connector.testframe.environment.TestEnvironmentSettings;
+import org.apache.flink.connector.testframe.external.ExternalSystemDataReader;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkExternalContext;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkV1ExternalContext;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkV2ExternalContext;
+import org.apache.flink.connector.testframe.external.sink.TestingSinkSettings;
+import org.apache.flink.connector.testframe.junit.extensions.ConnectorTestingExtension;
+import org.apache.flink.connector.testframe.junit.extensions.TestCaseInvocationContextProvider;
+import org.apache.flink.connector.testframe.source.FromElementsSource;
+import org.apache.flink.connector.testframe.utils.CollectIteratorAssertions;
+import org.apache.flink.connector.testframe.utils.MetricQuerier;
+import org.apache.flink.core.execution.JobClient;
+import org.apache.flink.core.execution.SavepointFormatType;
+import org.apache.flink.runtime.metrics.MetricNames;
+import org.apache.flink.streaming.api.CheckpointingMode;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.datastream.DataStreamSink;
+import org.apache.flink.streaming.api.datastream.DataStreamSource;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.operators.collect.CollectResultIterator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperatorFactory;
+import org.apache.flink.streaming.api.operators.collect.CollectStreamSink;
+import org.apache.flink.util.TestLoggerExtension;
+
+import org.apache.commons.math3.util.Precision;
+import org.junit.jupiter.api.DisplayName;
+import org.junit.jupiter.api.TestInstance;
+import org.junit.jupiter.api.TestTemplate;
+import org.junit.jupiter.api.extension.ExtendWith;
+import org.opentest4j.TestAbortedException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.time.Duration;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+import java.util.UUID;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.ThreadLocalRandom;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+import static org.apache.flink.connector.testframe.utils.ConnectorTestConstants.DEFAULT_COLLECT_DATA_TIMEOUT;
+import static org.apache.flink.connector.testframe.utils.ConnectorTestConstants.DEFAULT_JOB_STATUS_CHANGE_TIMEOUT;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.terminateJob;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForAllTaskRunning;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForJobStatus;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition;
+import static org.apache.flink.streaming.api.CheckpointingMode.AT_LEAST_ONCE;
+import static org.apache.flink.streaming.api.CheckpointingMode.EXACTLY_ONCE;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+import static org.assertj.core.api.AssertionsForClassTypes.assertThat;
+
+/**
+ * Base class for sink test suite.
+ *
+ * <p>All cases should have well-descriptive JavaDoc, including:
+ *
+ * <ul>
+ *   <li>What's the purpose of this case
+ *   <li>Simple description of how this case works
+ *   <li>Condition to fulfill in order to pass this case
+ *   <li>Requirement of running this case
+ * </ul>
+ */
+@ExtendWith({
+    ConnectorTestingExtension.class,
+    TestLoggerExtension.class,
+    TestCaseInvocationContextProvider.class
+})
+@TestInstance(TestInstance.Lifecycle.PER_CLASS)
+@Experimental
+public abstract class SinkTestSuiteBase<T extends Comparable<T>> {
+    private static final Logger LOG = LoggerFactory.getLogger(SinkTestSuiteBase.class);
+
+    // ----------------------------- Basic test cases ---------------------------------
+
+    /**
+     * Test connector data stream sink.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write them to this sink by the Flink Job.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink need to be equals to
+     * the generated test data. And the records in the sink will be compared to the test data by the
+     * different semantic. There's no requirement for record order.
+     */
+    @TestTemplate
+    @DisplayName("Test data stream sink")
+    public void testBasicSink(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        TestingSinkSettings sinkSettings = getTestingSinkSettings(semantic);
+        final List<T> testRecords = generateTestData(sinkSettings, externalContext);
+
+        // Build and execute Flink job
+        StreamExecutionEnvironment execEnv =
+                testEnv.createExecutionEnvironment(
+                        TestEnvironmentSettings.builder()
+                                .setConnectorJarPaths(externalContext.getConnectorJarPaths())
+                                .build());
+        execEnv.enableCheckpointing(50);
+        DataStream<T> dataStream =
+                execEnv.fromCollection(testRecords)
+                        .name("sourceInSinkTest")
+                        .setParallelism(1)
+                        .returns(externalContext.getProducedType());
+        tryCreateSink(dataStream, externalContext, sinkSettings)
+                .setParallelism(1)
+                .name("sinkInSinkTest");
+        final JobClient jobClient = execEnv.executeAsync("DataStream Sink Test");
+
+        waitForJobStatus(
+                jobClient,
+                Collections.singletonList(JobStatus.FINISHED),
+                Deadline.fromNow(DEFAULT_JOB_STATUS_CHANGE_TIMEOUT));
+
+        // Check test result
+        checkResultWithSemantic(
+                externalContext.createSinkDataReader(sinkSettings), testRecords, semantic);
+    }
+
+    /**
+     * Test connector source restart from a completed savepoint with the same parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write a half part of them to this sink by the Flink Job with parallelism 2 at first. Then
+     * stop the job, restart the same job from the completed savepoint. After the job has been
+     * running, write the other part to the sink and compare the result.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink need to be equals to
+     * the generated test data. And the records in the sink will be compared to the test data by the
+     * different semantic. There's no requirement for record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting from a savepoint")
+    public void testStartFromSavepoint(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 2, 2);
+    }
+
+    /**
+     * Test connector source restart from a completed savepoint with a higher parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write a half part of them to this sink by the Flink Job with parallelism 2 at first. Then
+     * stop the job, restart the same job from the completed savepoint with a higher parallelism 4.
+     * After the job has been running, write the other part to the sink and compare the result.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink need to be equals to
+     * the generated test data. And the records in the sink will be compared to the test data by the
+     * different semantic. There's no requirement for record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting with a higher parallelism")
+    public void testScaleUp(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 2, 4);
+    }
+
+    /**
+     * Test connector source restart from a completed savepoint with a lower parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write a half part of them to this sink by the Flink Job with parallelism 4 at first. Then
+     * stop the job, restart the same job from the completed savepoint with a lower parallelism 2.
+     * After the job has been running, write the other part to the sink and compare the result.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink need to be equals to
+     * the generated test data. And the records in the sink will be compared to the test data by the
+     * different semantic. There's no requirement for record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting with a lower parallelism")
+    public void testScaleDown(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 4, 2);
+    }
+
+    private void restartFromSavepoint(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic,
+            final int beforeParallelism,
+            final int afterParallelism)
+            throws Exception {
+        // Step 1: Preparation
+        TestingSinkSettings sinkSettings = getTestingSinkSettings(semantic);
+        final StreamExecutionEnvironment execEnv =
+                testEnv.createExecutionEnvironment(
+                        TestEnvironmentSettings.builder()
+                                .setConnectorJarPaths(externalContext.getConnectorJarPaths())
+                                .build());
+        execEnv.setRestartStrategy(RestartStrategies.noRestart());
+
+        // Step 2: Generate test data
+        final List<T> testRecords = generateTestData(sinkSettings, externalContext);
+
+        // Step 3: Build and execute Flink job
+        int numBeforeSuccess = testRecords.size() / 2;
+        DataStreamSource<T> source =
+                execEnv.fromSource(
+                                new FromElementsSource<>(
+                                        Boundedness.CONTINUOUS_UNBOUNDED,
+                                        testRecords,
+                                        numBeforeSuccess),
+                                WatermarkStrategy.noWatermarks(),
+                                "beforeRestartSource")
+                        .setParallelism(1);
+
+        DataStream<T> dataStream = source.returns(externalContext.getProducedType());
+        tryCreateSink(dataStream, externalContext, sinkSettings)
+                .name("Sink restart test")
+                .setParallelism(beforeParallelism);
+
+        /**
+         * The job should stop after consuming a specified number of records. In order to know
+         * when the specified number of records have been consumed, a collect sink needs to be
+         * watched.
+         */
+        CollectResultIterator<T> iterator = addCollectSink(source);
+        final JobClient jobClient = execEnv.executeAsync("Restart Test");
+        iterator.setJobClient(jobClient);
+
+        // Step 4: Wait for the expected result and stop Flink job with a savepoint
+        String savepointDir;

Review comment:
       ```suggestion
           String savepointPath;
   ```
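
The hunk ends at this declaration, so as a point of reference, stopping the job with a savepoint via `JobClient#stopWithSavepoint` typically looks roughly like the sketch below; the savepoint directory and timeout are assumptions for illustration, not values taken from the PR:

```java
// Illustrative only: stop the job with a canonical-format savepoint and remember its path.
String savepointPath =
        jobClient
                .stopWithSavepoint(
                        false, // do not advance to end of event time before stopping
                        "/tmp/savepoints", // illustrative savepoint directory
                        SavepointFormatType.CANONICAL)
                .get(30, TimeUnit.SECONDS);
LOG.info("Job stopped with savepoint at {}.", savepointPath);
```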

##########
File path: flink-test-utils-parent/flink-connector-test-utils/src/main/java/org/apache/flink/connector/testframe/testsuites/SinkTestSuiteBase.java
##########
@@ -0,0 +1,629 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.testframe.testsuites;
+
+import org.apache.flink.annotation.Experimental;
+import org.apache.flink.api.common.JobID;
+import org.apache.flink.api.common.JobStatus;
+import org.apache.flink.api.common.eventtime.WatermarkStrategy;
+import org.apache.flink.api.common.restartstrategy.RestartStrategies;
+import org.apache.flink.api.common.time.Deadline;
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.api.connector.sink2.Sink;
+import org.apache.flink.api.connector.source.Boundedness;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.connector.testframe.environment.TestEnvironment;
+import org.apache.flink.connector.testframe.environment.TestEnvironmentSettings;
+import org.apache.flink.connector.testframe.external.ExternalSystemDataReader;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkExternalContext;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkV1ExternalContext;
+import org.apache.flink.connector.testframe.external.sink.DataStreamSinkV2ExternalContext;
+import org.apache.flink.connector.testframe.external.sink.TestingSinkSettings;
+import org.apache.flink.connector.testframe.junit.extensions.ConnectorTestingExtension;
+import org.apache.flink.connector.testframe.junit.extensions.TestCaseInvocationContextProvider;
+import org.apache.flink.connector.testframe.source.FromElementsSource;
+import org.apache.flink.connector.testframe.utils.CollectIteratorAssertions;
+import org.apache.flink.connector.testframe.utils.MetricQuerier;
+import org.apache.flink.core.execution.JobClient;
+import org.apache.flink.core.execution.SavepointFormatType;
+import org.apache.flink.runtime.metrics.MetricNames;
+import org.apache.flink.streaming.api.CheckpointingMode;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.datastream.DataStreamSink;
+import org.apache.flink.streaming.api.datastream.DataStreamSource;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.operators.collect.CollectResultIterator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperator;
+import org.apache.flink.streaming.api.operators.collect.CollectSinkOperatorFactory;
+import org.apache.flink.streaming.api.operators.collect.CollectStreamSink;
+import org.apache.flink.util.TestLoggerExtension;
+
+import org.apache.commons.math3.util.Precision;
+import org.junit.jupiter.api.DisplayName;
+import org.junit.jupiter.api.TestInstance;
+import org.junit.jupiter.api.TestTemplate;
+import org.junit.jupiter.api.extension.ExtendWith;
+import org.opentest4j.TestAbortedException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.time.Duration;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+import java.util.UUID;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.ThreadLocalRandom;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+import static org.apache.flink.connector.testframe.utils.ConnectorTestConstants.DEFAULT_COLLECT_DATA_TIMEOUT;
+import static org.apache.flink.connector.testframe.utils.ConnectorTestConstants.DEFAULT_JOB_STATUS_CHANGE_TIMEOUT;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.terminateJob;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForAllTaskRunning;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitForJobStatus;
+import static org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition;
+import static org.apache.flink.streaming.api.CheckpointingMode.AT_LEAST_ONCE;
+import static org.apache.flink.streaming.api.CheckpointingMode.EXACTLY_ONCE;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+import static org.assertj.core.api.AssertionsForClassTypes.assertThat;
+
+/**
+ * Base class for the sink test suite.
+ *
+ * <p>All cases should have descriptive JavaDoc, including:
+ *
+ * <ul>
+ *   <li>The purpose of the case
+ *   <li>A brief description of how the case works
+ *   <li>The conditions that must be fulfilled in order to pass the case
+ *   <li>The requirements for running the case
+ * </ul>
+ */
+@ExtendWith({
+    ConnectorTestingExtension.class,
+    TestLoggerExtension.class,
+    TestCaseInvocationContextProvider.class
+})
+@TestInstance(TestInstance.Lifecycle.PER_CLASS)
+@Experimental
+public abstract class SinkTestSuiteBase<T extends Comparable<T>> {
+    private static final Logger LOG = LoggerFactory.getLogger(SinkTestSuiteBase.class);
+
+    // ----------------------------- Basic test cases ---------------------------------
+
+    /**
+     * Test connector data stream sink.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and write it to this sink with a Flink job.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink needs to equal the
+     * number of generated test records, and the records in the sink are compared to the test data
+     * according to the configured delivery semantic. There's no requirement on record order.
+     */
+    @TestTemplate
+    @DisplayName("Test data stream sink")
+    public void testBasicSink(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        TestingSinkSettings sinkSettings = getTestingSinkSettings(semantic);
+        final List<T> testRecords = generateTestData(sinkSettings, externalContext);
+
+        // Build and execute Flink job
+        StreamExecutionEnvironment execEnv =
+                testEnv.createExecutionEnvironment(
+                        TestEnvironmentSettings.builder()
+                                .setConnectorJarPaths(externalContext.getConnectorJarPaths())
+                                .build());
+        execEnv.enableCheckpointing(50);
+        DataStream<T> dataStream =
+                execEnv.fromCollection(testRecords)
+                        .name("sourceInSinkTest")
+                        .setParallelism(1)
+                        .returns(externalContext.getProducedType());
+        tryCreateSink(dataStream, externalContext, sinkSettings)
+                .setParallelism(1)
+                .name("sinkInSinkTest");
+        final JobClient jobClient = execEnv.executeAsync("DataStream Sink Test");
+
+        waitForJobStatus(
+                jobClient,
+                Collections.singletonList(JobStatus.FINISHED),
+                Deadline.fromNow(DEFAULT_JOB_STATUS_CHANGE_TIMEOUT));
+
+        // Check test result
+        checkResultWithSemantic(
+                externalContext.createSinkDataReader(sinkSettings), testRecords, semantic);
+    }
+
+    /**
+     * Test connector sink restart from a completed savepoint with the same parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and first write half of it to this sink with a Flink job running with parallelism 2. Then
+     * stop the job and restart the same job from the completed savepoint. Once the restarted job
+     * is running, write the other half to the sink and compare the results.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink needs to equal the
+     * number of generated test records, and the records in the sink are compared to the test data
+     * according to the configured delivery semantic. There's no requirement on record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting from a savepoint")
+    public void testStartFromSavepoint(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 2, 2);
+    }
+
+    /**
+     * Test connector sink restart from a completed savepoint with a higher parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and first write half of it to this sink with a Flink job running with parallelism 2. Then
+     * stop the job and restart the same job from the completed savepoint with a higher parallelism
+     * of 4. Once the restarted job is running, write the other half to the sink and compare the
+     * results.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink needs to equal the
+     * number of generated test records, and the records in the sink are compared to the test data
+     * according to the configured delivery semantic. There's no requirement on record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting with a higher parallelism")
+    public void testScaleUp(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 2, 4);
+    }
+
+    /**
+     * Test connector sink restart from a completed savepoint with a lower parallelism.
+     *
+     * <p>This test will create a sink in the external system, generate a collection of test data
+     * and first write half of it to this sink with a Flink job running with parallelism 4. Then
+     * stop the job and restart the same job from the completed savepoint with a lower parallelism
+     * of 2. Once the restarted job is running, write the other half to the sink and compare the
+     * results.
+     *
+     * <p>In order to pass this test, the number of records produced by Flink needs to equal the
+     * number of generated test records, and the records in the sink are compared to the test data
+     * according to the configured delivery semantic. There's no requirement on record order.
+     */
+    @TestTemplate
+    @DisplayName("Test sink restarting with a lower parallelism")
+    public void testScaleDown(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        restartFromSavepoint(testEnv, externalContext, semantic, 4, 2);
+    }
+
+    private void restartFromSavepoint(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic,
+            final int beforeParallelism,
+            final int afterParallelism)
+            throws Exception {
+        // Step 1: Preparation
+        TestingSinkSettings sinkSettings = getTestingSinkSettings(semantic);
+        final StreamExecutionEnvironment execEnv =
+                testEnv.createExecutionEnvironment(
+                        TestEnvironmentSettings.builder()
+                                .setConnectorJarPaths(externalContext.getConnectorJarPaths())
+                                .build());
+        execEnv.setRestartStrategy(RestartStrategies.noRestart());
+
+        // Step 2: Generate test data
+        final List<T> testRecords = generateTestData(sinkSettings, externalContext);
+
+        // Step 3: Build and execute Flink job
+        int numBeforeSuccess = testRecords.size() / 2;
+        DataStreamSource<T> source =
+                execEnv.fromSource(
+                                new FromElementsSource<>(
+                                        Boundedness.CONTINUOUS_UNBOUNDED,
+                                        testRecords,
+                                        numBeforeSuccess),
+                                WatermarkStrategy.noWatermarks(),
+                                "beforeRestartSource")
+                        .setParallelism(1);
+
+        DataStream<T> dataStream = source.returns(externalContext.getProducedType());
+        tryCreateSink(dataStream, externalContext, sinkSettings)
+                .name("Sink restart test")
+                .setParallelism(beforeParallelism);
+
+        /*
+         * The job should stop after consuming a specified number of records. In order to know when
+         * the specified number of records have been consumed, a collect sink needs to be watched.
+         */
+        CollectResultIterator<T> iterator = addCollectSink(source);
+        final JobClient jobClient = execEnv.executeAsync("Restart Test");
+        iterator.setJobClient(jobClient);
+
+        // Step 4: Wait for the expected result and stop Flink job with a savepoint
+        String savepointDir;
+        try {
+            final MetricQuerier queryRestClient = new MetricQuerier(new Configuration());
+            waitForAllTaskRunning(
+                    () ->
+                            queryRestClient.getJobDetails(
+                                    testEnv.getRestEndpoint(), jobClient.getJobID()),
+                    Deadline.fromNow(DEFAULT_JOB_STATUS_CHANGE_TIMEOUT));
+
+            waitExpectedSizeData(iterator, numBeforeSuccess);
+
+            savepointDir =
+                    jobClient
+                            .stopWithSavepoint(
+                                    true, testEnv.getCheckpointUri(), SavepointFormatType.CANONICAL)
+                            .get(30, TimeUnit.SECONDS);
+            waitForJobStatus(
+                    jobClient,
+                    Collections.singletonList(JobStatus.FINISHED),
+                    Deadline.fromNow(DEFAULT_JOB_STATUS_CHANGE_TIMEOUT));
+        } catch (Exception e) {
+            killJob(jobClient);
+            throw e;
+        }
+
+        List<T> target = testRecords.subList(0, numBeforeSuccess);
+        checkResultWithSemantic(
+                externalContext.createSinkDataReader(sinkSettings), target, semantic);
+
+        // Step 5: Restart the Flink job from the savepoint
+        final StreamExecutionEnvironment restartEnv =
+                testEnv.createExecutionEnvironment(
+                        TestEnvironmentSettings.builder()
+                                .setConnectorJarPaths(externalContext.getConnectorJarPaths())
+                                .setSavepointRestorePath(savepointDir)
+                                .build());
+        restartEnv.enableCheckpointing(50);
+
+        DataStreamSource<T> restartSource =
+                restartEnv
+                        .fromSource(
+                                new FromElementsSource<>(
+                                        Boundedness.CONTINUOUS_UNBOUNDED,
+                                        testRecords,
+                                        testRecords.size()),
+                                WatermarkStrategy.noWatermarks(),
+                                "restartSource")
+                        .setParallelism(1);
+
+        DataStream<T> sinkStream = restartSource.returns(externalContext.getProducedType());
+        tryCreateSink(sinkStream, externalContext, sinkSettings).setParallelism(afterParallelism);
+        addCollectSink(restartSource);
+        final JobClient restartJobClient = restartEnv.executeAsync("Restart Test");
+
+        try {
+            // Check the result
+            checkResultWithSemantic(
+                    externalContext.createSinkDataReader(sinkSettings), testRecords, semantic);
+        } finally {
+            killJob(restartJobClient);
+            iterator.close();
+        }
+    }
+
+    /**
+     * Test connector sink metrics.
+     *
+     * <p>This test will create a sink in the external system, generate test data and write it to
+     * the sink via a Flink job. Then read and compare the metrics.
+     *
+     * <p>Currently tested metric: numRecordsOut
+     */
+    @TestTemplate
+    @DisplayName("Test sink metrics")
+    public void testMetrics(
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> externalContext,
+            CheckpointingMode semantic)
+            throws Exception {
+        TestingSinkSettings sinkSettings = getTestingSinkSettings(semantic);
+        int parallelism = 1;
+        final List<T> testRecords = generateTestData(sinkSettings, externalContext);
+
+        // make sure to use different names when the test is executed multiple times
+        String sinkName = "metricTestSink" + testRecords.hashCode();
+        final StreamExecutionEnvironment env =
+                testEnv.createExecutionEnvironment(
+                        TestEnvironmentSettings.builder()
+                                .setConnectorJarPaths(externalContext.getConnectorJarPaths())
+                                .build());
+        env.enableCheckpointing(50);
+
+        DataStreamSource<T> source =
+                env.fromSource(
+                                new FromElementsSource<>(
+                                        Boundedness.CONTINUOUS_UNBOUNDED,
+                                        testRecords,
+                                        testRecords.size()),
+                                WatermarkStrategy.noWatermarks(),
+                                "metricTestSource")
+                        .setParallelism(1);
+
+        DataStream<T> dataStream = source.returns(externalContext.getProducedType());
+        tryCreateSink(dataStream, externalContext, sinkSettings)
+                .name(sinkName)
+                .setParallelism(parallelism);
+        final JobClient jobClient = env.executeAsync("Metrics Test");
+        final MetricQuerier queryRestClient = new MetricQuerier(new Configuration());
+        try {
+            waitForAllTaskRunning(
+                    () ->
+                            queryRestClient.getJobDetails(
+                                    testEnv.getRestEndpoint(), jobClient.getJobID()),
+                    Deadline.fromNow(DEFAULT_JOB_STATUS_CHANGE_TIMEOUT));
+
+            waitUntilCondition(
+                    () -> {
+                        // test metrics
+                        try {
+                            return compareSinkMetrics(
+                                    queryRestClient,
+                                    testEnv,
+                                    externalContext,
+                                    jobClient.getJobID(),
+                                    sinkName,
+                                    testRecords.size());
+                        } catch (Exception e) {
+                            // skip this failed assertion attempt and retry
+                            return false;
+                        }
+                    },
+                    Deadline.fromNow(DEFAULT_COLLECT_DATA_TIMEOUT));
+        } finally {
+            // Clean up
+            killJob(jobClient);
+        }
+    }
+
+    // ----------------------------- Helper Functions ---------------------------------
+
+    /**
+     * Generate a set of test records.
+     *
+     * @param testingSinkSettings sink settings
+     * @param externalContext External context
+     * @return Collection of generated test records
+     */
+    protected List<T> generateTestData(
+            TestingSinkSettings testingSinkSettings,
+            DataStreamSinkExternalContext<T> externalContext) {
+        return externalContext.generateTestData(
+                testingSinkSettings, ThreadLocalRandom.current().nextLong());
+    }
+
+    /**
+     * Poll records from the sink.
+     *
+     * @param result The list to which polled records are appended
+     * @param reader The sink reader
+     * @param expected The expected records, used to decide when to stop polling
+     * @param retryTimes The maximum number of poll attempts
+     * @param semantic The delivery semantic
+     * @return Collection of records in the sink
+     */
+    private List<T> appendResultData(
+            List<T> result,
+            ExternalSystemDataReader<T> reader,
+            List<T> expected,
+            int retryTimes,
+            CheckpointingMode semantic) {
+        long timeoutMs = 1000L;
+        int retryIndex = 0;
+
+        while (retryIndex++ < retryTimes
+                && !checkGetEnoughRecordsWithSemantic(expected, result, semantic)) {
+            result.addAll(reader.poll(Duration.ofMillis(timeoutMs)));
+        }
+        return result;
+    }
+
+    /**
+     * Check whether the polling should stop.
+     *
+     * @param expected The expected records, used to decide when to stop polling
+     * @param result The records that have been read
+     * @param semantic The delivery semantic
+     * @return Whether the polling should stop
+     */
+    private boolean checkGetEnoughRecordsWithSemantic(
+            List<T> expected, List<T> result, CheckpointingMode semantic) {
+        checkNotNull(expected);
+        checkNotNull(result);
+        if (EXACTLY_ONCE.equals(semantic)) {
+            return expected.size() <= result.size();
+        } else if (AT_LEAST_ONCE.equals(semantic)) {
+            Set<Integer> matchedIndex = new HashSet<>();
+            for (T record : expected) {
+                int before = matchedIndex.size();
+                for (int i = 0; i < result.size(); i++) {
+                    if (matchedIndex.contains(i)) {
+                        continue;
+                    }
+                    if (record.equals(result.get(i))) {
+                        matchedIndex.add(i);
+                        break;
+                    }
+                }
+                // the record was not found in the result
+                if (before == matchedIndex.size()) {
+                    return false;
+                }
+            }
+            return true;
+        }
+        throw new IllegalStateException(
+                String.format(
+                        "The %s delivery guarantee is not supported by this test.",
+                        semantic.name()));
+    }
+
+    /**
+     * Compare the test data with the result.
+     *
+     * @param reader the data reader for the sink
+     * @param testData the test data
+     * @param semantic the supported semantic, see {@link CheckpointingMode}
+     */
+    private void checkResultWithSemantic(
+            ExternalSystemDataReader<T> reader, List<T> testData, CheckpointingMode semantic)
+            throws Exception {
+        final ArrayList<T> result = new ArrayList<>();
+        waitUntilCondition(
+                () -> {
+                    appendResultData(result, reader, testData, 30, semantic);
+                    try {
+                        CollectIteratorAssertions.assertThat(sort(result).iterator())
+                                .matchesRecordsFromSource(Arrays.asList(sort(testData)), semantic);
+                        return true;
+                    } catch (Throwable t) {
+                        return false;
+                    }
+                },
+                Deadline.fromNow(DEFAULT_COLLECT_DATA_TIMEOUT));
+    }
+
+    /** Compare the metrics. */
+    private boolean compareSinkMetrics(
+            MetricQuerier metricQuerier,
+            TestEnvironment testEnv,
+            DataStreamSinkExternalContext<T> context,
+            JobID jobId,
+            String sinkName,
+            long allRecordSize)
+            throws Exception {
+        double sumNumRecordsOut =
+                metricQuerier.getAggregatedMetricsByRestAPI(
+                        testEnv.getRestEndpoint(),
+                        jobId,
+                        sinkName,
+                        MetricNames.IO_NUM_RECORDS_OUT,
+                        getSinkMetricFilter(context));
+        return Precision.equals(allRecordSize, sumNumRecordsOut);
+    }
+
+    /** Sort the list. */
+    private List<T> sort(List<T> list) {
+        return list.stream().sorted().collect(Collectors.toList());
+    }
+
+    private TestingSinkSettings getTestingSinkSettings(CheckpointingMode checkpointingMode) {
+        return TestingSinkSettings.builder().setCheckpointingMode(checkpointingMode).build();
+    }
+
+    private void killJob(JobClient jobClient) throws Exception {
+        terminateJob(jobClient);
+        waitForJobStatus(
+                jobClient,
+                Collections.singletonList(JobStatus.CANCELED),
+                Deadline.fromNow(DEFAULT_JOB_STATUS_CHANGE_TIMEOUT));
+    }
+
+    private DataStreamSink<T> tryCreateSink(
+            DataStream<T> dataStream,
+            DataStreamSinkExternalContext<T> context,
+            TestingSinkSettings sinkSettings) {
+        try {
+            if (context instanceof DataStreamSinkV1ExternalContext) {
+                org.apache.flink.api.connector.sink.Sink<T, ?, ?, ?> sinkV1 =
+                        ((DataStreamSinkV1ExternalContext<T>) context).createSink(sinkSettings);
+                return dataStream.sinkTo(sinkV1);
+            } else if (context instanceof DataStreamSinkV2ExternalContext) {
+                Sink<T> sinkV2 =
+                        ((DataStreamSinkV2ExternalContext<T>) context).createSink(sinkSettings);
+                return dataStream.sinkTo(sinkV2);
+            } else {
+                throw new IllegalArgumentException(
+                        String.format("Get unexpected sink context: %s", context.getClass()));

Review comment:
       Hint: please use the log pattern "The supported ... are ..., but the actual is ...".
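
   For illustration, a minimal sketch of what the suggested message could look like (the exact
   wording and formatting here are an assumption, not the final code):

       throw new IllegalArgumentException(
               String.format(
                       "The supported sink contexts are %s and %s, but the actual is %s.",
                       DataStreamSinkV1ExternalContext.class,
                       DataStreamSinkV2ExternalContext.class,
                       context.getClass()));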







[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "909c155557a856976df8b5be1729553873ecbd4b",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30114",
       "triggerID" : "909c155557a856976df8b5be1729553873ecbd4b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30369",
       "triggerID" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30469",
       "triggerID" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30474",
       "triggerID" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31339",
       "triggerID" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31375",
       "triggerID" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31405",
       "triggerID" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "triggerType" : "PUSH"
     }, {
       "hash" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31463",
       "triggerID" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0b52c13271c485e5a6776a1aca81c753d0d4bbc4",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31466",
       "triggerID" : "0b52c13271c485e5a6776a1aca81c753d0d4bbc4",
       "triggerType" : "PUSH"
     }, {
       "hash" : "da588603a577a2b26bcf90fcd38653f7ec8a3a74",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31477",
       "triggerID" : "da588603a577a2b26bcf90fcd38653f7ec8a3a74",
       "triggerType" : "PUSH"
     }, {
       "hash" : "64e30250dc0ad0d011b8d3d2fe2f15ce8e30906e",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31505",
       "triggerID" : "64e30250dc0ad0d011b8d3d2fe2f15ce8e30906e",
       "triggerType" : "PUSH"
     }, {
       "hash" : "c1619577228a3fde9684f2c85965d6d1f76addbf",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31518",
       "triggerID" : "c1619577228a3fde9684f2c85965d6d1f76addbf",
       "triggerType" : "PUSH"
     }, {
       "hash" : "1076a64c9f916fe9d8a23d38aafbd1f359b038d9",
       "status" : "CANCELED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31529",
       "triggerID" : "1076a64c9f916fe9d8a23d38aafbd1f359b038d9",
       "triggerType" : "PUSH"
     }, {
       "hash" : "ebca9a1e955205c53ea919b863c9550642bc73db",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31536",
       "triggerID" : "ebca9a1e955205c53ea919b863c9550642bc73db",
       "triggerType" : "PUSH"
     }, {
       "hash" : "cc23b8d007ad7df80d90db437789470502b78f53",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "cc23b8d007ad7df80d90db437789470502b78f53",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 1076a64c9f916fe9d8a23d38aafbd1f359b038d9 Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31529) 
   * ebca9a1e955205c53ea919b863c9550642bc73db Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31536) 
   * cc23b8d007ad7df80d90db437789470502b78f53 UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>





[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "909c155557a856976df8b5be1729553873ecbd4b",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30114",
       "triggerID" : "909c155557a856976df8b5be1729553873ecbd4b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30369",
       "triggerID" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30469",
       "triggerID" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30474",
       "triggerID" : "d5e64bbb6debad7940d7ca05729ce57628127225",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31339",
       "triggerID" : "0034fb25f7fbbbcf302fb18626d7983f32732ca5",
       "triggerType" : "PUSH"
     }, {
       "hash" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31375",
       "triggerID" : "b8513c81bd9bc1e30efa4ea1fae35d30fd33472c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31405",
       "triggerID" : "bc9871b19a43fd0b99e1b53336534d59612a119e",
       "triggerType" : "PUSH"
     }, {
       "hash" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31463",
       "triggerID" : "35d869286d16c6d306c9059cf5d3af339934c229",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * bc9871b19a43fd0b99e1b53336534d59612a119e Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31405) 
   * 35d869286d16c6d306c9059cf5d3af339934c229 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31463) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>





[GitHub] [flink] ruanhang1993 commented on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
ruanhang1993 commented on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1041208998


   > 
   This was closed in the commit https://github.com/apache/flink/commit/57e3f03ccd719ed772c983ba335517d95f8f3e6a.
   You can find it in the events of this PR.
   





[GitHub] [flink] ruanhang1993 commented on a change in pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
ruanhang1993 commented on a change in pull request #18496:
URL: https://github.com/apache/flink/pull/18496#discussion_r808051755



##########
File path: flink-test-utils-parent/flink-connector-test-utils/pom.xml
##########
@@ -95,4 +95,30 @@
 			<scope>compile</scope>
 		</dependency>
 	</dependencies>
+
+	<build>
+		<plugins>
+			<plugin>
+				<groupId>org.apache.maven.plugins</groupId>
+				<artifactId>maven-shade-plugin</artifactId>
+				<executions>
+					<execution>
+						<phase>package</phase>
+						<goals>
+							<goal>shade</goal>
+						</goals>
+						<configuration>
+							<shadedArtifactAttached>true</shadedArtifactAttached>
+							<shadedClassifierName>source</shadedClassifierName>
+							<artifactSet>
+								<includes>
+									<include>**/connector/testframe/source/**</include>

Review comment:
       This separate jar is used in the e2e sink tests. It only contains the new [FromElementsSource](https://github.com/apache/flink/pull/18496/files#diff-dbcd767752498ef1c894717f126ffd9008a3c5b20fd3f9a3c6ffefda95cc93d2), which is needed in the sink tests.
   The normal jar contains some other dependencies, and I am afraid that they would cause conflicts, so I created a separate jar.
   
   As @PatrickRen mentioned, this source could be moved to `flink-streaming-java` in another PR. Then we could get rid of this usage.
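
   For context, a minimal sketch of how the e2e sink tests would use the class shipped in this
   artifact (the element type and variable names here are assumptions for illustration, mirroring
   the usage in SinkTestSuiteBase above):

       List<String> records = externalContext.generateTestData(sinkSettings, seed);
       DataStreamSource<String> source =
               env.fromSource(
                       new FromElementsSource<>(
                               Boundedness.CONTINUOUS_UNBOUNDED, records, records.size()),
                       WatermarkStrategy.noWatermarks(),
                       "sinkTestSource");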
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "909c155557a856976df8b5be1729553873ecbd4b",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30114",
       "triggerID" : "909c155557a856976df8b5be1729553873ecbd4b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30369",
       "triggerID" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30469",
       "triggerID" : "d12c135ebf7dcc56e9c26695ecc2a2c3f4853176",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * e3a0766cb731672fd5be68b79bf380c8577ea068 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30369) 
   * d12c135ebf7dcc56e9c26695ecc2a2c3f4853176 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30469) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>





[GitHub] [flink] flinkbot commented on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot commented on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632









[GitHub] [flink] flinkbot edited a comment on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "909c155557a856976df8b5be1729553873ecbd4b",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30114",
       "triggerID" : "909c155557a856976df8b5be1729553873ecbd4b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "e3a0766cb731672fd5be68b79bf380c8577ea068",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 909c155557a856976df8b5be1729553873ecbd4b Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30114) 
   * e3a0766cb731672fd5be68b79bf380c8577ea068 UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>





[GitHub] [flink] flinkbot commented on pull request #18496: [FLINK-25289][tests] add sink test suite in connector testframe

Posted by GitBox <gi...@apache.org>.
flinkbot commented on pull request #18496:
URL: https://github.com/apache/flink/pull/18496#issuecomment-1020867632





