Posted to issues@flink.apache.org by GitBox <gi...@apache.org> on 2022/01/06 16:10:56 UTC

[GitHub] [flink] slinkydeveloper opened a new pull request #18290: [FLINK-25391][connectors][formats] Update existing table factories/format factories for catalog table options forwarding

slinkydeveloper opened a new pull request #18290:
URL: https://github.com/apache/flink/pull/18290


   ## What is the purpose of the change
   
   This PR updates format and table factories to define which options can be merged with the options from the catalog table when `PLAN_RESTORE_CATALOG_OBJECTS == ALL`.
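
   As context for reviewers, the mechanism can be illustrated with a minimal sketch (assuming the `forwardOptions()` hook this change builds on; the factory and its `hostname` option are made up for illustration and are not code from this PR):

   ```java
   import java.util.Collections;
   import java.util.Set;

   import org.apache.flink.configuration.ConfigOption;
   import org.apache.flink.configuration.ConfigOptions;
   import org.apache.flink.table.connector.source.DynamicTableSource;
   import org.apache.flink.table.factories.DynamicTableSourceFactory;

   /** Hypothetical connector factory, reduced to the parts relevant for forwarding. */
   public class ExampleTableFactory implements DynamicTableSourceFactory {

       // Made-up option: it does not influence the compiled Transformation
       // topology, so merging it back from the catalog table should be safe.
       public static final ConfigOption<String> HOSTNAME =
               ConfigOptions.key("hostname").stringType().noDefaultValue();

       @Override
       public String factoryIdentifier() {
           return "example";
       }

       @Override
       public Set<ConfigOption<?>> requiredOptions() {
           return Collections.singleton(HOSTNAME);
       }

       @Override
       public Set<ConfigOption<?>> optionalOptions() {
           return Collections.emptySet();
       }

       // Options listed here may be overwritten by the catalog table options
       // when a plan is restored with PLAN_RESTORE_CATALOG_OBJECTS == ALL.
       @Override
       public Set<ConfigOption<?>> forwardOptions() {
           return Collections.singleton(HOSTNAME);
       }

       @Override
       public DynamicTableSource createDynamicTableSource(Context context) {
           throw new UnsupportedOperationException("Not relevant for this sketch");
       }
   }
   ```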
   
   ## Brief change log
   
   * Update forwarded options for table and format factories
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
     - Dependencies (does it add or upgrade a dependency): no
     - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: no
     - The serializers: no
     - The runtime per-record code paths (performance sensitive): no
     - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
     - The S3 file system connector: no
   
   ## Documentation
   
     - Does this pull request introduce a new feature? no
     - If yes, how is the feature documented? not applicable
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18290: [FLINK-25391][connectors][formats] Update existing table factories/format factories for catalog table options forwarding

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18290:
URL: https://github.com/apache/flink/pull/18290#issuecomment-1006716199


   ## CI report:
   
   * 35ffc4d144540ac24d28499395079ff34d32f4eb Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29416) 
   * 044804a54474295d18f075e5649e1f551171845a Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30020) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>





[GitHub] [flink] flinkbot edited a comment on pull request #18290: [FLINK-25391][connectors][formats] Update existing table factories/format factories for catalog table options forwarding

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18290:
URL: https://github.com/apache/flink/pull/18290#issuecomment-1006716199


   ## CI report:
   
   * 57b0ecb6e3ff211665124963a4e3f35a5cd8929b Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29059) 
   * f2603c0005990aa622277c635475d95ee2c049a7 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29346) 
   * 8480af8298b5fa73b3a47dc97ebea031400e660a UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>





[GitHub] [flink] slinkydeveloper commented on a change in pull request #18290: [FLINK-25391][connectors][formats] Update existing table factories/format factories for catalog table options forwarding

Posted by GitBox <gi...@apache.org>.
slinkydeveloper commented on a change in pull request #18290:
URL: https://github.com/apache/flink/pull/18290#discussion_r784639844



##########
File path: docs/content/docs/connectors/table/hbase.md
##########
@@ -103,34 +105,39 @@ Connector Options
     <tr>
       <td><h5>table-name</h5></td>
       <td>required</td>
+      <td>yes</td>
       <td style="word-wrap: break-word;">(none)</td>
       <td>String</td>
      <td>The name of the HBase table to connect to. By default, the table is in the 'default' namespace. To assign the table a specific namespace, use 'namespace:table'.</td>
     </tr>
     <tr>
       <td><h5>zookeeper.quorum</h5></td>
       <td>required</td>
+      <td>yes</td>
       <td style="word-wrap: break-word;">(none)</td>
       <td>String</td>
       <td>The HBase Zookeeper quorum.</td>
     </tr>
     <tr>
       <td><h5>zookeeper.znode.parent</h5></td>
       <td>optional</td>
+      <td>yes</td>
       <td style="word-wrap: break-word;">/hbase</td>
       <td>String</td>
      <td>The root dir in Zookeeper for the HBase cluster.</td>
     </tr>
     <tr>
       <td><h5>null-string-literal</h5></td>
       <td>optional</td>
+      <td>yes</td>

Review comment:
       Why? It seems safe to me. Does this affect the final `Transformation` topology in any way?







[GitHub] [flink] slinkydeveloper commented on a change in pull request #18290: [FLINK-25391][connectors][formats] Update existing table factories/format factories for catalog table options forwarding

Posted by GitBox <gi...@apache.org>.
slinkydeveloper commented on a change in pull request #18290:
URL: https://github.com/apache/flink/pull/18290#discussion_r784639062



##########
File path: docs/content/docs/connectors/table/kinesis.md
##########
@@ -255,349 +271,399 @@ Connector Options
     <tr>
       <td><h5>scan.stream.initpos</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">LATEST</td>
       <td>String</td>
       <td>Initial position to be used when reading from the table. See <a href="#start-reading-position">Start Reading Position</a> for details.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.initpos-timestamp</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">(none)</td>
       <td>String</td>
      <td>The initial timestamp to start reading the Kinesis stream from (when <code>scan.stream.initpos</code> is AT_TIMESTAMP). See <a href="#start-reading-position">Start Reading Position</a> for details.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.initpos-timestamp-format</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">yyyy-MM-dd'T'HH:mm:ss.SSSXXX</td>
       <td>String</td>
      <td>The date format of the initial timestamp to start reading the Kinesis stream from (when <code>scan.stream.initpos</code> is AT_TIMESTAMP). See <a href="#start-reading-position">Start Reading Position</a> for details.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.recordpublisher</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">POLLING</td>
       <td>String</td>
       <td>The <code>RecordPublisher</code> type to use for sources. See <a href="#enhanced-fan-out">Enhanced Fan-Out</a> for details.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.efo.consumername</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">(none)</td>
       <td>String</td>
       <td>The name of the EFO consumer to register with KDS. See <a href="#enhanced-fan-out">Enhanced Fan-Out</a> for details.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.efo.registration</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">LAZY</td>
       <td>String</td>
      <td>Determines how and when consumer de-/registration is performed (LAZY|EAGER|NONE). See <a href="#enhanced-fan-out">Enhanced Fan-Out</a> for details.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.efo.consumerarn</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">(none)</td>
       <td>String</td>
      <td>The prefix of the consumer ARN for a given stream. See <a href="#enhanced-fan-out">Enhanced Fan-Out</a> for details.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.efo.http-client.max-concurrency</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">10000</td>
       <td>Integer</td>
       <td>Maximum number of allowed concurrent requests for the EFO client. See <a href="#enhanced-fan-out">Enhanced Fan-Out</a> for details.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.describe.maxretries</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">50</td>
       <td>Integer</td>
       <td>The maximum number of <code>describeStream</code> attempts if we get a recoverable exception.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.describe.backoff.base</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">2000</td>
       <td>Long</td>
       <td>The base backoff time (in milliseconds) between each <code>describeStream</code> attempt (for consuming from DynamoDB streams).</td>
     </tr>
     <tr>
       <td><h5>scan.stream.describe.backoff.max</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">5000</td>
       <td>Long</td>
      <td>The maximum backoff time (in milliseconds) between each <code>describeStream</code> attempt (for consuming from DynamoDB streams).</td>
     </tr>
     <tr>
       <td><h5>scan.stream.describe.backoff.expconst</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">1.5</td>
       <td>Double</td>
       <td>The power constant for exponential backoff between each <code>describeStream</code> attempt (for consuming from DynamoDB streams).</td>
     </tr>
     <tr>
       <td><h5>scan.list.shards.maxretries</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">10</td>
       <td>Integer</td>
       <td>The maximum number of <code>listShards</code> attempts if we get a recoverable exception.</td>
     </tr>
     <tr>
       <td><h5>scan.list.shards.backoff.base</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">1000</td>
       <td>Long</td>
       <td>The base backoff time (in milliseconds) between each <code>listShards</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.list.shards.backoff.max</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">5000</td>
       <td>Long</td>
       <td>The maximum backoff time (in milliseconds) between each <code>listShards</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.list.shards.backoff.expconst</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">1.5</td>
       <td>Double</td>
       <td>The power constant for exponential backoff between each <code>listShards</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.describestreamconsumer.maxretries</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">50</td>
       <td>Integer</td>
       <td>The maximum number of <code>describeStreamConsumer</code> attempts if we get a recoverable exception.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.describestreamconsumer.backoff.base</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">2000</td>
       <td>Long</td>
       <td>The base backoff time (in milliseconds) between each <code>describeStreamConsumer</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.describestreamconsumer.backoff.max</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">5000</td>
       <td>Long</td>
       <td>The maximum backoff time (in milliseconds) between each <code>describeStreamConsumer</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.describestreamconsumer.backoff.expconst</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">1.5</td>
       <td>Double</td>
       <td>The power constant for exponential backoff between each <code>describeStreamConsumer</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.registerstreamconsumer.maxretries</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">10</td>
       <td>Integer</td>
       <td>The maximum number of <code>registerStream</code> attempts if we get a recoverable exception.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.registerstreamconsumer.timeout</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">60</td>
       <td>Integer</td>
       <td>The maximum time in seconds to wait for a stream consumer to become active before giving up.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.registerstreamconsumer.backoff.base</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">500</td>
       <td>Long</td>
       <td>The base backoff time (in milliseconds) between each <code>registerStream</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.registerstreamconsumer.backoff.max</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">2000</td>
       <td>Long</td>
       <td>The maximum backoff time (in milliseconds) between each <code>registerStream</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.registerstreamconsumer.backoff.expconst</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">1.5</td>
       <td>Double</td>
       <td>The power constant for exponential backoff between each <code>registerStream</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.deregisterstreamconsumer.maxretries</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">10</td>
       <td>Integer</td>
       <td>The maximum number of <code>deregisterStream</code> attempts if we get a recoverable exception.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.deregisterstreamconsumer.timeout</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">60</td>
       <td>Integer</td>
       <td>The maximum time in seconds to wait for a stream consumer to deregister before giving up.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.deregisterstreamconsumer.backoff.base</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">500</td>
       <td>Long</td>
       <td>The base backoff time (in milliseconds) between each <code>deregisterStream</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.deregisterstreamconsumer.backoff.max</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">2000</td>
       <td>Long</td>
       <td>The maximum backoff time (in milliseconds) between each <code>deregisterStream</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.deregisterstreamconsumer.backoff.expconst</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">1.5</td>
       <td>Double</td>
       <td>The power constant for exponential backoff between each <code>deregisterStream</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.subscribetoshard.maxretries</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">10</td>
       <td>Integer</td>
       <td>The maximum number of <code>subscribeToShard</code> attempts if we get a recoverable exception.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.subscribetoshard.backoff.base</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">1000</td>
       <td>Long</td>
       <td>The base backoff time (in milliseconds) between each <code>subscribeToShard</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.subscribetoshard.backoff.max</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">2000</td>
       <td>Long</td>
       <td>The maximum backoff time (in milliseconds) between each <code>subscribeToShard</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.subscribetoshard.backoff.expconst</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">1.5</td>
       <td>Double</td>
       <td>The power constant for exponential backoff between each <code>subscribeToShard</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.getrecords.maxrecordcount</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">10000</td>
       <td>Integer</td>
      <td>The maximum number of records to try to get each time we fetch records from an AWS Kinesis shard.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.getrecords.maxretries</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">3</td>
       <td>Integer</td>
       <td>The maximum number of <code>getRecords</code> attempts if we get a recoverable exception.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.getrecords.backoff.base</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">300</td>
       <td>Long</td>
       <td>The base backoff time (in milliseconds) between <code>getRecords</code> attempts if we get a ProvisionedThroughputExceededException.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.getrecords.backoff.max</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">1000</td>
       <td>Long</td>
       <td>The maximum backoff time (in milliseconds) between <code>getRecords</code> attempts if we get a ProvisionedThroughputExceededException.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.getrecords.backoff.expconst</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">1.5</td>
       <td>Double</td>
       <td>The power constant for exponential backoff between each <code>getRecords</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.getrecords.intervalmillis</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">200</td>
       <td>Long</td>
      <td>The interval (in milliseconds) between each <code>getRecords</code> request to an AWS Kinesis shard.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.getiterator.maxretries</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">3</td>
       <td>Integer</td>
      <td>The maximum number of <code>getShardIterator</code> attempts if we get a ProvisionedThroughputExceededException.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.getiterator.backoff.base</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">300</td>
       <td>Long</td>
       <td>The base backoff time (in milliseconds) between <code>getShardIterator</code> attempts if we get a ProvisionedThroughputExceededException.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.getiterator.backoff.max</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">1000</td>
       <td>Long</td>
       <td>The maximum backoff time (in milliseconds) between <code>getShardIterator</code> attempts if we get a ProvisionedThroughputExceededException.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.getiterator.backoff.expconst</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">1.5</td>
       <td>Double</td>
       <td>The power constant for exponential backoff between each <code>getShardIterator</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.discovery.intervalmillis</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">10000</td>
       <td>Integer</td>
       <td>The interval between each attempt to discover new shards.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.adaptivereads</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">false</td>
       <td>Boolean</td>
       <td>The config to turn on adaptive reads from a shard. See the <code>AdaptivePollingRecordPublisher</code> documentation for details.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.idle.interval</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">-1</td>
       <td>Long</td>
       <td>The interval (in milliseconds) after which to consider a shard idle for purposes of watermark generation. A positive value will allow the watermark to progress even when some shards don't receive new records.</td>
     </tr>
     <tr>
       <td><h5>scan.watermark.sync.interval</h5></td>
       <td>optional</td>
+      <td>no</td>

Review comment:
       Yep, the connector here needs to be updated.

##########
File path: docs/content/docs/connectors/table/kinesis.md
##########
@@ -136,34 +137,39 @@ Connector Options
     <tr>
       <td><h5>connector</h5></td>
       <td>required</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">(none)</td>
       <td>String</td>
       <td>Specify what connector to use. For Kinesis use <code>'kinesis'</code>.</td>
     </tr>
     <tr>
       <td><h5>stream</h5></td>
       <td>required</td>
+      <td>yes</td>
       <td style="word-wrap: break-word;">(none)</td>
       <td>String</td>
       <td>Name of the Kinesis data stream backing this table.</td>
     </tr>
     <tr>
       <td><h5>format</h5></td>
       <td>required</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">(none)</td>
       <td>String</td>
       <td>The format used to deserialize and serialize Kinesis data stream records. See <a href="#data-type-mapping">Data Type Mapping</a> for details.</td>
     </tr>
     <tr>
       <td><h5>aws.region</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">(none)</td>
       <td>String</td>
      <td>The AWS region where the stream is defined. Either this or <code>aws.endpoint</code> is required.</td>
     </tr>
     <tr>
       <td><h5>aws.endpoint</h5></td>
       <td>optional</td>
+      <td>no</td>

Review comment:
       https://github.com/apache/flink/pull/18290#discussion_r784639062
   

##########
File path: docs/content/docs/connectors/table/kinesis.md
##########
@@ -136,34 +137,39 @@ Connector Options
     <tr>
       <td><h5>connector</h5></td>
       <td>required</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">(none)</td>
       <td>String</td>
       <td>Specify what connector to use. For Kinesis use <code>'kinesis'</code>.</td>
     </tr>
     <tr>
       <td><h5>stream</h5></td>
       <td>required</td>
+      <td>yes</td>
       <td style="word-wrap: break-word;">(none)</td>
       <td>String</td>
       <td>Name of the Kinesis data stream backing this table.</td>
     </tr>
     <tr>
       <td><h5>format</h5></td>
       <td>required</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">(none)</td>
       <td>String</td>
       <td>The format used to deserialize and serialize Kinesis data stream records. See <a href="#data-type-mapping">Data Type Mapping</a> for details.</td>
     </tr>
     <tr>
       <td><h5>aws.region</h5></td>
       <td>optional</td>
+      <td>no</td>

Review comment:
       https://github.com/apache/flink/pull/18290#discussion_r784639062

##########
File path: docs/content/docs/connectors/table/kafka.md
##########
@@ -179,50 +179,57 @@ Connector Options
     <tr>
       <th class="text-left" style="width: 25%">Option</th>
       <th class="text-center" style="width: 8%">Required</th>
+      <th class="text-center" style="width: 8%">Forwarded</th>
       <th class="text-center" style="width: 7%">Default</th>
       <th class="text-center" style="width: 10%">Type</th>
-      <th class="text-center" style="width: 50%">Description</th>
+      <th class="text-center" style="width: 42%">Description</th>
     </tr>
     </thead>
     <tbody>
     <tr>
       <td><h5>connector</h5></td>
       <td>required</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">(none)</td>
       <td>String</td>
      <td>Specify what connector to use; for Kafka use <code>'kafka'</code>.</td>
     </tr>
     <tr>
       <td><h5>topic</h5></td>
       <td>required for sink</td>
+      <td>yes</td>
       <td style="word-wrap: break-word;">(none)</td>
       <td>String</td>
      <td>Topic name(s) to read data from when the table is used as a source. It also supports a topic list for sources, with topics separated by semicolons like <code>'topic-1;topic-2'</code>. Note that only one of "topic-pattern" and "topic" can be specified for sources. When the table is used as a sink, the topic name is the topic to write data to. Note that topic lists are not supported for sinks.</td>
     </tr>
     <tr>
       <td><h5>topic-pattern</h5></td>
       <td>optional</td>
+      <td>yes</td>
       <td style="word-wrap: break-word;">(none)</td>
       <td>String</td>
      <td>The regular expression for a pattern of topic names to read from. All topics with names that match the specified regular expression will be subscribed by the consumer when the job starts running. Note that only one of "topic-pattern" and "topic" can be specified for sources.</td>
     </tr>
     <tr>
       <td><h5>properties.bootstrap.servers</h5></td>
       <td>required</td>
+      <td>yes</td>
       <td style="word-wrap: break-word;">(none)</td>
       <td>String</td>
      <td>Comma-separated list of Kafka brokers.</td>
     </tr>
     <tr>
       <td><h5>properties.group.id</h5></td>
       <td>optional for source, not applicable for sink</td>
+      <td>yes</td>
       <td style="word-wrap: break-word;">(none)</td>
       <td>String</td>
      <td>The id of the consumer group for the Kafka source. If the group id is not specified, an automatically generated id "KafkaSource-{tableIdentifier}" will be used.</td>
     </tr>
     <tr>
       <td><h5>properties.*</h5></td>
       <td>optional</td>
+      <td>no</td>

Review comment:
       https://github.com/apache/flink/pull/18290#discussion_r784639062

##########
File path: docs/content/docs/connectors/table/kinesis.md
##########
@@ -255,349 +271,399 @@ Connector Options
     <tr>
       <td><h5>scan.stream.initpos</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">LATEST</td>
       <td>String</td>
       <td>Initial position to be used when reading from the table. See <a href="#start-reading-position">Start Reading Position</a> for details.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.initpos-timestamp</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">(none)</td>
       <td>String</td>
      <td>The initial timestamp to start reading the Kinesis stream from (when <code>scan.stream.initpos</code> is AT_TIMESTAMP). See <a href="#start-reading-position">Start Reading Position</a> for details.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.initpos-timestamp-format</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">yyyy-MM-dd'T'HH:mm:ss.SSSXXX</td>
       <td>String</td>
      <td>The date format of the initial timestamp to start reading the Kinesis stream from (when <code>scan.stream.initpos</code> is AT_TIMESTAMP). See <a href="#start-reading-position">Start Reading Position</a> for details.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.recordpublisher</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">POLLING</td>
       <td>String</td>
       <td>The <code>RecordPublisher</code> type to use for sources. See <a href="#enhanced-fan-out">Enhanced Fan-Out</a> for details.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.efo.consumername</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">(none)</td>
       <td>String</td>
       <td>The name of the EFO consumer to register with KDS. See <a href="#enhanced-fan-out">Enhanced Fan-Out</a> for details.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.efo.registration</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">LAZY</td>
       <td>String</td>
      <td>Determines how and when consumer de-/registration is performed (LAZY|EAGER|NONE). See <a href="#enhanced-fan-out">Enhanced Fan-Out</a> for details.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.efo.consumerarn</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">(none)</td>
       <td>String</td>
      <td>The prefix of the consumer ARN for a given stream. See <a href="#enhanced-fan-out">Enhanced Fan-Out</a> for details.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.efo.http-client.max-concurrency</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">10000</td>
       <td>Integer</td>
       <td>Maximum number of allowed concurrent requests for the EFO client. See <a href="#enhanced-fan-out">Enhanced Fan-Out</a> for details.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.describe.maxretries</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">50</td>
       <td>Integer</td>
       <td>The maximum number of <code>describeStream</code> attempts if we get a recoverable exception.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.describe.backoff.base</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">2000</td>
       <td>Long</td>
       <td>The base backoff time (in milliseconds) between each <code>describeStream</code> attempt (for consuming from DynamoDB streams).</td>
     </tr>
     <tr>
       <td><h5>scan.stream.describe.backoff.max</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">5000</td>
       <td>Long</td>
      <td>The maximum backoff time (in milliseconds) between each <code>describeStream</code> attempt (for consuming from DynamoDB streams).</td>
     </tr>
     <tr>
       <td><h5>scan.stream.describe.backoff.expconst</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">1.5</td>
       <td>Double</td>
       <td>The power constant for exponential backoff between each <code>describeStream</code> attempt (for consuming from DynamoDB streams).</td>
     </tr>
     <tr>
       <td><h5>scan.list.shards.maxretries</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">10</td>
       <td>Integer</td>
       <td>The maximum number of <code>listShards</code> attempts if we get a recoverable exception.</td>
     </tr>
     <tr>
       <td><h5>scan.list.shards.backoff.base</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">1000</td>
       <td>Long</td>
       <td>The base backoff time (in milliseconds) between each <code>listShards</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.list.shards.backoff.max</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">5000</td>
       <td>Long</td>
       <td>The maximum backoff time (in milliseconds) between each <code>listShards</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.list.shards.backoff.expconst</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">1.5</td>
       <td>Double</td>
       <td>The power constant for exponential backoff between each <code>listShards</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.describestreamconsumer.maxretries</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">50</td>
       <td>Integer</td>
       <td>The maximum number of <code>describeStreamConsumer</code> attempts if we get a recoverable exception.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.describestreamconsumer.backoff.base</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">2000</td>
       <td>Long</td>
       <td>The base backoff time (in milliseconds) between each <code>describeStreamConsumer</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.describestreamconsumer.backoff.max</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">5000</td>
       <td>Long</td>
       <td>The maximum backoff time (in milliseconds) between each <code>describeStreamConsumer</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.describestreamconsumer.backoff.expconst</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">1.5</td>
       <td>Double</td>
       <td>The power constant for exponential backoff between each <code>describeStreamConsumer</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.registerstreamconsumer.maxretries</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">10</td>
       <td>Integer</td>
       <td>The maximum number of <code>registerStream</code> attempts if we get a recoverable exception.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.registerstreamconsumer.timeout</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">60</td>
       <td>Integer</td>
       <td>The maximum time in seconds to wait for a stream consumer to become active before giving up.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.registerstreamconsumer.backoff.base</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">500</td>
       <td>Long</td>
       <td>The base backoff time (in milliseconds) between each <code>registerStream</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.registerstreamconsumer.backoff.max</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">2000</td>
       <td>Long</td>
       <td>The maximum backoff time (in milliseconds) between each <code>registerStream</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.registerstreamconsumer.backoff.expconst</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">1.5</td>
       <td>Double</td>
       <td>The power constant for exponential backoff between each <code>registerStream</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.deregisterstreamconsumer.maxretries</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">10</td>
       <td>Integer</td>
       <td>The maximum number of <code>deregisterStream</code> attempts if we get a recoverable exception.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.deregisterstreamconsumer.timeout</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">60</td>
       <td>Integer</td>
       <td>The maximum time in seconds to wait for a stream consumer to deregister before giving up.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.deregisterstreamconsumer.backoff.base</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">500</td>
       <td>Long</td>
       <td>The base backoff time (in milliseconds) between each <code>deregisterStream</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.deregisterstreamconsumer.backoff.max</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">2000</td>
       <td>Long</td>
       <td>The maximum backoff time (in milliseconds) between each <code>deregisterStream</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.deregisterstreamconsumer.backoff.expconst</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">1.5</td>
       <td>Double</td>
       <td>The power constant for exponential backoff between each <code>deregisterStream</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.subscribetoshard.maxretries</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">10</td>
       <td>Integer</td>
       <td>The maximum number of <code>subscribeToShard</code> attempts if we get a recoverable exception.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.subscribetoshard.backoff.base</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">1000</td>
       <td>Long</td>
       <td>The base backoff time (in milliseconds) between each <code>subscribeToShard</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.subscribetoshard.backoff.max</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">2000</td>
       <td>Long</td>
       <td>The maximum backoff time (in milliseconds) between each <code>subscribeToShard</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.subscribetoshard.backoff.expconst</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">1.5</td>
       <td>Double</td>
       <td>The power constant for exponential backoff between each <code>subscribeToShard</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.getrecords.maxrecordcount</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">10000</td>
       <td>Integer</td>
      <td>The maximum number of records to try to get each time we fetch records from an AWS Kinesis shard.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.getrecords.maxretries</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">3</td>
       <td>Integer</td>
       <td>The maximum number of <code>getRecords</code> attempts if we get a recoverable exception.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.getrecords.backoff.base</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">300</td>
       <td>Long</td>
       <td>The base backoff time (in milliseconds) between <code>getRecords</code> attempts if we get a ProvisionedThroughputExceededException.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.getrecords.backoff.max</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">1000</td>
       <td>Long</td>
       <td>The maximum backoff time (in milliseconds) between <code>getRecords</code> attempts if we get a ProvisionedThroughputExceededException.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.getrecords.backoff.expconst</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">1.5</td>
       <td>Double</td>
       <td>The power constant for exponential backoff between each <code>getRecords</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.getrecords.intervalmillis</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">200</td>
       <td>Long</td>
      <td>The interval (in milliseconds) between each <code>getRecords</code> request to an AWS Kinesis shard.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.getiterator.maxretries</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">3</td>
       <td>Integer</td>
      <td>The maximum number of <code>getShardIterator</code> attempts if we get a ProvisionedThroughputExceededException.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.getiterator.backoff.base</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">300</td>
       <td>Long</td>
       <td>The base backoff time (in milliseconds) between <code>getShardIterator</code> attempts if we get a ProvisionedThroughputExceededException.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.getiterator.backoff.max</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">1000</td>
       <td>Long</td>
       <td>The maximum backoff time (in milliseconds) between <code>getShardIterator</code> attempts if we get a ProvisionedThroughputExceededException.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.getiterator.backoff.expconst</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">1.5</td>
       <td>Double</td>
       <td>The power constant for exponential backoff between each <code>getShardIterator</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.discovery.intervalmillis</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">10000</td>
       <td>Integer</td>
       <td>The interval between each attempt to discover new shards.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.adaptivereads</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">false</td>
       <td>Boolean</td>
       <td>The config to turn on adaptive reads from a shard. See the <code>AdaptivePollingRecordPublisher</code> documentation for details.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.idle.interval</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">-1</td>
       <td>Long</td>
       <td>The interval (in milliseconds) after which to consider a shard idle for purposes of watermark generation. A positive value will allow the watermark to progress even when some shards don't receive new records.</td>
     </tr>
     <tr>
       <td><h5>scan.watermark.sync.interval</h5></td>
       <td>optional</td>
+      <td>no</td>

Review comment:
       Yep, the connector here needs to be updated to properly define the `ConfigOption` as a map type.
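
       For reference, a minimal sketch of what a map-typed option declaration could look like (the `properties` key and the class name are illustrative only, not the actual connector code):

       ```java
       import java.util.Map;

       import org.apache.flink.configuration.ConfigOption;
       import org.apache.flink.configuration.ConfigOptions;

       // Sketch: declaring a prefix-style key space as a map type, so the whole
       // group of keys can be validated and forwarded as a single unit.
       public final class MapOptionSketch {

           public static final ConfigOption<Map<String, String>> CLIENT_PROPERTIES =
                   ConfigOptions.key("properties")
                           .mapType()
                           .noDefaultValue()
                           .withDescription(
                                   "Arbitrary client properties passed through to the connector.");

           private MapOptionSketch() {}
       }
       ```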







[GitHub] [flink] slinkydeveloper commented on a change in pull request #18290: [FLINK-25391][connectors][formats] Update existing table factories/format factories for catalog table options forwarding

Posted by GitBox <gi...@apache.org>.
slinkydeveloper commented on a change in pull request #18290:
URL: https://github.com/apache/flink/pull/18290#discussion_r784001646



##########
File path: docs/content/docs/connectors/table/formats/json.md
##########
@@ -69,29 +69,33 @@ Format Options
       <tr>
         <th class="text-left" style="width: 25%">Option</th>
         <th class="text-center" style="width: 8%">Required</th>
+        <th class="text-center" style="width: 8%">Forwarded</th>
         <th class="text-center" style="width: 7%">Default</th>
         <th class="text-center" style="width: 10%">Type</th>
-        <th class="text-center" style="width: 50%">Description</th>
+        <th class="text-center" style="width: 42%">Description</th>
       </tr>
     </thead>
     <tbody>
     <tr>
       <td><h5>format</h5></td>
       <td>required</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">(none)</td>
       <td>String</td>
      <td>Specify what format to use; here it should be <code>'json'</code>.</td>
     </tr>
     <tr>
       <td><h5>json.fail-on-missing-field</h5></td>
       <td>optional</td>
+      <td>no</td>

Review comment:
       Ignoring parse errors seems safe to forward? I don't have a strong opinion.
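
       For reference, forwarding it would roughly amount to listing the option in the format factory's forwarded options. A minimal sketch (assuming the `forwardOptions()` hook; the class is illustrative, not the actual JSON factory code):

       ```java
       import java.util.Collections;
       import java.util.Set;

       import org.apache.flink.configuration.ConfigOption;
       import org.apache.flink.configuration.ConfigOptions;

       // Sketch only: the forwarding-related slice of a JSON-like format factory.
       // The framework prefixes the key with the format identifier, which is how
       // 'ignore-parse-errors' shows up as 'json.ignore-parse-errors' in the docs.
       public final class JsonForwardingSketch {

           public static final ConfigOption<Boolean> IGNORE_PARSE_ERRORS =
                   ConfigOptions.key("ignore-parse-errors")
                           .booleanType()
                           .defaultValue(false);

           // Skipping corrupt records does not change the compiled plan topology,
           // which is the argument for treating the option as safe to forward.
           public static Set<ConfigOption<?>> forwardOptions() {
               return Collections.singleton(IGNORE_PARSE_ERRORS);
           }

           private JsonForwardingSketch() {}
       }
       ```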







[GitHub] [flink] twalthr commented on a change in pull request #18290: [FLINK-25391][connectors][formats] Update existing table factories/format factories for catalog table options forwarding

Posted by GitBox <gi...@apache.org>.
twalthr commented on a change in pull request #18290:
URL: https://github.com/apache/flink/pull/18290#discussion_r784048899



##########
File path: docs/content/docs/connectors/table/kinesis.md
##########
@@ -255,349 +271,399 @@ Connector Options
     <tr>
       <td><h5>scan.stream.initpos</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">LATEST</td>
       <td>String</td>
       <td>Initial position to be used when reading from the table. See <a href="#start-reading-position">Start Reading Position</a> for details.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.initpos-timestamp</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">(none)</td>
       <td>String</td>
      <td>The initial timestamp to start reading the Kinesis stream from (when <code>scan.stream.initpos</code> is AT_TIMESTAMP). See <a href="#start-reading-position">Start Reading Position</a> for details.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.initpos-timestamp-format</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">yyyy-MM-dd'T'HH:mm:ss.SSSXXX</td>
       <td>String</td>
      <td>The date format of the initial timestamp to start reading the Kinesis stream from (when <code>scan.stream.initpos</code> is AT_TIMESTAMP). See <a href="#start-reading-position">Start Reading Position</a> for details.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.recordpublisher</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">POLLING</td>
       <td>String</td>
       <td>The <code>RecordPublisher</code> type to use for sources. See <a href="#enhanced-fan-out">Enhanced Fan-Out</a> for details.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.efo.consumername</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">(none)</td>
       <td>String</td>
       <td>The name of the EFO consumer to register with KDS. See <a href="#enhanced-fan-out">Enhanced Fan-Out</a> for details.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.efo.registration</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">LAZY</td>
       <td>String</td>
      <td>Determines how and when consumer de-/registration is performed (LAZY|EAGER|NONE). See <a href="#enhanced-fan-out">Enhanced Fan-Out</a> for details.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.efo.consumerarn</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">(none)</td>
       <td>String</td>
      <td>The prefix of the consumer ARN for a given stream. See <a href="#enhanced-fan-out">Enhanced Fan-Out</a> for details.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.efo.http-client.max-concurrency</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">10000</td>
       <td>Integer</td>
       <td>Maximum number of allowed concurrent requests for the EFO client. See <a href="#enhanced-fan-out">Enhanced Fan-Out</a> for details.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.describe.maxretries</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">50</td>
       <td>Integer</td>
       <td>The maximum number of <code>describeStream</code> attempts if we get a recoverable exception.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.describe.backoff.base</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">2000</td>
       <td>Long</td>
       <td>The base backoff time (in milliseconds) between each <code>describeStream</code> attempt (for consuming from DynamoDB streams).</td>
     </tr>
     <tr>
       <td><h5>scan.stream.describe.backoff.max</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">5000</td>
       <td>Long</td>
      <td>The maximum backoff time (in milliseconds) between each <code>describeStream</code> attempt (for consuming from DynamoDB streams).</td>
     </tr>
     <tr>
       <td><h5>scan.stream.describe.backoff.expconst</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">1.5</td>
       <td>Double</td>
       <td>The power constant for exponential backoff between each <code>describeStream</code> attempt (for consuming from DynamoDB streams).</td>
     </tr>
     <tr>
       <td><h5>scan.list.shards.maxretries</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">10</td>
       <td>Integer</td>
       <td>The maximum number of <code>listShards</code> attempts if we get a recoverable exception.</td>
     </tr>
     <tr>
       <td><h5>scan.list.shards.backoff.base</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">1000</td>
       <td>Long</td>
       <td>The base backoff time (in milliseconds) between each <code>listShards</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.list.shards.backoff.max</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">5000</td>
       <td>Long</td>
       <td>The maximum backoff time (in milliseconds) between each <code>listShards</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.list.shards.backoff.expconst</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">1.5</td>
       <td>Double</td>
       <td>The power constant for exponential backoff between each <code>listShards</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.describestreamconsumer.maxretries</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">50</td>
       <td>Integer</td>
       <td>The maximum number of <code>describeStreamConsumer</code> attempts if we get a recoverable exception.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.describestreamconsumer.backoff.base</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">2000</td>
       <td>Long</td>
       <td>The base backoff time (in milliseconds) between each <code>describeStreamConsumer</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.describestreamconsumer.backoff.max</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">5000</td>
       <td>Long</td>
       <td>The maximum backoff time (in milliseconds) between each <code>describeStreamConsumer</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.describestreamconsumer.backoff.expconst</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">1.5</td>
       <td>Double</td>
       <td>The power constant for exponential backoff between each <code>describeStreamConsumer</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.registerstreamconsumer.maxretries</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">10</td>
       <td>Integer</td>
       <td>The maximum number of <code>registerStream</code> attempts if we get a recoverable exception.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.registerstreamconsumer.timeout</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">60</td>
       <td>Integer</td>
       <td>The maximum time in seconds to wait for a stream consumer to become active before giving up.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.registerstreamconsumer.backoff.base</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">500</td>
       <td>Long</td>
       <td>The base backoff time (in milliseconds) between each <code>registerStream</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.registerstreamconsumer.backoff.max</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">2000</td>
       <td>Long</td>
       <td>The maximum backoff time (in milliseconds) between each <code>registerStream</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.registerstreamconsumer.backoff.expconst</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">1.5</td>
       <td>Double</td>
       <td>The power constant for exponential backoff between each <code>registerStream</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.deregisterstreamconsumer.maxretries</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">10</td>
       <td>Integer</td>
       <td>The maximum number of <code>deregisterStream</code> attempts if we get a recoverable exception.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.deregisterstreamconsumer.timeout</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">60</td>
       <td>Integer</td>
       <td>The maximum time in seconds to wait for a stream consumer to deregister before giving up.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.deregisterstreamconsumer.backoff.base</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">500</td>
       <td>Long</td>
       <td>The base backoff time (in milliseconds) between each <code>deregisterStream</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.deregisterstreamconsumer.backoff.max</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">2000</td>
       <td>Long</td>
       <td>The maximum backoff time (in milliseconds) between each <code>deregisterStream</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.deregisterstreamconsumer.backoff.expconst</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">1.5</td>
       <td>Double</td>
       <td>The power constant for exponential backoff between each <code>deregisterStream</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.subscribetoshard.maxretries</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">10</td>
       <td>Integer</td>
       <td>The maximum number of <code>subscribeToShard</code> attempts if we get a recoverable exception.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.subscribetoshard.backoff.base</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">1000</td>
       <td>Long</td>
       <td>The base backoff time (in milliseconds) between each <code>subscribeToShard</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.subscribetoshard.backoff.max</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">2000</td>
       <td>Long</td>
       <td>The maximum backoff time (in milliseconds) between each <code>subscribeToShard</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.subscribetoshard.backoff.expconst</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">1.5</td>
       <td>Double</td>
       <td>The power constant for exponential backoff between each <code>subscribeToShard</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.getrecords.maxrecordcount</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">10000</td>
       <td>Integer</td>
      <td>The maximum number of records to try to get each time we fetch records from an AWS Kinesis shard.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.getrecords.maxretries</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">3</td>
       <td>Integer</td>
       <td>The maximum number of <code>getRecords</code> attempts if we get a recoverable exception.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.getrecords.backoff.base</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">300</td>
       <td>Long</td>
       <td>The base backoff time (in milliseconds) between <code>getRecords</code> attempts if we get a ProvisionedThroughputExceededException.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.getrecords.backoff.max</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">1000</td>
       <td>Long</td>
       <td>The maximum backoff time (in milliseconds) between <code>getRecords</code> attempts if we get a ProvisionedThroughputExceededException.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.getrecords.backoff.expconst</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">1.5</td>
       <td>Double</td>
       <td>The power constant for exponential backoff between each <code>getRecords</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.getrecords.intervalmillis</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">200</td>
       <td>Long</td>
      <td>The interval (in milliseconds) between each <code>getRecords</code> request to an AWS Kinesis shard.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.getiterator.maxretries</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">3</td>
       <td>Integer</td>
       <td>The maximum number of <code>getShardIterator</code> attempts if we get ProvisionedThroughputExceededException.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.getiterator.backoff.base</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">300</td>
       <td>Long</td>
       <td>The base backoff time (in milliseconds) between <code>getShardIterator</code> attempts if we get a ProvisionedThroughputExceededException.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.getiterator.backoff.max</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">1000</td>
       <td>Long</td>
       <td>The maximum backoff time (in milliseconds) between <code>getShardIterator</code> attempts if we get a ProvisionedThroughputExceededException.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.getiterator.backoff.expconst</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">1.5</td>
       <td>Double</td>
       <td>The power constant for exponential backoff between each <code>getShardIterator</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.discovery.intervalmillis</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">10000</td>
       <td>Integer</td>
      <td>The interval (in milliseconds) between each attempt to discover new shards.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.adaptivereads</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">false</td>
       <td>Boolean</td>
       <td>The config to turn on adaptive reads from a shard. See the <code>AdaptivePollingRecordPublisher</code> documentation for details.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.idle.interval</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">-1</td>
       <td>Long</td>
       <td>The interval (in milliseconds) after which to consider a shard idle for purposes of watermark generation. A positive value will allow the watermark to progress even when some shards don't receive new records.</td>
     </tr>
     <tr>
       <td><h5>scan.watermark.sync.interval</h5></td>
       <td>optional</td>
+      <td>no</td>

Review comment:
       Because the connector does not use `ConfigOptions` consistently, right @slinkydeveloper?
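       For context, a minimal sketch of what declaring one of these settings as a typed `ConfigOption` could look like. The constant name is hypothetical; the key, type, and default mirror the `scan.shard.getrecords.intervalmillis` row above:

       ```java
       import org.apache.flink.configuration.ConfigOption;
       import org.apache.flink.configuration.ConfigOptions;

       // Hypothetical constant: the connector currently handles most of these
       // settings as plain string keys rather than typed ConfigOptions.
       public static final ConfigOption<Long> SCAN_SHARD_GETRECORDS_INTERVAL_MILLIS =
               ConfigOptions.key("scan.shard.getrecords.intervalmillis")
                       .longType()
                       .defaultValue(200L)
                       .withDescription(
                               "The interval (in milliseconds) between each getRecords "
                                       + "request to an AWS Kinesis shard.");
       ```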




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] JingGe commented on a change in pull request #18290: [FLINK-25391][connectors][formats] Update existing table factories/format factories for catalog table options forwarding

Posted by GitBox <gi...@apache.org>.
JingGe commented on a change in pull request #18290:
URL: https://github.com/apache/flink/pull/18290#discussion_r788657052



##########
File path: flink-connectors/flink-connector-hbase-2.2/src/main/java/org/apache/flink/connector/hbase2/HBase2DynamicTableFactory.java
##########
@@ -136,4 +133,20 @@ public String factoryIdentifier() {
         set.add(LOOKUP_MAX_RETRIES);
         return set;
     }
+
+    @Override
+    public Set<ConfigOption<?>> forwardOptions() {

Review comment:
       Now I got your point. We had a content-out-of-sync issue previously, and there is a [ticket](https://issues.apache.org/jira/browse/FLINK-25506) for addressing it, at least for HBase
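       For reference, a minimal sketch of the kind of body such an override could carry. The constants mirror the names in `HBaseConnectorOptions`, but the exact set forwarded by the PR is an assumption here:

       ```java
       import java.util.Set;
       import java.util.stream.Collectors;
       import java.util.stream.Stream;

       import org.apache.flink.configuration.ConfigOption;

       @Override
       public Set<ConfigOption<?>> forwardOptions() {
           // Illustrative only: forward settings that tune connectivity and
           // flushing behavior but do not change the shape of the query plan.
           return Stream.<ConfigOption<?>>of(
                           ZOOKEEPER_QUORUM,
                           ZOOKEEPER_ZNODE_PARENT,
                           NULL_STRING_LITERAL,
                           SINK_BUFFER_FLUSH_MAX_SIZE,
                           SINK_BUFFER_FLUSH_MAX_ROWS,
                           SINK_BUFFER_FLUSH_INTERVAL)
                   .collect(Collectors.toSet());
       }
       ```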




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18290: [FLINK-25391][connectors][formats] Update existing table factories/format factories for catalog table options forwarding

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18290:
URL: https://github.com/apache/flink/pull/18290#issuecomment-1006716199


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "57b0ecb6e3ff211665124963a4e3f35a5cd8929b",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29059",
       "triggerID" : "57b0ecb6e3ff211665124963a4e3f35a5cd8929b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "f2603c0005990aa622277c635475d95ee2c049a7",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29346",
       "triggerID" : "f2603c0005990aa622277c635475d95ee2c049a7",
       "triggerType" : "PUSH"
     }, {
       "hash" : "8480af8298b5fa73b3a47dc97ebea031400e660a",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "8480af8298b5fa73b3a47dc97ebea031400e660a",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 57b0ecb6e3ff211665124963a4e3f35a5cd8929b Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29059) 
   * f2603c0005990aa622277c635475d95ee2c049a7 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29346) 
   * 8480af8298b5fa73b3a47dc97ebea031400e660a UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18290: [FLINK-25391][connectors][formats] Update existing table factories/format factories for catalog table options forwarding

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18290:
URL: https://github.com/apache/flink/pull/18290#issuecomment-1006716199


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "57b0ecb6e3ff211665124963a4e3f35a5cd8929b",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29059",
       "triggerID" : "57b0ecb6e3ff211665124963a4e3f35a5cd8929b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "f2603c0005990aa622277c635475d95ee2c049a7",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29346",
       "triggerID" : "f2603c0005990aa622277c635475d95ee2c049a7",
       "triggerType" : "PUSH"
     }, {
       "hash" : "8480af8298b5fa73b3a47dc97ebea031400e660a",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29347",
       "triggerID" : "8480af8298b5fa73b3a47dc97ebea031400e660a",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0d8e419c43838e6d85155feadee7e096e3990e0d",
       "status" : "SUCCESS",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29368",
       "triggerID" : "0d8e419c43838e6d85155feadee7e096e3990e0d",
       "triggerType" : "PUSH"
     }, {
       "hash" : "35ffc4d144540ac24d28499395079ff34d32f4eb",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29416",
       "triggerID" : "35ffc4d144540ac24d28499395079ff34d32f4eb",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 0d8e419c43838e6d85155feadee7e096e3990e0d Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29368) 
   * 35ffc4d144540ac24d28499395079ff34d32f4eb Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29416) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] slinkydeveloper commented on a change in pull request #18290: [FLINK-25391][connectors][formats] Update existing table factories/format factories for catalog table options forwarding

Posted by GitBox <gi...@apache.org>.
slinkydeveloper commented on a change in pull request #18290:
URL: https://github.com/apache/flink/pull/18290#discussion_r784666452



##########
File path: docs/content/docs/connectors/table/elasticsearch.md
##########
@@ -67,15 +67,17 @@ Connector Options
       <tr>
         <th class="text-left" style="width: 25%">Option</th>
         <th class="text-center" style="width: 8%">Required</th>
+        <th class="text-center" style="width: 8%">Forwarded</th>

Review comment:
       What about `restorable`?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] slinkydeveloper commented on a change in pull request #18290: [FLINK-25391][connectors][formats] Update existing table factories/format factories for catalog table options forwarding

Posted by GitBox <gi...@apache.org>.
slinkydeveloper commented on a change in pull request #18290:
URL: https://github.com/apache/flink/pull/18290#discussion_r785949724



##########
File path: flink-connectors/flink-connector-hbase-2.2/src/main/java/org/apache/flink/connector/hbase2/HBase2DynamicTableFactory.java
##########
@@ -136,4 +133,20 @@ public String factoryIdentifier() {
         set.add(LOOKUP_MAX_RETRIES);
         return set;
     }
+
+    @Override
+    public Set<ConfigOption<?>> forwardOptions() {

Review comment:
       Exactly, but we still need to adapt it to our `Factory` interface.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18290: [FLINK-25391][connectors][formats] Update existing table factories/format factories for catalog table options forwarding

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18290:
URL: https://github.com/apache/flink/pull/18290#issuecomment-1006716199


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "57b0ecb6e3ff211665124963a4e3f35a5cd8929b",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29059",
       "triggerID" : "57b0ecb6e3ff211665124963a4e3f35a5cd8929b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "f2603c0005990aa622277c635475d95ee2c049a7",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29346",
       "triggerID" : "f2603c0005990aa622277c635475d95ee2c049a7",
       "triggerType" : "PUSH"
     }, {
       "hash" : "8480af8298b5fa73b3a47dc97ebea031400e660a",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29347",
       "triggerID" : "8480af8298b5fa73b3a47dc97ebea031400e660a",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0d8e419c43838e6d85155feadee7e096e3990e0d",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "0d8e419c43838e6d85155feadee7e096e3990e0d",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 8480af8298b5fa73b3a47dc97ebea031400e660a Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29347) 
   * 0d8e419c43838e6d85155feadee7e096e3990e0d UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] slinkydeveloper commented on a change in pull request #18290: [FLINK-25391][connectors][formats] Update existing table factories/format factories for catalog table options forwarding

Posted by GitBox <gi...@apache.org>.
slinkydeveloper commented on a change in pull request #18290:
URL: https://github.com/apache/flink/pull/18290#discussion_r784645598



##########
File path: docs/content/docs/connectors/table/kinesis.md
##########
@@ -178,69 +184,79 @@ Connector Options
     <tr>
       <td><h5>aws.credentials.provider</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">AUTO</td>
       <td>String</td>
       <td>A credentials provider to use when authenticating against the Kinesis endpoint. See <a href="#authentication">Authentication</a> for details.</td>
     </tr>
     <tr>
 	  <td><h5>aws.credentials.basic.accesskeyid</h5></td>
 	  <td>optional</td>
+      <td>no</td>
 	  <td style="word-wrap: break-word;">(none)</td>
 	  <td>String</td>
 	  <td>The AWS access key ID to use when setting credentials provider type to BASIC.</td>
     </tr>
     <tr>
 	  <td><h5>aws.credentials.basic.secretkey</h5></td>
 	  <td>optional</td>
+      <td>no</td>
 	  <td style="word-wrap: break-word;">(none)</td>
 	  <td>String</td>
 	  <td>The AWS secret key to use when setting credentials provider type to BASIC.</td>
     </tr>
     <tr>
 	  <td><h5>aws.credentials.profile.path</h5></td>
 	  <td>optional</td>
+      <td>no</td>
 	  <td style="word-wrap: break-word;">(none)</td>
 	  <td>String</td>
 	  <td>Optional configuration for profile path if credential provider type is set to be PROFILE.</td>
     </tr>
     <tr>
 	  <td><h5>aws.credentials.profile.name</h5></td>
 	  <td>optional</td>
+      <td>no</td>
 	  <td style="word-wrap: break-word;">(none)</td>
 	  <td>String</td>
 	  <td>Optional configuration for profile name if credential provider type is set to be PROFILE.</td>
     </tr>
     <tr>
 	  <td><h5>aws.credentials.role.arn</h5></td>
 	  <td>optional</td>
+      <td>no</td>
 	  <td style="word-wrap: break-word;">(none)</td>
 	  <td>String</td>
 	  <td>The role ARN to use when credential provider type is set to ASSUME_ROLE or WEB_IDENTITY_TOKEN.</td>
     </tr>
     <tr>
 	  <td><h5>aws.credentials.role.sessionName</h5></td>
 	  <td>optional</td>
+      <td>no</td>
 	  <td style="word-wrap: break-word;">(none)</td>
 	  <td>String</td>
 	  <td>The role session name to use when credential provider type is set to ASSUME_ROLE or WEB_IDENTITY_TOKEN.</td>
     </tr>
     <tr>
 	  <td><h5>aws.credentials.role.externalId</h5></td>
 	  <td>optional</td>
+      <td>no</td>

Review comment:
       Wrong formatting? In the IDE everything is shown correctly; perhaps a GH bug?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] slinkydeveloper commented on a change in pull request #18290: [FLINK-25391][connectors][formats] Update existing table factories/format factories for catalog table options forwarding

Posted by GitBox <gi...@apache.org>.
slinkydeveloper commented on a change in pull request #18290:
URL: https://github.com/apache/flink/pull/18290#discussion_r784640693



##########
File path: docs/content/docs/connectors/table/elasticsearch.md
##########
@@ -67,15 +67,17 @@ Connector Options
       <tr>
         <th class="text-left" style="width: 25%">Option</th>
         <th class="text-center" style="width: 8%">Required</th>
+        <th class="text-center" style="width: 8%">Forwarded</th>

Review comment:
       Any ideas? I can't really think of anything better, TBH. @twalthr?
   
       At some point we'll also add a proper documentation page for SQL to explain the upgrade story, and we can link it directly from the header of this table.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] twalthr commented on a change in pull request #18290: [FLINK-25391][connectors][formats] Update existing table factories/format factories for catalog table options forwarding

Posted by GitBox <gi...@apache.org>.
twalthr commented on a change in pull request #18290:
URL: https://github.com/apache/flink/pull/18290#discussion_r784666001



##########
File path: docs/content/docs/connectors/table/filesystem.md
##########
@@ -325,33 +347,43 @@ Time extractors define extracting time from partition values.
 <table class="table table-bordered">
   <thead>
     <tr>
-        <th class="text-left" style="width: 20%">Key</th>
-        <th class="text-left" style="width: 15%">Default</th>
+        <th class="text-left" style="width: 25%">Option</th>
+        <th class="text-left" style="width: 8%">Required</th>
+        <th class="text-left" style="width: 8%">Forwarded</th>
+        <th class="text-left" style="width: 7%">Default</th>
         <th class="text-left" style="width: 10%">Type</th>
-        <th class="text-left" style="width: 55%">Description</th>
+        <th class="text-left" style="width: 42%">Description</th>
     </tr>
   </thead>
   <tbody>
     <tr>
         <td><h5>partition.time-extractor.kind</h5></td>
+        <td>optional</td>
+        <td>yes</td>
         <td style="word-wrap: break-word;">default</td>
         <td>String</td>
        <td>Time extractor to extract time from partition values. Supports default and custom. For default, a timestamp pattern/formatter can be configured. For custom, an extractor class should be configured.</td>
     </tr>
     <tr>
         <td><h5>partition.time-extractor.class</h5></td>
+        <td>optional</td>
+        <td>yes</td>

Review comment:
       I was also skeptical about all the partition options. Maybe let's be conservative in the first version? The most important options are the connection-related ones, which we actually don't even allow in the case of Kafka :(




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18290: [FLINK-25391][connectors][formats] Update existing table factories/format factories for catalog table options forwarding

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18290:
URL: https://github.com/apache/flink/pull/18290#issuecomment-1006716199


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "57b0ecb6e3ff211665124963a4e3f35a5cd8929b",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29059",
       "triggerID" : "57b0ecb6e3ff211665124963a4e3f35a5cd8929b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "f2603c0005990aa622277c635475d95ee2c049a7",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29346",
       "triggerID" : "f2603c0005990aa622277c635475d95ee2c049a7",
       "triggerType" : "PUSH"
     }, {
       "hash" : "8480af8298b5fa73b3a47dc97ebea031400e660a",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "8480af8298b5fa73b3a47dc97ebea031400e660a",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 57b0ecb6e3ff211665124963a4e3f35a5cd8929b Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29059) 
   * f2603c0005990aa622277c635475d95ee2c049a7 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29346) 
   * 8480af8298b5fa73b3a47dc97ebea031400e660a UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18290: [FLINK-25391][connectors][formats] Update existing table factories/format factories for catalog table options forwarding

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18290:
URL: https://github.com/apache/flink/pull/18290#issuecomment-1006716199


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "57b0ecb6e3ff211665124963a4e3f35a5cd8929b",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29059",
       "triggerID" : "57b0ecb6e3ff211665124963a4e3f35a5cd8929b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "f2603c0005990aa622277c635475d95ee2c049a7",
       "status" : "CANCELED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29346",
       "triggerID" : "f2603c0005990aa622277c635475d95ee2c049a7",
       "triggerType" : "PUSH"
     }, {
       "hash" : "8480af8298b5fa73b3a47dc97ebea031400e660a",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29347",
       "triggerID" : "8480af8298b5fa73b3a47dc97ebea031400e660a",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * f2603c0005990aa622277c635475d95ee2c049a7 Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29346) 
   * 8480af8298b5fa73b3a47dc97ebea031400e660a Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29347) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] slinkydeveloper commented on a change in pull request #18290: [FLINK-25391][connectors][formats] Update existing table factories/format factories for catalog table options forwarding

Posted by GitBox <gi...@apache.org>.
slinkydeveloper commented on a change in pull request #18290:
URL: https://github.com/apache/flink/pull/18290#discussion_r788508629



##########
File path: flink-connectors/flink-connector-hbase-2.2/src/main/java/org/apache/flink/connector/hbase2/HBase2DynamicTableFactory.java
##########
@@ -136,4 +133,20 @@ public String factoryIdentifier() {
         set.add(LOOKUP_MAX_RETRIES);
         return set;
     }
+
+    @Override
+    public Set<ConfigOption<?>> forwardOptions() {

Review comment:
       How is this PR going to break things? We don't have a working `ConfigOptionsDocGenerator` for table factories now, nor do we have anything else enabled for it. Right now everything is manual.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] slinkydeveloper commented on pull request #18290: [FLINK-25391][connectors][formats] Update existing table factories/format factories for catalog table options forwarding

Posted by GitBox <gi...@apache.org>.
slinkydeveloper commented on pull request #18290:
URL: https://github.com/apache/flink/pull/18290#issuecomment-1011309403


   @twalthr updated the docs; this PR is ready for another pass


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18290: [FLINK-25391][connectors][formats] Update existing table factories/format factories for catalog table options forwarding

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18290:
URL: https://github.com/apache/flink/pull/18290#issuecomment-1006716199


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "57b0ecb6e3ff211665124963a4e3f35a5cd8929b",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29059",
       "triggerID" : "57b0ecb6e3ff211665124963a4e3f35a5cd8929b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "f2603c0005990aa622277c635475d95ee2c049a7",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29346",
       "triggerID" : "f2603c0005990aa622277c635475d95ee2c049a7",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 57b0ecb6e3ff211665124963a4e3f35a5cd8929b Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29059) 
   * f2603c0005990aa622277c635475d95ee2c049a7 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29346) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] JingGe commented on a change in pull request #18290: [FLINK-25391][connectors][formats] Update existing table factories/format factories for catalog table options forwarding

Posted by GitBox <gi...@apache.org>.
JingGe commented on a change in pull request #18290:
URL: https://github.com/apache/flink/pull/18290#discussion_r785234393



##########
File path: docs/content/docs/connectors/table/elasticsearch.md
##########
@@ -67,15 +67,17 @@ Connector Options
       <tr>
         <th class="text-left" style="width: 25%">Option</th>
         <th class="text-center" style="width: 8%">Required</th>
+        <th class="text-center" style="width: 8%">Forwarded</th>

Review comment:
       migratable? or just make it clearer: "forward to json plan"?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18290: [FLINK-25391][connectors][formats] Update existing table factories/format factories for catalog table options forwarding

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18290:
URL: https://github.com/apache/flink/pull/18290#issuecomment-1006716199


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "57b0ecb6e3ff211665124963a4e3f35a5cd8929b",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29059",
       "triggerID" : "57b0ecb6e3ff211665124963a4e3f35a5cd8929b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "f2603c0005990aa622277c635475d95ee2c049a7",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29346",
       "triggerID" : "f2603c0005990aa622277c635475d95ee2c049a7",
       "triggerType" : "PUSH"
     }, {
       "hash" : "8480af8298b5fa73b3a47dc97ebea031400e660a",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29347",
       "triggerID" : "8480af8298b5fa73b3a47dc97ebea031400e660a",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 8480af8298b5fa73b3a47dc97ebea031400e660a Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29347) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] JingGe commented on a change in pull request #18290: [FLINK-25391][connectors][formats] Update existing table factories/format factories for catalog table options forwarding

Posted by GitBox <gi...@apache.org>.
JingGe commented on a change in pull request #18290:
URL: https://github.com/apache/flink/pull/18290#discussion_r785232372



##########
File path: flink-connectors/flink-connector-hbase-2.2/src/main/java/org/apache/flink/connector/hbase2/HBase2DynamicTableFactory.java
##########
@@ -136,4 +133,20 @@ public String factoryIdentifier() {
         set.add(LOOKUP_MAX_RETRIES);
         return set;
     }
+
+    @Override
+    public Set<ConfigOption<?>> forwardOptions() {

Review comment:
       ConnectorOptions will be used to generate the docs above via hugo. It might be better to extend the forward info there, i.e. in `HBaseConnectorOptions` for HBase. This is a common issue that affects all connectors.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] slinkydeveloper commented on a change in pull request #18290: [FLINK-25391][connectors][formats] Update existing table factories/format factories for catalog table options forwarding

Posted by GitBox <gi...@apache.org>.
slinkydeveloper commented on a change in pull request #18290:
URL: https://github.com/apache/flink/pull/18290#discussion_r785724720



##########
File path: flink-connectors/flink-connector-hbase-2.2/src/main/java/org/apache/flink/connector/hbase2/HBase2DynamicTableFactory.java
##########
@@ -136,4 +133,20 @@ public String factoryIdentifier() {
         set.add(LOOKUP_MAX_RETRIES);
         return set;
     }
+
+    @Override
+    public Set<ConfigOption<?>> forwardOptions() {

Review comment:
       But in the case of table factories, the docs generator will use the `Factory` interface anyway. We explored the idea of extending `ConfigOption`, but it was way too complicated and impactful, hence we ended up with this method.
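       The same pattern applies on the format side. A sketch of what a JSON format factory could forward, assuming the constants in `JsonFormatOptions`; the concrete set picked by the PR is not shown here, so treat the list as illustrative:

       ```java
       import java.util.Set;
       import java.util.stream.Collectors;
       import java.util.stream.Stream;

       import org.apache.flink.configuration.ConfigOption;
       import org.apache.flink.formats.json.JsonFormatOptions;

       @Override
       public Set<ConfigOption<?>> forwardOptions() {
           // Illustrative only: serialization tweaks that do not influence the plan.
           return Stream.<ConfigOption<?>>of(
                           JsonFormatOptions.TIMESTAMP_FORMAT,
                           JsonFormatOptions.MAP_NULL_KEY_MODE,
                           JsonFormatOptions.MAP_NULL_KEY_LITERAL)
                   .collect(Collectors.toSet());
       }
       ```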




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] JingGe commented on a change in pull request #18290: [FLINK-25391][connectors][formats] Update existing table factories/format factories for catalog table options forwarding

Posted by GitBox <gi...@apache.org>.
JingGe commented on a change in pull request #18290:
URL: https://github.com/apache/flink/pull/18290#discussion_r785797124



##########
File path: flink-connectors/flink-connector-hbase-2.2/src/main/java/org/apache/flink/connector/hbase2/HBase2DynamicTableFactory.java
##########
@@ -136,4 +133,20 @@ public String factoryIdentifier() {
         set.add(LOOKUP_MAX_RETRIES);
         return set;
     }
+
+    @Override
+    public Set<ConfigOption<?>> forwardOptions() {

Review comment:
       Which doc generator and `Factory` interface do you refer to? Afaik, the `ConfigOptionsDocGenerator` only works with `ConfigOption`.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] slinkydeveloper commented on a change in pull request #18290: [FLINK-25391][connectors][formats] Update existing table factories/format factories for catalog table options forwarding

Posted by GitBox <gi...@apache.org>.
slinkydeveloper commented on a change in pull request #18290:
URL: https://github.com/apache/flink/pull/18290#discussion_r783263646



##########
File path: flink-connectors/flink-connector-files/src/main/java/org/apache/flink/connector/file/table/FileSystemTableFactory.java
##########
@@ -79,9 +86,27 @@ public DynamicTableSource createDynamicTableSource(Context context) {
     @Override
     public DynamicTableSink createDynamicTableSink(Context context) {
         FactoryUtil.TableFactoryHelper helper = FactoryUtil.createTableFactoryHelper(this, context);
+        helper.forwardOptions(
+                FileSystemConnectorOptions.PATH,
+                FileSystemConnectorOptions.PARTITION_DEFAULT_NAME,
+                FileSystemConnectorOptions.PARTITION_TIME_EXTRACTOR_KIND,
+                FileSystemConnectorOptions.PARTITION_TIME_EXTRACTOR_CLASS,
+                FileSystemConnectorOptions.PARTITION_TIME_EXTRACTOR_TIMESTAMP_PATTERN,
+                FileSystemConnectorOptions.SINK_ROLLING_POLICY_FILE_SIZE,
+                FileSystemConnectorOptions.SINK_ROLLING_POLICY_ROLLOVER_INTERVAL,
+                FileSystemConnectorOptions.SINK_ROLLING_POLICY_CHECK_INTERVAL,
+                FileSystemConnectorOptions.SINK_SHUFFLE_BY_PARTITION,

Review comment:
       Shuffle by partition is a mistake, I will revert it. But the rolling policy options sound safe, as they just tune the `TableRollingPolicy` object.
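       As a usage illustration, a self-contained sketch of a filesystem sink whose rolling-policy options would be candidates for forwarding (table name, path, and schema are made up; the option keys follow the filesystem connector docs):

       ```java
       import org.apache.flink.table.api.EnvironmentSettings;
       import org.apache.flink.table.api.TableEnvironment;

       public final class FileSystemSinkExample {
           public static void main(String[] args) {
               TableEnvironment env =
                       TableEnvironment.create(
                               EnvironmentSettings.newInstance().inStreamingMode().build());

               // Changing any of the rolling-policy options below only tunes the
               // TableRollingPolicy; it does not alter the compiled query plan.
               env.executeSql(
                       "CREATE TABLE fs_sink (\n"
                               + "  id BIGINT,\n"
                               + "  payload STRING\n"
                               + ") WITH (\n"
                               + "  'connector' = 'filesystem',\n"
                               + "  'path' = 'file:///tmp/fs_sink',\n"
                               + "  'format' = 'json',\n"
                               + "  'sink.rolling-policy.file-size' = '128MB',\n"
                               + "  'sink.rolling-policy.rollover-interval' = '15 min',\n"
                               + "  'sink.rolling-policy.check-interval' = '1 min'\n"
                               + ")");
           }
       }
       ```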




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18290: [FLINK-25391][connectors][formats] Update existing table factories/format factories for catalog table options forwarding

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18290:
URL: https://github.com/apache/flink/pull/18290#issuecomment-1006716199


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "57b0ecb6e3ff211665124963a4e3f35a5cd8929b",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29059",
       "triggerID" : "57b0ecb6e3ff211665124963a4e3f35a5cd8929b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "f2603c0005990aa622277c635475d95ee2c049a7",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29346",
       "triggerID" : "f2603c0005990aa622277c635475d95ee2c049a7",
       "triggerType" : "PUSH"
     }, {
       "hash" : "8480af8298b5fa73b3a47dc97ebea031400e660a",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29347",
       "triggerID" : "8480af8298b5fa73b3a47dc97ebea031400e660a",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0d8e419c43838e6d85155feadee7e096e3990e0d",
       "status" : "SUCCESS",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29368",
       "triggerID" : "0d8e419c43838e6d85155feadee7e096e3990e0d",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 0d8e419c43838e6d85155feadee7e096e3990e0d Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29368) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] twalthr commented on a change in pull request #18290: [FLINK-25391][connectors][formats] Update existing table factories/format factories for catalog table options forwarding

Posted by GitBox <gi...@apache.org>.
twalthr commented on a change in pull request #18290:
URL: https://github.com/apache/flink/pull/18290#discussion_r784049657



##########
File path: docs/content/docs/connectors/table/kinesis.md
##########
@@ -255,349 +271,399 @@ Connector Options
     <tr>
       <td><h5>scan.stream.initpos</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">LATEST</td>
       <td>String</td>
       <td>Initial position to be used when reading from the table. See <a href="#start-reading-position">Start Reading Position</a> for details.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.initpos-timestamp</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">(none)</td>
       <td>String</td>
       <td>The initial timestamp to start reading Kinesis stream from (when <code>scan.stream.initpos</code> is AT_TIMESTAMP). See <a href="#start-reading-position">Start Reading Position</a> for details.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.initpos-timestamp-format</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">yyyy-MM-dd'T'HH:mm:ss.SSSXXX</td>
       <td>String</td>
       <td>The date format of initial timestamp to start reading Kinesis stream from (when <code>scan.stream.initpos</code> is AT_TIMESTAMP). See <a href="#start-reading-position">Start Reading Position</a> for details.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.recordpublisher</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">POLLING</td>
       <td>String</td>
       <td>The <code>RecordPublisher</code> type to use for sources. See <a href="#enhanced-fan-out">Enhanced Fan-Out</a> for details.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.efo.consumername</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">(none)</td>
       <td>String</td>
       <td>The name of the EFO consumer to register with KDS. See <a href="#enhanced-fan-out">Enhanced Fan-Out</a> for details.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.efo.registration</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">LAZY</td>
       <td>String</td>
       <td>Determine how and when consumer de-/registration is performed (LAZY|EAGER|NONE). See <a href="#enhanced-fan-out">Enhanced Fan-Out</a> for details.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.efo.consumerarn</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">(none)</td>
       <td>String</td>
       <td>The prefix of consumer ARN for a given stream. See <a href="#enhanced-fan-out">Enhanced Fan-Out</a> for details.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.efo.http-client.max-concurrency</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">10000</td>
       <td>Integer</td>
       <td>Maximum number of allowed concurrent requests for the EFO client. See <a href="#enhanced-fan-out">Enhanced Fan-Out</a> for details.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.describe.maxretries</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">50</td>
       <td>Integer</td>
       <td>The maximum number of <code>describeStream</code> attempts if we get a recoverable exception.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.describe.backoff.base</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">2000</td>
       <td>Long</td>
       <td>The base backoff time (in milliseconds) between each <code>describeStream</code> attempt (for consuming from DynamoDB streams).</td>
     </tr>
     <tr>
       <td><h5>scan.stream.describe.backoff.max</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">5000</td>
       <td>Long</td>
      <td>The maximum backoff time (in milliseconds) between each <code>describeStream</code> attempt (for consuming from DynamoDB streams).</td>
     </tr>
     <tr>
       <td><h5>scan.stream.describe.backoff.expconst</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">1.5</td>
       <td>Double</td>
       <td>The power constant for exponential backoff between each <code>describeStream</code> attempt (for consuming from DynamoDB streams).</td>
     </tr>
     <tr>
       <td><h5>scan.list.shards.maxretries</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">10</td>
       <td>Integer</td>
       <td>The maximum number of <code>listShards</code> attempts if we get a recoverable exception.</td>
     </tr>
     <tr>
       <td><h5>scan.list.shards.backoff.base</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">1000</td>
       <td>Long</td>
       <td>The base backoff time (in milliseconds) between each <code>listShards</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.list.shards.backoff.max</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">5000</td>
       <td>Long</td>
       <td>The maximum backoff time (in milliseconds) between each <code>listShards</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.list.shards.backoff.expconst</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">1.5</td>
       <td>Double</td>
       <td>The power constant for exponential backoff between each <code>listShards</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.describestreamconsumer.maxretries</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">50</td>
       <td>Integer</td>
       <td>The maximum number of <code>describeStreamConsumer</code> attempts if we get a recoverable exception.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.describestreamconsumer.backoff.base</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">2000</td>
       <td>Long</td>
       <td>The base backoff time (in milliseconds) between each <code>describeStreamConsumer</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.describestreamconsumer.backoff.max</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">5000</td>
       <td>Long</td>
       <td>The maximum backoff time (in milliseconds) between each <code>describeStreamConsumer</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.describestreamconsumer.backoff.expconst</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">1.5</td>
       <td>Double</td>
       <td>The power constant for exponential backoff between each <code>describeStreamConsumer</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.registerstreamconsumer.maxretries</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">10</td>
       <td>Integer</td>
       <td>The maximum number of <code>registerStream</code> attempts if we get a recoverable exception.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.registerstreamconsumer.timeout</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">60</td>
       <td>Integer</td>
       <td>The maximum time in seconds to wait for a stream consumer to become active before giving up.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.registerstreamconsumer.backoff.base</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">500</td>
       <td>Long</td>
       <td>The base backoff time (in milliseconds) between each <code>registerStream</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.registerstreamconsumer.backoff.max</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">2000</td>
       <td>Long</td>
       <td>The maximum backoff time (in milliseconds) between each <code>registerStream</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.registerstreamconsumer.backoff.expconst</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">1.5</td>
       <td>Double</td>
       <td>The power constant for exponential backoff between each <code>registerStream</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.deregisterstreamconsumer.maxretries</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">10</td>
       <td>Integer</td>
       <td>The maximum number of <code>deregisterStream</code> attempts if we get a recoverable exception.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.deregisterstreamconsumer.timeout</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">60</td>
       <td>Integer</td>
       <td>The maximum time in seconds to wait for a stream consumer to deregister before giving up.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.deregisterstreamconsumer.backoff.base</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">500</td>
       <td>Long</td>
       <td>The base backoff time (in milliseconds) between each <code>deregisterStream</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.deregisterstreamconsumer.backoff.max</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">2000</td>
       <td>Long</td>
       <td>The maximum backoff time (in milliseconds) between each <code>deregisterStream</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.deregisterstreamconsumer.backoff.expconst</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">1.5</td>
       <td>Double</td>
       <td>The power constant for exponential backoff between each <code>deregisterStream</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.subscribetoshard.maxretries</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">10</td>
       <td>Integer</td>
       <td>The maximum number of <code>subscribeToShard</code> attempts if we get a recoverable exception.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.subscribetoshard.backoff.base</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">1000</td>
       <td>Long</td>
       <td>The base backoff time (in milliseconds) between each <code>subscribeToShard</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.subscribetoshard.backoff.max</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">2000</td>
       <td>Long</td>
       <td>The maximum backoff time (in milliseconds) between each <code>subscribeToShard</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.subscribetoshard.backoff.expconst</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">1.5</td>
       <td>Double</td>
       <td>The power constant for exponential backoff between each <code>subscribeToShard</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.getrecords.maxrecordcount</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">10000</td>
       <td>Integer</td>
      <td>The maximum number of records to try to get each time we fetch records from an AWS Kinesis shard.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.getrecords.maxretries</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">3</td>
       <td>Integer</td>
       <td>The maximum number of <code>getRecords</code> attempts if we get a recoverable exception.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.getrecords.backoff.base</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">300</td>
       <td>Long</td>
       <td>The base backoff time (in milliseconds) between <code>getRecords</code> attempts if we get a ProvisionedThroughputExceededException.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.getrecords.backoff.max</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">1000</td>
       <td>Long</td>
       <td>The maximum backoff time (in milliseconds) between <code>getRecords</code> attempts if we get a ProvisionedThroughputExceededException.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.getrecords.backoff.expconst</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">1.5</td>
       <td>Double</td>
       <td>The power constant for exponential backoff between each <code>getRecords</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.getrecords.intervalmillis</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">200</td>
       <td>Long</td>
      <td>The interval (in milliseconds) between each <code>getRecords</code> request to an AWS Kinesis shard.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.getiterator.maxretries</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">3</td>
       <td>Integer</td>
       <td>The maximum number of <code>getShardIterator</code> attempts if we get ProvisionedThroughputExceededException.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.getiterator.backoff.base</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">300</td>
       <td>Long</td>
       <td>The base backoff time (in milliseconds) between <code>getShardIterator</code> attempts if we get a ProvisionedThroughputExceededException.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.getiterator.backoff.max</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">1000</td>
       <td>Long</td>
       <td>The maximum backoff time (in milliseconds) between <code>getShardIterator</code> attempts if we get a ProvisionedThroughputExceededException.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.getiterator.backoff.expconst</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">1.5</td>
       <td>Double</td>
       <td>The power constant for exponential backoff between each <code>getShardIterator</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.discovery.intervalmillis</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">10000</td>
       <td>Integer</td>
      <td>The interval (in milliseconds) between each attempt to discover new shards.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.adaptivereads</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">false</td>
       <td>Boolean</td>
       <td>The config to turn on adaptive reads from a shard. See the <code>AdaptivePollingRecordPublisher</code> documentation for details.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.idle.interval</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">-1</td>
       <td>Long</td>
       <td>The interval (in milliseconds) after which to consider a shard idle for purposes of watermark generation. A positive value will allow the watermark to progress even when some shards don't receive new records.</td>
     </tr>
     <tr>
       <td><h5>scan.watermark.sync.interval</h5></td>
       <td>optional</td>
+      <td>no</td>

Review comment:
       With the new dual representation of `mapType`, I guess we could update the implementation and make it possible.







[GitHub] [flink] twalthr commented on a change in pull request #18290: [FLINK-25391][connectors][formats] Update existing table factories/format factories for catalog table options forwarding

Posted by GitBox <gi...@apache.org>.
twalthr commented on a change in pull request #18290:
URL: https://github.com/apache/flink/pull/18290#discussion_r784047688



##########
File path: docs/content/docs/connectors/table/kafka.md
##########
@@ -179,50 +179,57 @@ Connector Options
     <tr>
       <th class="text-left" style="width: 25%">Option</th>
       <th class="text-center" style="width: 8%">Required</th>
+      <th class="text-center" style="width: 8%">Forwarded</th>
       <th class="text-center" style="width: 7%">Default</th>
       <th class="text-center" style="width: 10%">Type</th>
-      <th class="text-center" style="width: 50%">Description</th>
+      <th class="text-center" style="width: 42%">Description</th>
     </tr>
     </thead>
     <tbody>
     <tr>
       <td><h5>connector</h5></td>
       <td>required</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">(none)</td>
       <td>String</td>
       <td>Specify what connector to use, for Kafka use <code>'kafka'</code>.</td>
     </tr>
     <tr>
       <td><h5>topic</h5></td>
       <td>required for sink</td>
+      <td>yes</td>
       <td style="word-wrap: break-word;">(none)</td>
       <td>String</td>
       <td>Topic name(s) to read data from when the table is used as source. It also supports topic list for source by separating topic by semicolon like <code>'topic-1;topic-2'</code>. Note, only one of "topic-pattern" and "topic" can be specified for sources. When the table is used as sink, the topic name is the topic to write data to. Note topic list is not supported for sinks.</td>
     </tr>
     <tr>
       <td><h5>topic-pattern</h5></td>
       <td>optional</td>
+      <td>yes</td>
       <td style="word-wrap: break-word;">(none)</td>
       <td>String</td>
       <td>The regular expression for a pattern of topic names to read from. All topics with names that match the specified regular expression will be subscribed by the consumer when the job starts running. Note, only one of "topic-pattern" and "topic" can be specified for sources.</td>
     </tr>
     <tr>
       <td><h5>properties.bootstrap.servers</h5></td>
       <td>required</td>
+      <td>yes</td>
       <td style="word-wrap: break-word;">(none)</td>
       <td>String</td>
       <td>Comma separated list of Kafka brokers.</td>
     </tr>
     <tr>
       <td><h5>properties.group.id</h5></td>
       <td>optional for source, not applicable for sink</td>
+      <td>yes</td>
       <td style="word-wrap: break-word;">(none)</td>
       <td>String</td>
       <td>The id of the consumer group for Kafka source. If group ID is not specified, an automatically generated id "KafkaSource-{tableIdentifier}" will be used.</td>
     </tr>
     <tr>
       <td><h5>properties.*</h5></td>
       <td>optional</td>
+      <td>no</td>

Review comment:
       Internal implementation detail. The Kafka table factory needs an update to make it forwardable. Since Kafka is still our most important connector, maybe we should do this ASAP @slinkydeveloper?
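
A rough illustration of what making `properties.*` forwardable would involve; `forwardPrefixedOptions` below is a hypothetical helper (no such method exists in `FactoryUtil` today) that merges every key under a prefix from the catalog table options:

```java
import java.util.Map;

// Hypothetical sketch: a prefix-aware merge that the table factory helper
// would need in order to forward wildcard options such as 'properties.*'.
// This helper does not exist in Flink today.
static void forwardPrefixedOptions(
        Map<String, String> catalogOptions,
        Map<String, String> planOptions,
        String prefix) {
    catalogOptions.forEach(
            (key, value) -> {
                // merge only the catalog-provided Kafka client properties
                if (key.startsWith(prefix)) {
                    planOptions.put(key, value);
                }
            });
}
```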







[GitHub] [flink] flinkbot edited a comment on pull request #18290: [FLINK-25391][connectors][formats] Update existing table factories/format factories for catalog table options forwarding

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18290:
URL: https://github.com/apache/flink/pull/18290#issuecomment-1006716199


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "57b0ecb6e3ff211665124963a4e3f35a5cd8929b",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29059",
       "triggerID" : "57b0ecb6e3ff211665124963a4e3f35a5cd8929b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "f2603c0005990aa622277c635475d95ee2c049a7",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29346",
       "triggerID" : "f2603c0005990aa622277c635475d95ee2c049a7",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 57b0ecb6e3ff211665124963a4e3f35a5cd8929b Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29059) 
   * f2603c0005990aa622277c635475d95ee2c049a7 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29346) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>





[GitHub] [flink] flinkbot edited a comment on pull request #18290: [FLINK-25391][connectors][formats] Update existing table factories/format factories for catalog table options forwarding

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18290:
URL: https://github.com/apache/flink/pull/18290#issuecomment-1006716199


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "57b0ecb6e3ff211665124963a4e3f35a5cd8929b",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29059",
       "triggerID" : "57b0ecb6e3ff211665124963a4e3f35a5cd8929b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "f2603c0005990aa622277c635475d95ee2c049a7",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29346",
       "triggerID" : "f2603c0005990aa622277c635475d95ee2c049a7",
       "triggerType" : "PUSH"
     }, {
       "hash" : "8480af8298b5fa73b3a47dc97ebea031400e660a",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29347",
       "triggerID" : "8480af8298b5fa73b3a47dc97ebea031400e660a",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0d8e419c43838e6d85155feadee7e096e3990e0d",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29368",
       "triggerID" : "0d8e419c43838e6d85155feadee7e096e3990e0d",
       "triggerType" : "PUSH"
     }, {
       "hash" : "35ffc4d144540ac24d28499395079ff34d32f4eb",
       "status" : "SUCCESS",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29416",
       "triggerID" : "35ffc4d144540ac24d28499395079ff34d32f4eb",
       "triggerType" : "PUSH"
     }, {
       "hash" : "044804a54474295d18f075e5649e1f551171845a",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "044804a54474295d18f075e5649e1f551171845a",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 35ffc4d144540ac24d28499395079ff34d32f4eb Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29416) 
   * 044804a54474295d18f075e5649e1f551171845a UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>





[GitHub] [flink] flinkbot commented on pull request #18290: [FLINK-25391][connectors][formats] Update existing table factories/format factories for catalog table options forwarding

Posted by GitBox <gi...@apache.org>.
flinkbot commented on pull request #18290:
URL: https://github.com/apache/flink/pull/18290#issuecomment-1006715442


   Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress of the review.
   
   
   ## Automated Checks
   Last check on commit 57b0ecb6e3ff211665124963a4e3f35a5cd8929b (Thu Jan 06 16:14:31 UTC 2022)
   
   **Warnings:**
    * No documentation files were touched! Remember to keep the Flink docs up to date!
   
   
   <sub>Mention the bot in a comment to re-run the automated checks.</sub>
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process.<details>
    The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`)
    - `@flinkbot approve all` to approve all aspects
    - `@flinkbot approve-until architecture` to approve everything until `architecture`
    - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention
    - `@flinkbot disapprove architecture` to remove an approval you gave earlier
   </details>





[GitHub] [flink] JingGe commented on a change in pull request #18290: [FLINK-25391][connectors][formats] Update existing table factories/format factories for catalog table options forwarding

Posted by GitBox <gi...@apache.org>.
JingGe commented on a change in pull request #18290:
URL: https://github.com/apache/flink/pull/18290#discussion_r786978426



##########
File path: flink-connectors/flink-connector-hbase-2.2/src/main/java/org/apache/flink/connector/hbase2/HBase2DynamicTableFactory.java
##########
@@ -136,4 +133,20 @@ public String factoryIdentifier() {
         set.add(LOOKUP_MAX_RETRIES);
         return set;
     }
+
+    @Override
+    public Set<ConfigOption<?>> forwardOptions() {

Review comment:
       Does it make sense to adapt it in this PR too? Otherwise the `ConfigOptionsDocGenerator` will be broken, and the reason will be hard to track down in the future.
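
A minimal sketch of the kind of override under discussion, assuming the `ConfigOption` constants from `HBaseConnectorOptions` that appear elsewhere in this PR; the concrete set is whatever the factory considers safe to merge from the catalog table:

```java
import java.util.Set;
import java.util.stream.Collectors;
import java.util.stream.Stream;

import org.apache.flink.configuration.ConfigOption;

// Sketch only: TABLE_NAME, ZOOKEEPER_QUORUM, etc. are assumed to be the
// static ConfigOption<?> fields defined in HBaseConnectorOptions.
@Override
public Set<ConfigOption<?>> forwardOptions() {
    return Stream.of(
                    TABLE_NAME,
                    ZOOKEEPER_QUORUM,
                    ZOOKEEPER_ZNODE_PARENT,
                    NULL_STRING_LITERAL)
            .collect(Collectors.toSet());
}
```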







[GitHub] [flink] JingGe commented on a change in pull request #18290: [FLINK-25391][connectors][formats] Update existing table factories/format factories for catalog table options forwarding

Posted by GitBox <gi...@apache.org>.
JingGe commented on a change in pull request #18290:
URL: https://github.com/apache/flink/pull/18290#discussion_r788657052



##########
File path: flink-connectors/flink-connector-hbase-2.2/src/main/java/org/apache/flink/connector/hbase2/HBase2DynamicTableFactory.java
##########
@@ -136,4 +133,20 @@ public String factoryIdentifier() {
         set.add(LOOKUP_MAX_RETRIES);
         return set;
     }
+
+    @Override
+    public Set<ConfigOption<?>> forwardOptions() {

Review comment:
       Now I get your point. We had a content-out-of-sync issue previously, at least for HBase, and there is a [ticket](https://issues.apache.org/jira/browse/FLINK-25506) for fixing it.







[GitHub] [flink] twalthr closed pull request #18290: [FLINK-25391][connectors][formats] Update existing table factories/format factories for catalog table options forwarding

Posted by GitBox <gi...@apache.org>.
twalthr closed pull request #18290:
URL: https://github.com/apache/flink/pull/18290


   





[GitHub] [flink] flinkbot commented on pull request #18290: [FLINK-25391][connectors][formats] Update existing table factories/format factories for catalog table options forwarding

Posted by GitBox <gi...@apache.org>.
flinkbot commented on pull request #18290:
URL: https://github.com/apache/flink/pull/18290#issuecomment-1006716199


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "57b0ecb6e3ff211665124963a4e3f35a5cd8929b",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "57b0ecb6e3ff211665124963a4e3f35a5cd8929b",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 57b0ecb6e3ff211665124963a4e3f35a5cd8929b UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>





[GitHub] [flink] flinkbot edited a comment on pull request #18290: [FLINK-25391][connectors][formats] Update existing table factories/format factories for catalog table options forwarding

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18290:
URL: https://github.com/apache/flink/pull/18290#issuecomment-1006716199


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "57b0ecb6e3ff211665124963a4e3f35a5cd8929b",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29059",
       "triggerID" : "57b0ecb6e3ff211665124963a4e3f35a5cd8929b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "f2603c0005990aa622277c635475d95ee2c049a7",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29346",
       "triggerID" : "f2603c0005990aa622277c635475d95ee2c049a7",
       "triggerType" : "PUSH"
     }, {
       "hash" : "8480af8298b5fa73b3a47dc97ebea031400e660a",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29347",
       "triggerID" : "8480af8298b5fa73b3a47dc97ebea031400e660a",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 57b0ecb6e3ff211665124963a4e3f35a5cd8929b Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29059) 
   * f2603c0005990aa622277c635475d95ee2c049a7 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29346) 
   * 8480af8298b5fa73b3a47dc97ebea031400e660a Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29347) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>





[GitHub] [flink] slinkydeveloper commented on a change in pull request #18290: [FLINK-25391][connectors][formats] Update existing table factories/format factories for catalog table options forwarding

Posted by GitBox <gi...@apache.org>.
slinkydeveloper commented on a change in pull request #18290:
URL: https://github.com/apache/flink/pull/18290#discussion_r784640093



##########
File path: docs/content/docs/connectors/table/filesystem.md
##########
@@ -325,33 +347,43 @@ Time extractors define extracting time from partition values.
 <table class="table table-bordered">
   <thead>
     <tr>
-        <th class="text-left" style="width: 20%">Key</th>
-        <th class="text-left" style="width: 15%">Default</th>
+        <th class="text-left" style="width: 25%">Option</th>
+        <th class="text-left" style="width: 8%">Required</th>
+        <th class="text-left" style="width: 8%">Forwarded</th>
+        <th class="text-left" style="width: 7%">Default</th>
         <th class="text-left" style="width: 10%">Type</th>
-        <th class="text-left" style="width: 55%">Description</th>
+        <th class="text-left" style="width: 42%">Description</th>
     </tr>
   </thead>
   <tbody>
     <tr>
         <td><h5>partition.time-extractor.kind</h5></td>
+        <td>optional</td>
+        <td>yes</td>
         <td style="word-wrap: break-word;">default</td>
         <td>String</td>
         <td>Time extractor to extract time from partition values. Support default and custom. For default, can configure timestamp pattern\formatter. For custom, should configure extractor class.</td>
     </tr>
     <tr>
         <td><h5>partition.time-extractor.class</h5></td>
+        <td>optional</td>
+        <td>yes</td>

Review comment:
       They can change the configuration of the operator, but can they change its state?







[GitHub] [flink] slinkydeveloper commented on a change in pull request #18290: [FLINK-25391][connectors][formats] Update existing table factories/format factories for catalog table options forwarding

Posted by GitBox <gi...@apache.org>.
slinkydeveloper commented on a change in pull request #18290:
URL: https://github.com/apache/flink/pull/18290#discussion_r784700282



##########
File path: docs/content/docs/connectors/table/filesystem.md
##########
@@ -325,33 +347,43 @@ Time extractors define extracting time from partition values.
 <table class="table table-bordered">
   <thead>
     <tr>
-        <th class="text-left" style="width: 20%">Key</th>
-        <th class="text-left" style="width: 15%">Default</th>
+        <th class="text-left" style="width: 25%">Option</th>
+        <th class="text-left" style="width: 8%">Required</th>
+        <th class="text-left" style="width: 8%">Forwarded</th>
+        <th class="text-left" style="width: 7%">Default</th>
         <th class="text-left" style="width: 10%">Type</th>
-        <th class="text-left" style="width: 55%">Description</th>
+        <th class="text-left" style="width: 42%">Description</th>
     </tr>
   </thead>
   <tbody>
     <tr>
         <td><h5>partition.time-extractor.kind</h5></td>
+        <td>optional</td>
+        <td>yes</td>
         <td style="word-wrap: break-word;">default</td>
         <td>String</td>
         <td>Time extractor to extract time from partition values. Support default and custom. For default, can configure timestamp pattern\formatter. For custom, should configure extractor class.</td>
     </tr>
     <tr>
         <td><h5>partition.time-extractor.class</h5></td>
+        <td>optional</td>
+        <td>yes</td>

Review comment:
       Removed, check now







[GitHub] [flink] twalthr commented on a change in pull request #18290: [FLINK-25391][connectors][formats] Update existing table factories/format factories for catalog table options forwarding

Posted by GitBox <gi...@apache.org>.
twalthr commented on a change in pull request #18290:
URL: https://github.com/apache/flink/pull/18290#discussion_r784046672



##########
File path: docs/content/docs/connectors/table/formats/json.md
##########
@@ -69,29 +69,33 @@ Format Options
       <tr>
         <th class="text-left" style="width: 25%">Option</th>
         <th class="text-center" style="width: 8%">Required</th>
+        <th class="text-center" style="width: 8%">Forwarded</th>
         <th class="text-center" style="width: 7%">Default</th>
         <th class="text-center" style="width: 10%">Type</th>
-        <th class="text-center" style="width: 50%">Description</th>
+        <th class="text-center" style="width: 42%">Description</th>
       </tr>
     </thead>
     <tbody>
     <tr>
       <td><h5>format</h5></td>
       <td>required</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">(none)</td>
       <td>String</td>
       <td>Specify what format to use, here should be <code>'json'</code>.</td>
     </tr>
     <tr>
       <td><h5>json.fail-on-missing-field</h5></td>
       <td>optional</td>
+      <td>no</td>

Review comment:
       isn't the logic of `ignore-parse-errors` and `fail-on-missing-field` similar? I also don't have a strong opinion, but both should be consistent.







[GitHub] [flink] twalthr commented on a change in pull request #18290: [FLINK-25391][connectors][formats] Update existing table factories/format factories for catalog table options forwarding

Posted by GitBox <gi...@apache.org>.
twalthr commented on a change in pull request #18290:
URL: https://github.com/apache/flink/pull/18290#discussion_r784663143



##########
File path: docs/content/docs/connectors/table/elasticsearch.md
##########
@@ -67,15 +67,17 @@ Connector Options
       <tr>
         <th class="text-left" style="width: 25%">Option</th>
         <th class="text-center" style="width: 8%">Required</th>
+        <th class="text-center" style="width: 8%">Forwarded</th>

Review comment:
       I still like the name proposed in the FLIP. How about `mutable`? I agree that the header should be a link later.







[GitHub] [flink] flinkbot edited a comment on pull request #18290: [FLINK-25391][connectors][formats] Update existing table factories/format factories for catalog table options forwarding

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18290:
URL: https://github.com/apache/flink/pull/18290#issuecomment-1006716199


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "57b0ecb6e3ff211665124963a4e3f35a5cd8929b",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29059",
       "triggerID" : "57b0ecb6e3ff211665124963a4e3f35a5cd8929b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "f2603c0005990aa622277c635475d95ee2c049a7",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29346",
       "triggerID" : "f2603c0005990aa622277c635475d95ee2c049a7",
       "triggerType" : "PUSH"
     }, {
       "hash" : "8480af8298b5fa73b3a47dc97ebea031400e660a",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29347",
       "triggerID" : "8480af8298b5fa73b3a47dc97ebea031400e660a",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0d8e419c43838e6d85155feadee7e096e3990e0d",
       "status" : "SUCCESS",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29368",
       "triggerID" : "0d8e419c43838e6d85155feadee7e096e3990e0d",
       "triggerType" : "PUSH"
     }, {
       "hash" : "35ffc4d144540ac24d28499395079ff34d32f4eb",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "35ffc4d144540ac24d28499395079ff34d32f4eb",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 0d8e419c43838e6d85155feadee7e096e3990e0d Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29368) 
   * 35ffc4d144540ac24d28499395079ff34d32f4eb UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>





[GitHub] [flink] slinkydeveloper commented on a change in pull request #18290: [FLINK-25391][connectors][formats] Update existing table factories/format factories for catalog table options forwarding

Posted by GitBox <gi...@apache.org>.
slinkydeveloper commented on a change in pull request #18290:
URL: https://github.com/apache/flink/pull/18290#discussion_r783279489



##########
File path: flink-connectors/flink-connector-hbase-1.4/src/main/java/org/apache/flink/connector/hbase1/HBase1DynamicTableFactory.java
##########
@@ -65,16 +65,23 @@
     @Override
     public DynamicTableSource createDynamicTableSource(Context context) {
         TableFactoryHelper helper = createTableFactoryHelper(this, context);
+        helper.forwardOptions(
+                TABLE_NAME,
+                ZOOKEEPER_ZNODE_PARENT,
+                ZOOKEEPER_QUORUM,
+                NULL_STRING_LITERAL,
+                LOOKUP_ASYNC,

Review comment:
       Seems like it's an HBase 1 option as well, as it's used in `HBaseConnectorOptionsUtil#getHBaseLookupOptions`, which the HBase 1 connector also uses. Will add it among the optional options, as in HBase 2.
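
The change described would amount to mirroring the HBase 2 factory inside `optionalOptions()`, e.g. a sketch along these lines:

```java
// Sketch: declare the async lookup option as optional in the HBase 1
// factory as well (constant assumed from HBaseConnectorOptions).
set.add(LOOKUP_ASYNC);
```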







[GitHub] [flink] twalthr commented on a change in pull request #18290: [FLINK-25391][connectors][formats] Update existing table factories/format factories for catalog table options forwarding

Posted by GitBox <gi...@apache.org>.
twalthr commented on a change in pull request #18290:
URL: https://github.com/apache/flink/pull/18290#discussion_r782015915



##########
File path: flink-connectors/flink-connector-hbase-2.2/src/main/java/org/apache/flink/connector/hbase2/HBase2DynamicTableFactory.java
##########
@@ -67,16 +66,23 @@
     @Override
     public DynamicTableSource createDynamicTableSource(Context context) {
         TableFactoryHelper helper = createTableFactoryHelper(this, context);
+        helper.forwardOptions(
+                TABLE_NAME,
+                ZOOKEEPER_ZNODE_PARENT,
+                ZOOKEEPER_QUORUM,
+                NULL_STRING_LITERAL,
+                LOOKUP_ASYNC,

Review comment:
       don't allow

##########
File path: flink-connectors/flink-connector-files/src/main/java/org/apache/flink/connector/file/table/FileSystemTableFactory.java
##########
@@ -79,9 +86,27 @@ public DynamicTableSource createDynamicTableSource(Context context) {
     @Override
     public DynamicTableSink createDynamicTableSink(Context context) {
         FactoryUtil.TableFactoryHelper helper = FactoryUtil.createTableFactoryHelper(this, context);
+        helper.forwardOptions(
+                FileSystemConnectorOptions.PATH,
+                FileSystemConnectorOptions.PARTITION_DEFAULT_NAME,
+                FileSystemConnectorOptions.PARTITION_TIME_EXTRACTOR_KIND,
+                FileSystemConnectorOptions.PARTITION_TIME_EXTRACTOR_CLASS,
+                FileSystemConnectorOptions.PARTITION_TIME_EXTRACTOR_TIMESTAMP_PATTERN,
+                FileSystemConnectorOptions.SINK_ROLLING_POLICY_FILE_SIZE,
+                FileSystemConnectorOptions.SINK_ROLLING_POLICY_ROLLOVER_INTERVAL,
+                FileSystemConnectorOptions.SINK_ROLLING_POLICY_CHECK_INTERVAL,
+                FileSystemConnectorOptions.SINK_SHUFFLE_BY_PARTITION,

Review comment:
       this sounds dangerous. let's exclude everything except for PATH in the first version? Otherwise we should ask someone from the SDK team.
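
The conservative variant would then be a one-liner on the same helper shown in the diff, a sketch:

```java
// Sketch: forward only the path; keep all partition/sink options
// out of the forwarded set for now.
helper.forwardOptions(FileSystemConnectorOptions.PATH);
```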

##########
File path: flink-connectors/flink-connector-hbase-1.4/src/main/java/org/apache/flink/connector/hbase1/HBase1DynamicTableFactory.java
##########
@@ -65,16 +65,23 @@
     @Override
     public DynamicTableSource createDynamicTableSource(Context context) {
         TableFactoryHelper helper = createTableFactoryHelper(this, context);
+        helper.forwardOptions(
+                TABLE_NAME,
+                ZOOKEEPER_ZNODE_PARENT,
+                ZOOKEEPER_QUORUM,
+                NULL_STRING_LITERAL,
+                LOOKUP_ASYNC,

Review comment:
       not an HBase 1 option? at least it is neither among the optional nor the required options

##########
File path: flink-connectors/flink-connector-files/src/main/java/org/apache/flink/connector/file/table/FileSystemTableSink.java
##########
@@ -118,19 +118,19 @@
 
     FileSystemTableSink(
             DynamicTableFactory.Context context,
+            ReadableConfig config,

Review comment:
       same comment as above. let's not pass the context but only what is needed by the sink.

##########
File path: flink-connectors/flink-connector-files/src/main/java/org/apache/flink/connector/file/table/AbstractFileSystemTable.java
##########
@@ -43,11 +43,10 @@
 
     List<String> partitionKeys;
 
-    AbstractFileSystemTable(DynamicTableFactory.Context context) {
+    AbstractFileSystemTable(DynamicTableFactory.Context context, ReadableConfig config) {

Review comment:
       do we still need the `Context` as an argument? can we simply pass the objects that we need? I don't think it is bad to list what is actually used in the constructor.
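
A sketch of the constructor shape being asked for (a later diff in this thread shows essentially this revision), passing exactly the objects the table uses instead of the whole `Context`; field names and types are assumed to match the diff above:

```java
// Sketch: no factory Context needed, only what the table actually uses.
AbstractFileSystemTable(
        ObjectIdentifier tableIdentifier,
        ResolvedSchema schema,
        List<String> partitionKeys,
        ReadableConfig tableOptions) {
    this.tableIdentifier = tableIdentifier;
    this.schema = schema;
    this.partitionKeys = partitionKeys;
    this.tableOptions = tableOptions;
}
```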







[GitHub] [flink] twalthr commented on a change in pull request #18290: [FLINK-25391][connectors][formats] Update existing table factories/format factories for catalog table options forwarding

Posted by GitBox <gi...@apache.org>.
twalthr commented on a change in pull request #18290:
URL: https://github.com/apache/flink/pull/18290#discussion_r783903468



##########
File path: docs/content/docs/connectors/table/filesystem.md
##########
@@ -451,15 +491,19 @@ The parallelism of writing files into external file system (including Hive) can
 <table class="table table-bordered">
   <thead>
     <tr>
-        <th class="text-left" style="width: 20%">Key</th>
-        <th class="text-left" style="width: 15%">Default</th>
+        <th class="text-left" style="width: 25%">Option</th>
+        <th class="text-left" style="width: 8%">Required</th>
+        <th class="text-left" style="width: 8%">Forwarded</th>
+        <th class="text-left" style="width: 7%">Default</th>
         <th class="text-left" style="width: 10%">Type</th>
-        <th class="text-left" style="width: 55%">Description</th>
+        <th class="text-left" style="width: 42%">Description</th>
     </tr>
   </thead>
   <tbody>
     <tr>
         <td><h5>sink.parallelism</h5></td>
+        <td>optional</td>
+        <td>false</td>

Review comment:
       `no`
   
   side comment: We should generate those tables automatically soon.

##########
File path: docs/content/docs/connectors/table/formats/json.md
##########
@@ -69,29 +69,33 @@ Format Options
       <tr>
         <th class="text-left" style="width: 25%">Option</th>
         <th class="text-center" style="width: 8%">Required</th>
+        <th class="text-center" style="width: 8%">Forwarded</th>
         <th class="text-center" style="width: 7%">Default</th>
         <th class="text-center" style="width: 10%">Type</th>
-        <th class="text-center" style="width: 50%">Description</th>
+        <th class="text-center" style="width: 42%">Description</th>
       </tr>
     </thead>
     <tbody>
     <tr>
       <td><h5>format</h5></td>
       <td>required</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">(none)</td>
       <td>String</td>
       <td>Specify what format to use, here should be <code>'json'</code>.</td>
     </tr>
     <tr>
       <td><h5>json.fail-on-missing-field</h5></td>
       <td>optional</td>
+      <td>no</td>

Review comment:
       why is this `no` but `ignore-parse-errors` is `yes`?

##########
File path: flink-connectors/flink-connector-elasticsearch-base/src/main/java/org/apache/flink/connector/elasticsearch/table/ElasticsearchDynamicSinkFactoryBase.java
##########
@@ -224,6 +215,28 @@ static void validate(boolean condition, Supplier<String> message) {
                 .collect(Collectors.toSet());
     }
 
+    @Override
+    public Set<ConfigOption<?>> forwardOptions() {
+        return Stream.of(
+                        HOSTS_OPTION,
+                        INDEX_OPTION,
+                        PASSWORD_OPTION,
+                        USERNAME_OPTION,
+                        KEY_DELIMITER_OPTION,
+                        BULK_FLUSH_MAX_ACTIONS_OPTION,
+                        BULK_FLUSH_MAX_SIZE_OPTION,
+                        BULK_FLUSH_INTERVAL_OPTION,
+                        BULK_FLUSH_BACKOFF_TYPE_OPTION,
+                        BULK_FLUSH_BACKOFF_MAX_RETRIES_OPTION,
+                        BULK_FLUSH_BACKOFF_DELAY_OPTION,
+                        CONNECTION_PATH_PREFIX_OPTION,
+                        CONNECTION_REQUEST_TIMEOUT,
+                        CONNECTION_TIMEOUT,
+                        SOCKET_TIMEOUT,
+                        DELIVERY_GUARANTEE_OPTION)

Review comment:
       for Kafka I think we disable this?

##########
File path: flink-connectors/flink-connector-files/src/main/java/org/apache/flink/connector/file/table/AbstractFileSystemTable.java
##########
@@ -43,16 +41,17 @@
 
     List<String> partitionKeys;
 
-    AbstractFileSystemTable(DynamicTableFactory.Context context) {
-        this.context = context;
-        this.tableIdentifier = context.getObjectIdentifier();
-        this.tableOptions = new Configuration();
-        context.getCatalogTable().getOptions().forEach(tableOptions::setString);
-        this.schema = context.getCatalogTable().getResolvedSchema();
+    AbstractFileSystemTable(
+            ObjectIdentifier tableIdentifier,
+            ResolvedSchema schema,
+            List<String> partitionKeys,
+            ReadableConfig config) {
+        this.tableIdentifier = tableIdentifier;
+        this.tableOptions = (Configuration) config;

Review comment:
       name it `tableOptions`

##########
File path: docs/content/docs/connectors/table/kinesis.md
##########
@@ -178,69 +184,79 @@ Connector Options
     <tr>
       <td><h5>aws.credentials.provider</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">AUTO</td>
       <td>String</td>
       <td>A credentials provider to use when authenticating against the Kinesis endpoint. See <a href="#authentication">Authentication</a> for details.</td>
     </tr>
     <tr>
 	  <td><h5>aws.credentials.basic.accesskeyid</h5></td>
 	  <td>optional</td>
+      <td>no</td>
 	  <td style="word-wrap: break-word;">(none)</td>
 	  <td>String</td>
 	  <td>The AWS access key ID to use when setting credentials provider type to BASIC.</td>
     </tr>
     <tr>
 	  <td><h5>aws.credentials.basic.secretkey</h5></td>
 	  <td>optional</td>
+      <td>no</td>
 	  <td style="word-wrap: break-word;">(none)</td>
 	  <td>String</td>
 	  <td>The AWS secret key to use when setting credentials provider type to BASIC.</td>
     </tr>
     <tr>
 	  <td><h5>aws.credentials.profile.path</h5></td>
 	  <td>optional</td>
+      <td>no</td>
 	  <td style="word-wrap: break-word;">(none)</td>
 	  <td>String</td>
 	  <td>Optional configuration for profile path if credential provider type is set to be PROFILE.</td>
     </tr>
     <tr>
 	  <td><h5>aws.credentials.profile.name</h5></td>
 	  <td>optional</td>
+      <td>no</td>
 	  <td style="word-wrap: break-word;">(none)</td>
 	  <td>String</td>
 	  <td>Optional configuration for profile name if credential provider type is set to be PROFILE.</td>
     </tr>
     <tr>
 	  <td><h5>aws.credentials.role.arn</h5></td>
 	  <td>optional</td>
+      <td>no</td>
 	  <td style="word-wrap: break-word;">(none)</td>
 	  <td>String</td>
 	  <td>The role ARN to use when credential provider type is set to ASSUME_ROLE or WEB_IDENTITY_TOKEN.</td>
     </tr>
     <tr>
 	  <td><h5>aws.credentials.role.sessionName</h5></td>
 	  <td>optional</td>
+      <td>no</td>
 	  <td style="word-wrap: break-word;">(none)</td>
 	  <td>String</td>
 	  <td>The role session name to use when credential provider type is set to ASSUME_ROLE or WEB_IDENTITY_TOKEN.</td>
     </tr>
     <tr>
 	  <td><h5>aws.credentials.role.externalId</h5></td>
 	  <td>optional</td>
+      <td>no</td>

Review comment:
       invalid formatting at various locations in this file

##########
File path: flink-formats/flink-avro-confluent-registry/src/main/java/org/apache/flink/formats/avro/registry/confluent/RegistryAvroFormatFactory.java
##########
@@ -175,6 +175,14 @@ public String factoryIdentifier() {
         return options;
     }
 
+    @Override
+    public Set<ConfigOption<?>> forwardOptions() {
+        Set<ConfigOption<?>> options = new HashSet<>();
+        options.addAll(requiredOptions());

Review comment:
       this looks dangerous to me if somebody adds an option above. wouldn't it be better to list them explicitly?
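
An explicit variant along the suggested lines, assuming option constants such as `URL` and `SUBJECT` from the format's options class; the point is that a newly added required option would then not be forwarded by accident:

```java
// Sketch: enumerate the forwarded options explicitly instead of
// reusing requiredOptions().
@Override
public Set<ConfigOption<?>> forwardOptions() {
    Set<ConfigOption<?>> options = new HashSet<>();
    options.add(URL);     // assumed: the schema registry endpoint option
    options.add(SUBJECT); // assumed: the schema registry subject option
    return options;
}
```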

##########
File path: flink-connectors/flink-connector-files/src/main/java/org/apache/flink/connector/file/table/FileSystemTableSink.java
##########
@@ -117,20 +118,22 @@
     @Nullable private Integer configuredParallelism;
 
     FileSystemTableSink(
-            DynamicTableFactory.Context context,
+            ObjectIdentifier tableIdentifier,
+            ResolvedSchema schema,
+            List<String> partitionKeys,
+            ReadableConfig config,

Review comment:
       `tableOptions`







[GitHub] [flink] fapaul commented on a change in pull request #18290: [FLINK-25391][connectors][formats] Update existing table factories/format factories for catalog table options forwarding

Posted by GitBox <gi...@apache.org>.
fapaul commented on a change in pull request #18290:
URL: https://github.com/apache/flink/pull/18290#discussion_r783923691



##########
File path: docs/content/docs/connectors/table/elasticsearch.md
##########
@@ -67,15 +67,17 @@ Connector Options
       <tr>
         <th class="text-left" style="width: 25%">Option</th>
         <th class="text-center" style="width: 8%">Required</th>
+        <th class="text-center" style="width: 8%">Forwarded</th>

Review comment:
       Can we come up with a better term than `forwarded` to make clear that the state becomes incompatible? `forwarded` is derived from the actual implementation, but I do not think many users can relate to the term.

##########
File path: docs/content/docs/connectors/table/kinesis.md
##########
@@ -136,34 +137,39 @@ Connector Options
     <tr>
       <td><h5>connector</h5></td>
       <td>required</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">(none)</td>
       <td>String</td>
       <td>Specify what connector to use. For Kinesis use <code>'kinesis'</code>.</td>
     </tr>
     <tr>
       <td><h5>stream</h5></td>
       <td>required</td>
+      <td>yes</td>
       <td style="word-wrap: break-word;">(none)</td>
       <td>String</td>
       <td>Name of the Kinesis data stream backing this table.</td>
     </tr>
     <tr>
       <td><h5>format</h5></td>
       <td>required</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">(none)</td>
       <td>String</td>
       <td>The format used to deserialize and serialize Kinesis data stream records. See <a href="#data-type-mapping">Data Type Mapping</a> for details.</td>
     </tr>
     <tr>
       <td><h5>aws.region</h5></td>
       <td>optional</td>
+      <td>no</td>

Review comment:
       Why no?

##########
File path: docs/content/docs/connectors/table/hbase.md
##########
@@ -103,34 +105,39 @@ Connector Options
     <tr>
       <td><h5>table-name</h5></td>
       <td>required</td>
+      <td>yes</td>
       <td style="word-wrap: break-word;">(none)</td>
       <td>String</td>
       <td>The name of HBase table to connect. By default, the table is in 'default' namespace. To assign the table a specified namespace you need to use 'namespace:table'.</td>
     </tr>
     <tr>
       <td><h5>zookeeper.quorum</h5></td>
       <td>required</td>
+      <td>yes</td>
       <td style="word-wrap: break-word;">(none)</td>
       <td>String</td>
       <td>The HBase Zookeeper quorum.</td>
     </tr>
     <tr>
       <td><h5>zookeeper.znode.parent</h5></td>
       <td>optional</td>
+      <td>yes</td>
       <td style="word-wrap: break-word;">/hbase</td>
       <td>String</td>
       <td>The root dir in Zookeeper for HBase cluster.</td>
     </tr>
     <tr>
       <td><h5>null-string-literal</h5></td>
       <td>optional</td>
+      <td>yes</td>

Review comment:
       Can this make the state incompatible?

##########
File path: docs/content/docs/connectors/table/filesystem.md
##########
@@ -325,33 +347,43 @@ Time extractors define extracting time from partition values.
 <table class="table table-bordered">
   <thead>
     <tr>
-        <th class="text-left" style="width: 20%">Key</th>
-        <th class="text-left" style="width: 15%">Default</th>
+        <th class="text-left" style="width: 25%">Option</th>
+        <th class="text-left" style="width: 8%">Required</th>
+        <th class="text-left" style="width: 8%">Forwarded</th>
+        <th class="text-left" style="width: 7%">Default</th>
         <th class="text-left" style="width: 10%">Type</th>
-        <th class="text-left" style="width: 55%">Description</th>
+        <th class="text-left" style="width: 42%">Description</th>
     </tr>
   </thead>
   <tbody>
     <tr>
         <td><h5>partition.time-extractor.kind</h5></td>
+        <td>optional</td>
+        <td>yes</td>
         <td style="word-wrap: break-word;">default</td>
         <td>String</td>
         <td>Time extractor to extract time from partition values. Support default and custom. For default, can configure timestamp pattern\formatter. For custom, should configure extractor class.</td>
     </tr>
     <tr>
         <td><h5>partition.time-extractor.class</h5></td>
+        <td>optional</td>
+        <td>yes</td>

Review comment:
       All these partition configurations sound like they change the transformation.

##########
File path: docs/content/docs/connectors/table/kafka.md
##########
@@ -179,50 +179,57 @@ Connector Options
     <tr>
       <th class="text-left" style="width: 25%">Option</th>
       <th class="text-center" style="width: 8%">Required</th>
+      <th class="text-center" style="width: 8%">Forwarded</th>
       <th class="text-center" style="width: 7%">Default</th>
       <th class="text-center" style="width: 10%">Type</th>
-      <th class="text-center" style="width: 50%">Description</th>
+      <th class="text-center" style="width: 42%">Description</th>
     </tr>
     </thead>
     <tbody>
     <tr>
       <td><h5>connector</h5></td>
       <td>required</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">(none)</td>
       <td>String</td>
       <td>Specify what connector to use, for Kafka use <code>'kafka'</code>.</td>
     </tr>
     <tr>
       <td><h5>topic</h5></td>
       <td>required for sink</td>
+      <td>yes</td>
       <td style="word-wrap: break-word;">(none)</td>
       <td>String</td>
       <td>Topic name(s) to read data from when the table is used as source. It also supports topic list for source by separating topic by semicolon like <code>'topic-1;topic-2'</code>. Note, only one of "topic-pattern" and "topic" can be specified for sources. When the table is used as sink, the topic name is the topic to write data to. Note topic list is not supported for sinks.</td>
     </tr>
     <tr>
       <td><h5>topic-pattern</h5></td>
       <td>optional</td>
+      <td>yes</td>
       <td style="word-wrap: break-word;">(none)</td>
       <td>String</td>
       <td>The regular expression for a pattern of topic names to read from. All topics with names that match the specified regular expression will be subscribed by the consumer when the job starts running. Note, only one of "topic-pattern" and "topic" can be specified for sources.</td>
     </tr>
     <tr>
       <td><h5>properties.bootstrap.servers</h5></td>
       <td>required</td>
+      <td>yes</td>
       <td style="word-wrap: break-word;">(none)</td>
       <td>String</td>
       <td>Comma separated list of Kafka brokers.</td>
     </tr>
     <tr>
       <td><h5>properties.group.id</h5></td>
       <td>optional for source, not applicable for sink</td>
+      <td>yes</td>
       <td style="word-wrap: break-word;">(none)</td>
       <td>String</td>
       <td>The id of the consumer group for Kafka source. If group ID is not specified, an automatically generated id "KafkaSource-{tableIdentifier}" will be used.</td>
     </tr>
     <tr>
       <td><h5>properties.*</h5></td>
       <td>optional</td>
+      <td>no</td>

Review comment:
       Why is this no?

##########
File path: docs/content/docs/connectors/table/kinesis.md
##########
@@ -255,349 +271,399 @@ Connector Options
     <tr>
       <td><h5>scan.stream.initpos</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">LATEST</td>
       <td>String</td>
       <td>Initial position to be used when reading from the table. See <a href="#start-reading-position">Start Reading Position</a> for details.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.initpos-timestamp</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">(none)</td>
       <td>String</td>
       <td>The initial timestamp to start reading Kinesis stream from (when <code>scan.stream.initpos</code> is AT_TIMESTAMP). See <a href="#start-reading-position">Start Reading Position</a> for details.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.initpos-timestamp-format</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">yyyy-MM-dd'T'HH:mm:ss.SSSXXX</td>
       <td>String</td>
      <td>The date format of the initial timestamp to start reading the Kinesis stream from (when <code>scan.stream.initpos</code> is AT_TIMESTAMP). See <a href="#start-reading-position">Start Reading Position</a> for details.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.recordpublisher</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">POLLING</td>
       <td>String</td>
       <td>The <code>RecordPublisher</code> type to use for sources. See <a href="#enhanced-fan-out">Enhanced Fan-Out</a> for details.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.efo.consumername</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">(none)</td>
       <td>String</td>
       <td>The name of the EFO consumer to register with KDS. See <a href="#enhanced-fan-out">Enhanced Fan-Out</a> for details.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.efo.registration</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">LAZY</td>
       <td>String</td>
      <td>Determines how and when consumer de-/registration is performed (LAZY|EAGER|NONE). See <a href="#enhanced-fan-out">Enhanced Fan-Out</a> for details.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.efo.consumerarn</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">(none)</td>
       <td>String</td>
       <td>The prefix of consumer ARN for a given stream. See <a href="#enhanced-fan-out">Enhanced Fan-Out</a> for details.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.efo.http-client.max-concurrency</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">10000</td>
       <td>Integer</td>
       <td>Maximum number of allowed concurrent requests for the EFO client. See <a href="#enhanced-fan-out">Enhanced Fan-Out</a> for details.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.describe.maxretries</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">50</td>
       <td>Integer</td>
       <td>The maximum number of <code>describeStream</code> attempts if we get a recoverable exception.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.describe.backoff.base</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">2000</td>
       <td>Long</td>
       <td>The base backoff time (in milliseconds) between each <code>describeStream</code> attempt (for consuming from DynamoDB streams).</td>
     </tr>
     <tr>
       <td><h5>scan.stream.describe.backoff.max</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">5000</td>
       <td>Long</td>
      <td>The maximum backoff time (in milliseconds) between each <code>describeStream</code> attempt (for consuming from DynamoDB streams).</td>
     </tr>
     <tr>
       <td><h5>scan.stream.describe.backoff.expconst</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">1.5</td>
       <td>Double</td>
       <td>The power constant for exponential backoff between each <code>describeStream</code> attempt (for consuming from DynamoDB streams).</td>
     </tr>
     <tr>
       <td><h5>scan.list.shards.maxretries</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">10</td>
       <td>Integer</td>
       <td>The maximum number of <code>listShards</code> attempts if we get a recoverable exception.</td>
     </tr>
     <tr>
       <td><h5>scan.list.shards.backoff.base</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">1000</td>
       <td>Long</td>
       <td>The base backoff time (in milliseconds) between each <code>listShards</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.list.shards.backoff.max</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">5000</td>
       <td>Long</td>
       <td>The maximum backoff time (in milliseconds) between each <code>listShards</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.list.shards.backoff.expconst</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">1.5</td>
       <td>Double</td>
       <td>The power constant for exponential backoff between each <code>listShards</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.describestreamconsumer.maxretries</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">50</td>
       <td>Integer</td>
       <td>The maximum number of <code>describeStreamConsumer</code> attempts if we get a recoverable exception.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.describestreamconsumer.backoff.base</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">2000</td>
       <td>Long</td>
       <td>The base backoff time (in milliseconds) between each <code>describeStreamConsumer</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.describestreamconsumer.backoff.max</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">5000</td>
       <td>Long</td>
       <td>The maximum backoff time (in milliseconds) between each <code>describeStreamConsumer</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.describestreamconsumer.backoff.expconst</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">1.5</td>
       <td>Double</td>
       <td>The power constant for exponential backoff between each <code>describeStreamConsumer</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.registerstreamconsumer.maxretries</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">10</td>
       <td>Integer</td>
       <td>The maximum number of <code>registerStream</code> attempts if we get a recoverable exception.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.registerstreamconsumer.timeout</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">60</td>
       <td>Integer</td>
       <td>The maximum time in seconds to wait for a stream consumer to become active before giving up.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.registerstreamconsumer.backoff.base</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">500</td>
       <td>Long</td>
       <td>The base backoff time (in milliseconds) between each <code>registerStream</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.registerstreamconsumer.backoff.max</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">2000</td>
       <td>Long</td>
       <td>The maximum backoff time (in milliseconds) between each <code>registerStream</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.registerstreamconsumer.backoff.expconst</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">1.5</td>
       <td>Double</td>
       <td>The power constant for exponential backoff between each <code>registerStream</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.deregisterstreamconsumer.maxretries</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">10</td>
       <td>Integer</td>
       <td>The maximum number of <code>deregisterStream</code> attempts if we get a recoverable exception.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.deregisterstreamconsumer.timeout</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">60</td>
       <td>Integer</td>
       <td>The maximum time in seconds to wait for a stream consumer to deregister before giving up.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.deregisterstreamconsumer.backoff.base</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">500</td>
       <td>Long</td>
       <td>The base backoff time (in milliseconds) between each <code>deregisterStream</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.deregisterstreamconsumer.backoff.max</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">2000</td>
       <td>Long</td>
       <td>The maximum backoff time (in milliseconds) between each <code>deregisterStream</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.stream.deregisterstreamconsumer.backoff.expconst</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">1.5</td>
       <td>Double</td>
       <td>The power constant for exponential backoff between each <code>deregisterStream</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.subscribetoshard.maxretries</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">10</td>
       <td>Integer</td>
       <td>The maximum number of <code>subscribeToShard</code> attempts if we get a recoverable exception.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.subscribetoshard.backoff.base</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">1000</td>
       <td>Long</td>
       <td>The base backoff time (in milliseconds) between each <code>subscribeToShard</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.subscribetoshard.backoff.max</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">2000</td>
       <td>Long</td>
       <td>The maximum backoff time (in milliseconds) between each <code>subscribeToShard</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.subscribetoshard.backoff.expconst</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">1.5</td>
       <td>Double</td>
       <td>The power constant for exponential backoff between each <code>subscribeToShard</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.getrecords.maxrecordcount</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">10000</td>
       <td>Integer</td>
      <td>The maximum number of records to try to get each time we fetch records from an AWS Kinesis shard.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.getrecords.maxretries</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">3</td>
       <td>Integer</td>
       <td>The maximum number of <code>getRecords</code> attempts if we get a recoverable exception.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.getrecords.backoff.base</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">300</td>
       <td>Long</td>
       <td>The base backoff time (in milliseconds) between <code>getRecords</code> attempts if we get a ProvisionedThroughputExceededException.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.getrecords.backoff.max</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">1000</td>
       <td>Long</td>
       <td>The maximum backoff time (in milliseconds) between <code>getRecords</code> attempts if we get a ProvisionedThroughputExceededException.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.getrecords.backoff.expconst</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">1.5</td>
       <td>Double</td>
       <td>The power constant for exponential backoff between each <code>getRecords</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.getrecords.intervalmillis</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">200</td>
       <td>Long</td>
      <td>The interval (in milliseconds) between each <code>getRecords</code> request to an AWS Kinesis shard.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.getiterator.maxretries</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">3</td>
       <td>Integer</td>
      <td>The maximum number of <code>getShardIterator</code> attempts if we get a ProvisionedThroughputExceededException.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.getiterator.backoff.base</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">300</td>
       <td>Long</td>
       <td>The base backoff time (in milliseconds) between <code>getShardIterator</code> attempts if we get a ProvisionedThroughputExceededException.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.getiterator.backoff.max</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">1000</td>
       <td>Long</td>
       <td>The maximum backoff time (in milliseconds) between <code>getShardIterator</code> attempts if we get a ProvisionedThroughputExceededException.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.getiterator.backoff.expconst</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">1.5</td>
       <td>Double</td>
       <td>The power constant for exponential backoff between each <code>getShardIterator</code> attempt.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.discovery.intervalmillis</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">10000</td>
       <td>Integer</td>
      <td>The interval (in milliseconds) between each attempt to discover new shards.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.adaptivereads</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">false</td>
       <td>Boolean</td>
       <td>The config to turn on adaptive reads from a shard. See the <code>AdaptivePollingRecordPublisher</code> documentation for details.</td>
     </tr>
     <tr>
       <td><h5>scan.shard.idle.interval</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">-1</td>
       <td>Long</td>
       <td>The interval (in milliseconds) after which to consider a shard idle for purposes of watermark generation. A positive value will allow the watermark to progress even when some shards don't receive new records.</td>
     </tr>
     <tr>
       <td><h5>scan.watermark.sync.interval</h5></td>
       <td>optional</td>
+      <td>no</td>

Review comment:
       I don't understand most of the nos for the kinesis connector.
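
    For context, the forwarding flag under discussion amounts to a key-level merge at plan restore time: only options a factory declares as forwardable may be refreshed from the catalog table, while all others stay as compiled into the plan. A hedged, self-contained sketch of such a merge (all names are illustrative; this is not the Flink planner code):

    import java.util.HashMap;
    import java.util.Map;
    import java.util.Set;

    // Illustrative sketch of a key-level merge; not the Flink planner code.
    class ForwardingMergeSketch {

        static Map<String, String> merge(
                Map<String, String> planOptions,
                Map<String, String> catalogOptions,
                Set<String> forwardedKeys) {
            final Map<String, String> merged = new HashMap<>(planOptions);
            for (String key : forwardedKeys) {
                // Only forwarded keys may be refreshed from the catalog table;
                // everything else keeps the value compiled into the plan.
                if (catalogOptions.containsKey(key)) {
                    merged.put(key, catalogOptions.get(key));
                }
            }
            return merged;
        }
    }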

##########
File path: docs/content/docs/connectors/table/kinesis.md
##########
@@ -136,34 +137,39 @@ Connector Options
     <tr>
       <td><h5>connector</h5></td>
       <td>required</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">(none)</td>
       <td>String</td>
       <td>Specify what connector to use. For Kinesis use <code>'kinesis'</code>.</td>
     </tr>
     <tr>
       <td><h5>stream</h5></td>
       <td>required</td>
+      <td>yes</td>
       <td style="word-wrap: break-word;">(none)</td>
       <td>String</td>
       <td>Name of the Kinesis data stream backing this table.</td>
     </tr>
     <tr>
       <td><h5>format</h5></td>
       <td>required</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">(none)</td>
       <td>String</td>
       <td>The format used to deserialize and serialize Kinesis data stream records. See <a href="#data-type-mapping">Data Type Mapping</a> for details.</td>
     </tr>
     <tr>
       <td><h5>aws.region</h5></td>
       <td>optional</td>
+      <td>no</td>
       <td style="word-wrap: break-word;">(none)</td>
       <td>String</td>
      <td>The AWS region where the stream is defined. Either this or <code>aws.endpoint</code> is required.</td>
     </tr>
     <tr>
       <td><h5>aws.endpoint</h5></td>
       <td>optional</td>
+      <td>no</td>

Review comment:
       No?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18290: [FLINK-25391][connectors][formats] Update existing table factories/format factories for catalog table options forwarding

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18290:
URL: https://github.com/apache/flink/pull/18290#issuecomment-1006716199


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "57b0ecb6e3ff211665124963a4e3f35a5cd8929b",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29059",
       "triggerID" : "57b0ecb6e3ff211665124963a4e3f35a5cd8929b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "f2603c0005990aa622277c635475d95ee2c049a7",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29346",
       "triggerID" : "f2603c0005990aa622277c635475d95ee2c049a7",
       "triggerType" : "PUSH"
     }, {
       "hash" : "8480af8298b5fa73b3a47dc97ebea031400e660a",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29347",
       "triggerID" : "8480af8298b5fa73b3a47dc97ebea031400e660a",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0d8e419c43838e6d85155feadee7e096e3990e0d",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29368",
       "triggerID" : "0d8e419c43838e6d85155feadee7e096e3990e0d",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 8480af8298b5fa73b3a47dc97ebea031400e660a Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29347) 
   * 0d8e419c43838e6d85155feadee7e096e3990e0d Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29368) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18290: [FLINK-25391][connectors][formats] Update existing table factories/format factories for catalog table options forwarding

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18290:
URL: https://github.com/apache/flink/pull/18290#issuecomment-1006716199


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "57b0ecb6e3ff211665124963a4e3f35a5cd8929b",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29059",
       "triggerID" : "57b0ecb6e3ff211665124963a4e3f35a5cd8929b",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 57b0ecb6e3ff211665124963a4e3f35a5cd8929b Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29059) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] twalthr closed pull request #18290: [FLINK-25391][connectors][formats] Update existing table factories/format factories for catalog table options forwarding

Posted by GitBox <gi...@apache.org>.
twalthr closed pull request #18290:
URL: https://github.com/apache/flink/pull/18290


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] slinkydeveloper commented on pull request #18290: [FLINK-25391][connectors][formats] Update existing table factories/format factories for catalog table options forwarding

Posted by GitBox <gi...@apache.org>.
slinkydeveloper commented on pull request #18290:
URL: https://github.com/apache/flink/pull/18290#issuecomment-1019968434


   Opened these 3 issues:
   
   * https://issues.apache.org/jira/browse/FLINK-25777
   * https://issues.apache.org/jira/browse/FLINK-25778
   * https://issues.apache.org/jira/browse/FLINK-25779


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18290: [FLINK-25391][connectors][formats] Update existing table factories/format factories for catalog table options forwarding

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18290:
URL: https://github.com/apache/flink/pull/18290#issuecomment-1006716199


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "57b0ecb6e3ff211665124963a4e3f35a5cd8929b",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29059",
       "triggerID" : "57b0ecb6e3ff211665124963a4e3f35a5cd8929b",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 57b0ecb6e3ff211665124963a4e3f35a5cd8929b Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29059) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18290: [FLINK-25391][connectors][formats] Update existing table factories/format factories for catalog table options forwarding

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18290:
URL: https://github.com/apache/flink/pull/18290#issuecomment-1006716199


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "57b0ecb6e3ff211665124963a4e3f35a5cd8929b",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29059",
       "triggerID" : "57b0ecb6e3ff211665124963a4e3f35a5cd8929b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "f2603c0005990aa622277c635475d95ee2c049a7",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29346",
       "triggerID" : "f2603c0005990aa622277c635475d95ee2c049a7",
       "triggerType" : "PUSH"
     }, {
       "hash" : "8480af8298b5fa73b3a47dc97ebea031400e660a",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29347",
       "triggerID" : "8480af8298b5fa73b3a47dc97ebea031400e660a",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0d8e419c43838e6d85155feadee7e096e3990e0d",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29368",
       "triggerID" : "0d8e419c43838e6d85155feadee7e096e3990e0d",
       "triggerType" : "PUSH"
     }, {
       "hash" : "35ffc4d144540ac24d28499395079ff34d32f4eb",
       "status" : "SUCCESS",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29416",
       "triggerID" : "35ffc4d144540ac24d28499395079ff34d32f4eb",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 35ffc4d144540ac24d28499395079ff34d32f4eb Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29416) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18290: [FLINK-25391][connectors][formats] Update existing table factories/format factories for catalog table options forwarding

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18290:
URL: https://github.com/apache/flink/pull/18290#issuecomment-1006716199


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "57b0ecb6e3ff211665124963a4e3f35a5cd8929b",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29059",
       "triggerID" : "57b0ecb6e3ff211665124963a4e3f35a5cd8929b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "f2603c0005990aa622277c635475d95ee2c049a7",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29346",
       "triggerID" : "f2603c0005990aa622277c635475d95ee2c049a7",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 57b0ecb6e3ff211665124963a4e3f35a5cd8929b Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29059) 
   * f2603c0005990aa622277c635475d95ee2c049a7 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29346) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] JingGe commented on a change in pull request #18290: [FLINK-25391][connectors][formats] Update existing table factories/format factories for catalog table options forwarding

Posted by GitBox <gi...@apache.org>.
JingGe commented on a change in pull request #18290:
URL: https://github.com/apache/flink/pull/18290#discussion_r787635789



##########
File path: flink-connectors/flink-connector-hbase-2.2/src/main/java/org/apache/flink/connector/hbase2/HBase2DynamicTableFactory.java
##########
@@ -136,4 +133,20 @@ public String factoryIdentifier() {
         set.add(LOOKUP_MAX_RETRIES);
         return set;
     }
+
+    @Override
+    public Set<ConfigOption<?>> forwardOptions() {

Review comment:
       Thanks for the info. In this case, this PR will break things, and it looks like fixing it will take a long time. I would suggest letting @chesnay know about this. 
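
    The diff above truncates the body of the override. A hedged reconstruction of its likely shape, assuming it simply collects the factory's forwardable options (the exact constants are an assumption, not necessarily the merged HBase code):

    // Hedged reconstruction of the truncated method above. Assumes
    // java.util.stream.{Stream, Collectors} plus static imports of the
    // HBase connector option constants; the actual merged set may differ.
    @Override
    public Set<ConfigOption<?>> forwardOptions() {
        return Stream.of(
                        TABLE_NAME,
                        ZOOKEEPER_QUORUM,
                        ZOOKEEPER_ZNODE_PARENT,
                        SINK_BUFFER_FLUSH_MAX_SIZE,
                        SINK_BUFFER_FLUSH_MAX_ROWS,
                        SINK_BUFFER_FLUSH_INTERVAL)
                .collect(Collectors.toSet());
    }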
   




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18290: [FLINK-25391][connectors][formats] Update existing table factories/format factories for catalog table options forwarding

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18290:
URL: https://github.com/apache/flink/pull/18290#issuecomment-1006716199


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "57b0ecb6e3ff211665124963a4e3f35a5cd8929b",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29059",
       "triggerID" : "57b0ecb6e3ff211665124963a4e3f35a5cd8929b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "f2603c0005990aa622277c635475d95ee2c049a7",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29346",
       "triggerID" : "f2603c0005990aa622277c635475d95ee2c049a7",
       "triggerType" : "PUSH"
     }, {
       "hash" : "8480af8298b5fa73b3a47dc97ebea031400e660a",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29347",
       "triggerID" : "8480af8298b5fa73b3a47dc97ebea031400e660a",
       "triggerType" : "PUSH"
     }, {
       "hash" : "0d8e419c43838e6d85155feadee7e096e3990e0d",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29368",
       "triggerID" : "0d8e419c43838e6d85155feadee7e096e3990e0d",
       "triggerType" : "PUSH"
     }, {
       "hash" : "35ffc4d144540ac24d28499395079ff34d32f4eb",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29416",
       "triggerID" : "35ffc4d144540ac24d28499395079ff34d32f4eb",
       "triggerType" : "PUSH"
     }, {
       "hash" : "044804a54474295d18f075e5649e1f551171845a",
       "status" : "SUCCESS",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30020",
       "triggerID" : "044804a54474295d18f075e5649e1f551171845a",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 044804a54474295d18f075e5649e1f551171845a Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30020) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #18290: [FLINK-25391][connectors][formats] Update existing table factories/format factories for catalog table options forwarding

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #18290:
URL: https://github.com/apache/flink/pull/18290#issuecomment-1006716199


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "57b0ecb6e3ff211665124963a4e3f35a5cd8929b",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29059",
       "triggerID" : "57b0ecb6e3ff211665124963a4e3f35a5cd8929b",
       "triggerType" : "PUSH"
     }, {
       "hash" : "f2603c0005990aa622277c635475d95ee2c049a7",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "f2603c0005990aa622277c635475d95ee2c049a7",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 57b0ecb6e3ff211665124963a4e3f35a5cd8929b Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29059) 
   * f2603c0005990aa622277c635475d95ee2c049a7 UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] slinkydeveloper commented on a change in pull request #18290: [FLINK-25391][connectors][formats] Update existing table factories/format factories for catalog table options forwarding

Posted by GitBox <gi...@apache.org>.
slinkydeveloper commented on a change in pull request #18290:
URL: https://github.com/apache/flink/pull/18290#discussion_r787442155



##########
File path: flink-connectors/flink-connector-hbase-2.2/src/main/java/org/apache/flink/connector/hbase2/HBase2DynamicTableFactory.java
##########
@@ -136,4 +133,20 @@ public String factoryIdentifier() {
         set.add(LOOKUP_MAX_RETRIES);
         return set;
     }
+
+    @Override
+    public Set<ConfigOption<?>> forwardOptions() {

Review comment:
       I think we should do it in another PR, as it's not trivial: we also need to figure out how to include the generated docs in the docs engine.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org