Posted to issues@flink.apache.org by GitBox <gi...@apache.org> on 2020/05/28 14:21:44 UTC

[GitHub] [flink] wuchong opened a new pull request #12386: [FLINK-17995][docs][connectors] Redesign Table & SQL Connectors page and add HBase connector documentation

wuchong opened a new pull request #12386:
URL: https://github.com/apache/flink/pull/12386


   <!--
   *Thank you very much for contributing to Apache Flink - we are happy that you want to help us improve Flink. To help the community review your contribution in the best possible way, please go through the checklist below, which will get the contribution into a shape in which it can be best reviewed.*
   
   *Please understand that we do not do this to make contributions to Flink a hassle. In order to uphold a high standard of quality for code contributions, while at the same time managing a large number of contributions, we need contributors to prepare the contributions well, and give reviewers enough contextual information for the review. Please also understand that contributions that do not follow this guide will take longer to review and thus typically be picked up with lower priority by the community.*
   
   ## Contribution Checklist
   
     - Make sure that the pull request corresponds to a [JIRA issue](https://issues.apache.org/jira/projects/FLINK/issues). Exceptions are made for typos in JavaDoc or documentation files, which need no JIRA issue.
     
     - Name the pull request in the form "[FLINK-XXXX] [component] Title of the pull request", where *FLINK-XXXX* should be replaced by the actual issue number. Skip *component* if you are unsure about which is the best component.
      Typo fixes that have no associated JIRA issue should be named following this pattern: `[hotfix] [docs] Fix typo in event time introduction` or `[hotfix] [javadocs] Expand JavaDoc for PunctuatedWatermarkGenerator`.
   
     - Fill out the template below to describe the changes contributed by the pull request. That will give reviewers the context they need to do the review.
     
     - Make sure that the change passes the automated tests, i.e., `mvn clean verify` passes. You can set up Azure Pipelines CI to do that following [this guide](https://cwiki.apache.org/confluence/display/FLINK/Azure+Pipelines#AzurePipelines-Tutorial:SettingupAzurePipelinesforaforkoftheFlinkrepository).
   
     - Each pull request should address only one issue, not mix up code from multiple issues.
     
     - Each commit in the pull request has a meaningful commit message (including the JIRA id)
   
     - Once all items of the checklist are addressed, remove the above text and this checklist, leaving only the filled out template below.
   
   
   **(The sections below can be removed for hotfixes of typos)**
   -->
   
   ## What is the purpose of the change
   
Redesign the Table & SQL Connectors pages. Currently, a lot of the content in https://ci.apache.org/projects/flink/flink-docs-master/dev/table/connect.html#overview is outdated, and there are many friction points with the Descriptor API and YAML file. I propose to remove them from the new connector pages and encourage users to use DDL for now; we can add them back once the Descriptor API and YAML API are ready again.
   
This PR also adds the HBase connector documentation.
   
   ## Brief change log
   
- Rename the HBase connector option 'zookeeper.znode-parent' to 'zookeeper.znode.parent'.
 - 'zookeeper.znode.parent' is the configuration key HBase itself uses; to stay familiar to HBase users, it is better to expose the same key (see the DDL sketch after this list).
- Add a new Overview page for the Table & SQL connectors that does not introduce the Descriptor API or YAML API.
- Hide the old Table & SQL connector page from the sidebar, and add an Attention link to it in the new Overview page.
- Add the HBase connector documentation as an example; new connector documentation should include "Dependencies", "Example", "Options", and "Data Type Mapping" sections.
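
Below is a minimal sketch of a DDL using the renamed option, adapted from the example added in this PR's `hbase.md`; the table name and ZooKeeper address are placeholders, and the last option is the one affected by the rename.

```sql
CREATE TABLE hTable (
  rowkey INT,
  family1 ROW<q1 INT>,
  PRIMARY KEY (rowkey) NOT ENFORCED
) WITH (
  'connector' = 'hbase-1.4',
  'table-name' = 'mytable',               -- placeholder HBase table name
  'zookeeper.quorum' = 'localhost:2181',  -- placeholder ZooKeeper quorum
  'zookeeper.znode.parent' = '/hbase'     -- renamed from 'zookeeper.znode-parent'
)
```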
   
   Here are some screenshots.
   
   ![image](https://user-images.githubusercontent.com/5378924/83153366-74025200-a131-11ea-9c5e-026979f4e899.png)
   
   ![image](https://user-images.githubusercontent.com/5378924/83153427-841a3180-a131-11ea-89ae-77f0812ee730.png)
   
   
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   
   ## Does this pull request potentially affect one of the following parts:
   
     - Dependencies (does it add or upgrade a dependency): (yes / **no**)
     - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (yes / **no**)
     - The serializers: (yes / **no** / don't know)
     - The runtime per-record code paths (performance sensitive): (yes / **no** / don't know)
     - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
     - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
     - Does this pull request introduce a new feature? (yes / **no**)
     - If yes, how is the feature documented? (**not applicable** / docs / JavaDocs / not documented)
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] wuchong commented on pull request #12386: [FLINK-17995][docs][connectors] Redesign Table & SQL Connectors page and add HBase connector documentation

Posted by GitBox <gi...@apache.org>.
wuchong commented on pull request #12386:
URL: https://github.com/apache/flink/pull/12386#issuecomment-635380748


   cc @sjwiesman @twalthr 





[GitHub] [flink] wuchong commented on a change in pull request #12386: [FLINK-17995][docs][connectors] Redesign Table & SQL Connectors page and add HBase connector documentation

Posted by GitBox <gi...@apache.org>.
wuchong commented on a change in pull request #12386:
URL: https://github.com/apache/flink/pull/12386#discussion_r433601763



##########
File path: docs/dev/table/connectors/hbase.md
##########
@@ -0,0 +1,291 @@
+---
+title: "HBase SQL Connector"
+nav-title: HBase
+nav-parent_id: sql-connectors
+nav-pos: 9
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<span class="label label-primary">Scan Source: Bounded</span>
+<span class="label label-primary">Lookup Source: Sync Mode</span>
+<span class="label label-primary">Sink: Batch</span>
+<span class="label label-primary">Sink: Streaming Upsert Mode</span>
+
+* This will be replaced by the TOC
+{:toc}
+
+The HBase connector allows for reading from and writing to an HBase cluster. This document describes how to setup the HBase Connector to run SQL queries against HBase.
+
+The connector can operate in upsert mode for exchange changelog messages with the external system using a primary key defined on the DDL. But the primary key can only be defined on the HBase rowkey field. If the PRIMARY KEY clause is not declared, the HBase connector will take rowkey as the primary key by default.
+
+<span class="label label-danger">Attention</span> HBase as a Lookup Source does not use any caching; data is always queried directly through the HBase client.
+
+Dependencies
+------------
+
+In order to setup the HBase connector, the following table provide dependency information for both projects using a build automation tool (such as Maven or SBT) and SQL Client with SQL JAR bundles.
+
+{% if site.is_stable %}
+
+| HBase Version       | Maven dependency                                          | SQL Client JAR         |
+| :------------------ | :-------------------------------------------------------- | :----------------------|
+| 1.4.x               | `flink-connector-hbase{{site.scala_version_suffix}}`     | [Download](https://repo.maven.apache.org/maven2/org/apache/flink/flink-connector-hbase{{site.scala_version_suffix}}/{{site.version}}/flink-connector-hbase{{site.scala_version_suffix}}-{{site.version}}.jar) |
+
+{% else %}
+
+The dependency table is only available for stable releases.
+
+{% endif %}
+
+How to create an HBase table
+----------------

Review comment:
       Why not `an`? In my understanding, the `H` of `HBase` starts with a vowel sound.







[GitHub] [flink] wuchong commented on a change in pull request #12386: [FLINK-17995][docs][connectors] Redesign Table & SQL Connectors page and add HBase connector documentation

Posted by GitBox <gi...@apache.org>.
wuchong commented on a change in pull request #12386:
URL: https://github.com/apache/flink/pull/12386#discussion_r433606448



##########
File path: docs/dev/table/connectors/index.md
##########
@@ -0,0 +1,268 @@
+---
+title: "Table & SQL Connectors"
+nav-id: sql-connectors
+nav-parent_id: connectors-root
+nav-pos: 2
+nav-show_overview: true
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+Flink's Table API & SQL programs can be connected to other external systems for reading and writing both batch and streaming tables. A table source provides access to data which is stored in external systems (such as a database, key-value store, message queue, or file system). A table sink emits a table to an external storage system. Depending on the type of source and sink, they support different formats such as CSV, Avro, Parquet, or ORC.
+
+This page describes how to declare built-in table sources and/or table sinks and register them in Flink. After a source or sink has been registered, it can be accessed by Table API & SQL statements.
+
+<span class="label label-info">NOTE</span> If you want to implement your own *custom* table source or sink, have a look at the [user-defined sources & sinks page](sourceSinks.html).
+
+<span class="label label-danger">Attention</span> Flink Table & SQL introduces a new set of connector options since 1.11.0, if you are using the legacy connector options, please refer to the [legacy documentation]({{ site.baseurl }}/dev/table/connect.html).
+
+* This will be replaced by the TOC
+{:toc}
+
+Supported Connectors
+------------
+
+Flink natively support various connectors. The following tables list all available connectors.
+
+<table class="table table-bordered">
+    <thead>
+      <tr>
+        <th class="text-left">Name</th>
+        <th class="text-center">Version</th>
+        <th class="text-center">Source</th>
+        <th class="text-center">Sink</th>
+      </tr>
+    </thead>
+    <tbody>
+    <tr>
+      <td>Filesystem</td>
+      <td></td>
+      <td>Bounded and Unbounded Scan, Lookup</td>
+      <td>Streaming Sink, Batch Sink</td>
+    </tr>
+    <tr>
+      <td>Elasticsearch</td>
+      <td>6.x & 7.x</td>
+      <td>Not supported</td>
+      <td>Streaming Sink, Batch Sink</td>
+    </tr>
+    <tr>
+      <td>Apache Kafka</td>
+      <td>0.10+</td>
+      <td>Unbounded Scan</td>
+      <td>Streaming Sink, Batch Sink</td>
+    </tr>
+    <tr>
+      <td>JDBC</td>
+      <td></td>
+      <td>Bounded Scan, Lookup</td>
+      <td>Streaming Sink, Batch Sink</td>
+    </tr>
+    <tr>
+      <td><a href="{{ site.baseurl }}/dev/table/connectors/hbase.html">Apache HBase</a></td>
+      <td>1.4.x</td>
+      <td>Bounded Scan, Lookup</td>
+      <td>Streaming Sink, Batch Sink</td>
+    </tr>
+    </tbody>
+</table>
+
+{% top %}
+
+How to use connectors
+--------
+
+Flink supports to use SQL CREATE TABLE statement to register a table. One can define the name of the table, the schema of the table, the connector options for connecting to an external system.
+
+The following code shows a full example of how to connect to Kafka for reading Json records.
+
+<div class="codetabs" markdown="1">
+<div data-lang="SQL" markdown="1">
+{% highlight sql %}
+CREATE TABLE MyUserTable (
+  -- declare the schema of the table
+  `user` BIGINT,
+  message STRING,
+  ts TIMESTAMP,
+  proctime AS PROCTIME(), -- use computed column to define proctime attribute
+  WATERMARK FOR ts AS ts - INTERVAL '5' SECOND  -- use WATERMARK statement to define rowtime attribute
+) WITH (
+  -- declare the external system to connect to
+  'connector' = 'kafka',
+  'topic' = 'topic_name',
+  'scan.startup.mode' = 'earliest-offset',
+  'properties.bootstrap.servers' = 'localhost:9092',
+  'format' = 'json'   -- declare a format for this system
+)
+{% endhighlight %}
+</div>
+</div>
+
+In this ways the desired connection properties are converted into normalized, string-based key-value pairs. So-called [table factories](sourceSinks.html#define-a-tablefactory) create configured table sources, table sinks, and corresponding formats from the key-value pairs. All table factories that can be found via Java's [Service Provider Interfaces (SPI)](https://docs.oracle.com/javase/tutorial/sound/SPI-intro.html) are taken into account when searching for exactly-one matching table factory.
+
+If no factory can be found or multiple factories match for the given properties, an exception will be thrown with additional information about considered factories and supported properties.
+
+{% top %}
+
+Schema Mapping
+------------
+
+The body clause of a SQL `CREATE TABLE` statement defines the names and types of columns, and constraints, watermarks. Flink doesn't hold the data, thus the schema definition only declares how to map types from an external system to Flink’s representation. The mapping may not be mapped by names, it depends on the implementation of formats and connectors. For example, a MySQL database table is mapped by field names (not case sensitive), and a CSV filesystem is mapped by field order (field names can be arbitrary). This will be explanation in every connectors.
+

Review comment:
       Regarding `The mapping may not be mapped by name`: these words are mainly from the original [Table Schema section](https://ci.apache.org/projects/flink/flink-docs-master/dev/table/connect.html#table-schema). I like the word "mapping" because it indicates how the Flink SQL schema maps to the original data store.







[GitHub] [flink] danny0405 commented on a change in pull request #12386: [FLINK-17995][docs][connectors] Redesign Table & SQL Connectors page and add HBase connector documentation

Posted by GitBox <gi...@apache.org>.
danny0405 commented on a change in pull request #12386:
URL: https://github.com/apache/flink/pull/12386#discussion_r433589932



##########
File path: docs/dev/table/connectors/index.md
##########
@@ -0,0 +1,268 @@
+---
+title: "Table & SQL Connectors"
+nav-id: sql-connectors
+nav-parent_id: connectors-root
+nav-pos: 2
+nav-show_overview: true
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+Flink's Table API & SQL programs can be connected to other external systems for reading and writing both batch and streaming tables. A table source provides access to data which is stored in external systems (such as a database, key-value store, message queue, or file system). A table sink emits a table to an external storage system. Depending on the type of source and sink, they support different formats such as CSV, Avro, Parquet, or ORC.
+
+This page describes how to declare built-in table sources and/or table sinks and register them in Flink. After a source or sink has been registered, it can be accessed by Table API & SQL statements.
+
+<span class="label label-info">NOTE</span> If you want to implement your own *custom* table source or sink, have a look at the [user-defined sources & sinks page](sourceSinks.html).
+
+<span class="label label-danger">Attention</span> Flink Table & SQL introduces a new set of connector options since 1.11.0, if you are using the legacy connector options, please refer to the [legacy documentation]({{ site.baseurl }}/dev/table/connect.html).
+
+* This will be replaced by the TOC
+{:toc}
+
+Supported Connectors
+------------
+
+Flink natively support various connectors. The following tables list all available connectors.
+
+<table class="table table-bordered">
+    <thead>
+      <tr>
+        <th class="text-left">Name</th>
+        <th class="text-center">Version</th>
+        <th class="text-center">Source</th>
+        <th class="text-center">Sink</th>
+      </tr>
+    </thead>
+    <tbody>
+    <tr>
+      <td>Filesystem</td>
+      <td></td>
+      <td>Bounded and Unbounded Scan, Lookup</td>
+      <td>Streaming Sink, Batch Sink</td>
+    </tr>
+    <tr>
+      <td>Elasticsearch</td>
+      <td>6.x & 7.x</td>
+      <td>Not supported</td>
+      <td>Streaming Sink, Batch Sink</td>
+    </tr>
+    <tr>
+      <td>Apache Kafka</td>
+      <td>0.10+</td>
+      <td>Unbounded Scan</td>
+      <td>Streaming Sink, Batch Sink</td>
+    </tr>
+    <tr>
+      <td>JDBC</td>
+      <td></td>
+      <td>Bounded Scan, Lookup</td>
+      <td>Streaming Sink, Batch Sink</td>
+    </tr>
+    <tr>
+      <td><a href="{{ site.baseurl }}/dev/table/connectors/hbase.html">Apache HBase</a></td>
+      <td>1.4.x</td>
+      <td>Bounded Scan, Lookup</td>
+      <td>Streaming Sink, Batch Sink</td>
+    </tr>
+    </tbody>
+</table>
+
+{% top %}
+
+How to use connectors
+--------
+
+Flink supports to use SQL CREATE TABLE statement to register a table. One can define the name of the table, the schema of the table, the connector options for connecting to an external system.
+
+The following code shows a full example of how to connect to Kafka for reading Json records.
+
+<div class="codetabs" markdown="1">
+<div data-lang="SQL" markdown="1">
+{% highlight sql %}
+CREATE TABLE MyUserTable (
+  -- declare the schema of the table
+  `user` BIGINT,
+  message STRING,
+  ts TIMESTAMP,
+  proctime AS PROCTIME(), -- use computed column to define proctime attribute
+  WATERMARK FOR ts AS ts - INTERVAL '5' SECOND  -- use WATERMARK statement to define rowtime attribute
+) WITH (
+  -- declare the external system to connect to
+  'connector' = 'kafka',
+  'topic' = 'topic_name',
+  'scan.startup.mode' = 'earliest-offset',
+  'properties.bootstrap.servers' = 'localhost:9092',
+  'format' = 'json'   -- declare a format for this system
+)
+{% endhighlight %}
+</div>
+</div>
+
+In this ways the desired connection properties are converted into normalized, string-based key-value pairs. So-called [table factories](sourceSinks.html#define-a-tablefactory) create configured table sources, table sinks, and corresponding formats from the key-value pairs. All table factories that can be found via Java's [Service Provider Interfaces (SPI)](https://docs.oracle.com/javase/tutorial/sound/SPI-intro.html) are taken into account when searching for exactly-one matching table factory.
+
+If no factory can be found or multiple factories match for the given properties, an exception will be thrown with additional information about considered factories and supported properties.
+
+{% top %}
+
+Schema Mapping
+------------
+
+The body clause of a SQL `CREATE TABLE` statement defines the names and types of columns, and constraints, watermarks. Flink doesn't hold the data, thus the schema definition only declares how to map types from an external system to Flink’s representation. The mapping may not be mapped by names, it depends on the implementation of formats and connectors. For example, a MySQL database table is mapped by field names (not case sensitive), and a CSV filesystem is mapped by field order (field names can be arbitrary). This will be explanation in every connectors.
+
+The following example shows a simple schema without time attributes and one-to-one field mapping of input/output to table columns.
+
+<div class="codetabs" markdown="1">
+<div data-lang="SQL" markdown="1">
+{% highlight sql %}
+CREATE TABLE MyTable (
+  MyField1 INT,
+  MyField2 STRING,
+  MyField3 BOOLEAN
+) WITH (
+  ...
+)
+{% endhighlight %}
+</div>
+</div>
+
+### Primary Key
+
+Primary key constraints tell that a column or a set of columns of a table are unique and they do not contain null. Primary key therefore uniquely identify a row in a table.
+

Review comment:
       `contain null` -> `contain nulls`. Remove the "therefore": `Primary key uniquely identifies`

##########
File path: docs/dev/table/connectors/index.md
##########
@@ -0,0 +1,268 @@
+---
+title: "Table & SQL Connectors"
+nav-id: sql-connectors
+nav-parent_id: connectors-root
+nav-pos: 2
+nav-show_overview: true
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+Flink's Table API & SQL programs can be connected to other external systems for reading and writing both batch and streaming tables. A table source provides access to data which is stored in external systems (such as a database, key-value store, message queue, or file system). A table sink emits a table to an external storage system. Depending on the type of source and sink, they support different formats such as CSV, Avro, Parquet, or ORC.
+
+This page describes how to declare built-in table sources and/or table sinks and register them in Flink. After a source or sink has been registered, it can be accessed by Table API & SQL statements.
+
+<span class="label label-info">NOTE</span> If you want to implement your own *custom* table source or sink, have a look at the [user-defined sources & sinks page](sourceSinks.html).
+
+<span class="label label-danger">Attention</span> Flink Table & SQL introduces a new set of connector options since 1.11.0, if you are using the legacy connector options, please refer to the [legacy documentation]({{ site.baseurl }}/dev/table/connect.html).
+
+* This will be replaced by the TOC
+{:toc}
+
+Supported Connectors
+------------
+
+Flink natively support various connectors. The following tables list all available connectors.
+
+<table class="table table-bordered">
+    <thead>
+      <tr>
+        <th class="text-left">Name</th>
+        <th class="text-center">Version</th>
+        <th class="text-center">Source</th>
+        <th class="text-center">Sink</th>
+      </tr>
+    </thead>
+    <tbody>
+    <tr>
+      <td>Filesystem</td>
+      <td></td>
+      <td>Bounded and Unbounded Scan, Lookup</td>
+      <td>Streaming Sink, Batch Sink</td>
+    </tr>
+    <tr>
+      <td>Elasticsearch</td>
+      <td>6.x & 7.x</td>
+      <td>Not supported</td>
+      <td>Streaming Sink, Batch Sink</td>
+    </tr>
+    <tr>
+      <td>Apache Kafka</td>
+      <td>0.10+</td>
+      <td>Unbounded Scan</td>
+      <td>Streaming Sink, Batch Sink</td>
+    </tr>
+    <tr>
+      <td>JDBC</td>
+      <td></td>
+      <td>Bounded Scan, Lookup</td>
+      <td>Streaming Sink, Batch Sink</td>
+    </tr>
+    <tr>
+      <td><a href="{{ site.baseurl }}/dev/table/connectors/hbase.html">Apache HBase</a></td>
+      <td>1.4.x</td>
+      <td>Bounded Scan, Lookup</td>
+      <td>Streaming Sink, Batch Sink</td>
+    </tr>
+    </tbody>
+</table>
+
+{% top %}
+
+How to use connectors
+--------
+
+Flink supports to use SQL CREATE TABLE statement to register a table. One can define the name of the table, the schema of the table, the connector options for connecting to an external system.
+
+The following code shows a full example of how to connect to Kafka for reading Json records.
+
+<div class="codetabs" markdown="1">
+<div data-lang="SQL" markdown="1">
+{% highlight sql %}
+CREATE TABLE MyUserTable (
+  -- declare the schema of the table
+  `user` BIGINT,
+  message STRING,
+  ts TIMESTAMP,
+  proctime AS PROCTIME(), -- use computed column to define proctime attribute
+  WATERMARK FOR ts AS ts - INTERVAL '5' SECOND  -- use WATERMARK statement to define rowtime attribute
+) WITH (
+  -- declare the external system to connect to
+  'connector' = 'kafka',
+  'topic' = 'topic_name',
+  'scan.startup.mode' = 'earliest-offset',
+  'properties.bootstrap.servers' = 'localhost:9092',
+  'format' = 'json'   -- declare a format for this system
+)
+{% endhighlight %}
+</div>
+</div>
+
+In this ways the desired connection properties are converted into normalized, string-based key-value pairs. So-called [table factories](sourceSinks.html#define-a-tablefactory) create configured table sources, table sinks, and corresponding formats from the key-value pairs. All table factories that can be found via Java's [Service Provider Interfaces (SPI)](https://docs.oracle.com/javase/tutorial/sound/SPI-intro.html) are taken into account when searching for exactly-one matching table factory.
+
+If no factory can be found or multiple factories match for the given properties, an exception will be thrown with additional information about considered factories and supported properties.
+
+{% top %}
+
+Schema Mapping
+------------
+
+The body clause of a SQL `CREATE TABLE` statement defines the names and types of columns, and constraints, watermarks. Flink doesn't hold the data, thus the schema definition only declares how to map types from an external system to Flink’s representation. The mapping may not be mapped by names, it depends on the implementation of formats and connectors. For example, a MySQL database table is mapped by field names (not case sensitive), and a CSV filesystem is mapped by field order (field names can be arbitrary). This will be explanation in every connectors.
+

Review comment:
       `and constraints, watermarks` -> `constraints and watermarks`
   `The mapping may not be mapped by names` -> `The column names defined in the schema may or may not be the real physical table names`
   `be explanation` -> `be explained`

##########
File path: docs/dev/table/connectors/index.md
##########
@@ -0,0 +1,268 @@
+---
+title: "Table & SQL Connectors"
+nav-id: sql-connectors
+nav-parent_id: connectors-root
+nav-pos: 2
+nav-show_overview: true
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+Flink's Table API & SQL programs can be connected to other external systems for reading and writing both batch and streaming tables. A table source provides access to data which is stored in external systems (such as a database, key-value store, message queue, or file system). A table sink emits a table to an external storage system. Depending on the type of source and sink, they support different formats such as CSV, Avro, Parquet, or ORC.
+
+This page describes how to declare built-in table sources and/or table sinks and register them in Flink. After a source or sink has been registered, it can be accessed by Table API & SQL statements.
+
+<span class="label label-info">NOTE</span> If you want to implement your own *custom* table source or sink, have a look at the [user-defined sources & sinks page](sourceSinks.html).
+
+<span class="label label-danger">Attention</span> Flink Table & SQL introduces a new set of connector options since 1.11.0, if you are using the legacy connector options, please refer to the [legacy documentation]({{ site.baseurl }}/dev/table/connect.html).
+
+* This will be replaced by the TOC
+{:toc}
+
+Supported Connectors
+------------
+
+Flink natively support various connectors. The following tables list all available connectors.
+
+<table class="table table-bordered">
+    <thead>
+      <tr>
+        <th class="text-left">Name</th>
+        <th class="text-center">Version</th>
+        <th class="text-center">Source</th>
+        <th class="text-center">Sink</th>
+      </tr>
+    </thead>
+    <tbody>
+    <tr>
+      <td>Filesystem</td>
+      <td></td>
+      <td>Bounded and Unbounded Scan, Lookup</td>
+      <td>Streaming Sink, Batch Sink</td>
+    </tr>
+    <tr>
+      <td>Elasticsearch</td>
+      <td>6.x & 7.x</td>
+      <td>Not supported</td>
+      <td>Streaming Sink, Batch Sink</td>
+    </tr>
+    <tr>
+      <td>Apache Kafka</td>
+      <td>0.10+</td>
+      <td>Unbounded Scan</td>
+      <td>Streaming Sink, Batch Sink</td>
+    </tr>
+    <tr>
+      <td>JDBC</td>
+      <td></td>
+      <td>Bounded Scan, Lookup</td>
+      <td>Streaming Sink, Batch Sink</td>
+    </tr>
+    <tr>
+      <td><a href="{{ site.baseurl }}/dev/table/connectors/hbase.html">Apache HBase</a></td>
+      <td>1.4.x</td>
+      <td>Bounded Scan, Lookup</td>
+      <td>Streaming Sink, Batch Sink</td>
+    </tr>
+    </tbody>
+</table>
+
+{% top %}
+
+How to use connectors
+--------
+
+Flink supports to use SQL CREATE TABLE statement to register a table. One can define the name of the table, the schema of the table, the connector options for connecting to an external system.
+
+The following code shows a full example of how to connect to Kafka for reading Json records.
+
+<div class="codetabs" markdown="1">
+<div data-lang="SQL" markdown="1">
+{% highlight sql %}
+CREATE TABLE MyUserTable (
+  -- declare the schema of the table
+  `user` BIGINT,
+  message STRING,
+  ts TIMESTAMP,
+  proctime AS PROCTIME(), -- use computed column to define proctime attribute
+  WATERMARK FOR ts AS ts - INTERVAL '5' SECOND  -- use WATERMARK statement to define rowtime attribute
+) WITH (
+  -- declare the external system to connect to
+  'connector' = 'kafka',
+  'topic' = 'topic_name',
+  'scan.startup.mode' = 'earliest-offset',
+  'properties.bootstrap.servers' = 'localhost:9092',
+  'format' = 'json'   -- declare a format for this system
+)
+{% endhighlight %}
+</div>
+</div>
+
+In this ways the desired connection properties are converted into normalized, string-based key-value pairs. So-called [table factories](sourceSinks.html#define-a-tablefactory) create configured table sources, table sinks, and corresponding formats from the key-value pairs. All table factories that can be found via Java's [Service Provider Interfaces (SPI)](https://docs.oracle.com/javase/tutorial/sound/SPI-intro.html) are taken into account when searching for exactly-one matching table factory.
+
+If no factory can be found or multiple factories match for the given properties, an exception will be thrown with additional information about considered factories and supported properties.
+
+{% top %}
+
+Schema Mapping
+------------
+
+The body clause of a SQL `CREATE TABLE` statement defines the names and types of columns, and constraints, watermarks. Flink doesn't hold the data, thus the schema definition only declares how to map types from an external system to Flink’s representation. The mapping may not be mapped by names, it depends on the implementation of formats and connectors. For example, a MySQL database table is mapped by field names (not case sensitive), and a CSV filesystem is mapped by field order (field names can be arbitrary). This will be explanation in every connectors.
+
+The following example shows a simple schema without time attributes and one-to-one field mapping of input/output to table columns.
+
+<div class="codetabs" markdown="1">
+<div data-lang="SQL" markdown="1">
+{% highlight sql %}
+CREATE TABLE MyTable (
+  MyField1 INT,
+  MyField2 STRING,
+  MyField3 BOOLEAN
+) WITH (
+  ...
+)
+{% endhighlight %}
+</div>
+</div>
+
+### Primary Key
+
+Primary key constraints tell that a column or a set of columns of a table are unique and they do not contain null. Primary key therefore uniquely identify a row in a table.
+
+The primary key of a source table is a metadata information for optimization. The primary key of a sink table is usually used by the sink implementation for upserting.
+
+SQL standard specifies that a constraint can either be ENFORCED or NOT ENFORCED. This controls if the constraint checks are performed on the incoming/outgoing data. Flink does not own the data therefore the only mode we want to support is the NOT ENFORCED mode. Its up to the user to ensure that the query enforces key integrity.
+
+<div class="codetabs" markdown="1">
+<div data-lang="SQL" markdown="1">
+{% highlight sql %}
+CREATE TABLE MyTable (
+  MyField1 INT,
+  MyField2 STRING,
+  MyField3 BOOLEAN,
+  PRIMARY KEY (MyField1, MyField2) NOT ENFORCED  -- defines a primary key on columns
+) WITH (
+  ...
+)
+{% endhighlight %}
+</div>
+</div>
+
+### Time Attributes
+
+Time attributes are essential when working with unbounded streaming tables. Therefore both processing-time and event-time (also known as "rowtime") attributes can be defined as part of the schema.
+
+For more information about time handling in Flink and especially event-time, we recommend the general [event-time section](streaming/time_attributes.html).
+
+#### Proctime Attributes
+
+In order to declare a proctime attribute in the schema, you can use Computed Column syntax to declare a computed column which is generated from `PROCTIME()` builtin function.
+The computed column is a virtual column which is not stored in the physical data.

Review comment:
       We should unify the `processing-time` and the `proctime` terms.
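
For reference, the computed-column form this passage describes looks like the following sketch (placeholder column names, consistent with the Kafka example earlier on the page):

```sql
CREATE TABLE MyTable (
  MyField1 INT,
  -- declare a processing-time ("proctime") attribute as a virtual computed column
  proctime AS PROCTIME()
) WITH (
  ...
)
```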

##########
File path: docs/dev/table/connectors/index.md
##########
@@ -0,0 +1,268 @@
+---
+title: "Table & SQL Connectors"
+nav-id: sql-connectors
+nav-parent_id: connectors-root
+nav-pos: 2
+nav-show_overview: true
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+Flink's Table API & SQL programs can be connected to other external systems for reading and writing both batch and streaming tables. A table source provides access to data which is stored in external systems (such as a database, key-value store, message queue, or file system). A table sink emits a table to an external storage system. Depending on the type of source and sink, they support different formats such as CSV, Avro, Parquet, or ORC.
+
+This page describes how to declare built-in table sources and/or table sinks and register them in Flink. After a source or sink has been registered, it can be accessed by Table API & SQL statements.
+

Review comment:
       `and/or` -> `and`, `and register them in Flink` -> `in Flink`.

##########
File path: docs/dev/table/connectors/index.md
##########
@@ -0,0 +1,268 @@
+---
+title: "Table & SQL Connectors"
+nav-id: sql-connectors
+nav-parent_id: connectors-root
+nav-pos: 2
+nav-show_overview: true
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+Flink's Table API & SQL programs can be connected to other external systems for reading and writing both batch and streaming tables. A table source provides access to data which is stored in external systems (such as a database, key-value store, message queue, or file system). A table sink emits a table to an external storage system. Depending on the type of source and sink, they support different formats such as CSV, Avro, Parquet, or ORC.
+
+This page describes how to declare built-in table sources and/or table sinks and register them in Flink. After a source or sink has been registered, it can be accessed by Table API & SQL statements.
+
+<span class="label label-info">NOTE</span> If you want to implement your own *custom* table source or sink, have a look at the [user-defined sources & sinks page](sourceSinks.html).
+
+<span class="label label-danger">Attention</span> Flink Table & SQL introduces a new set of connector options since 1.11.0, if you are using the legacy connector options, please refer to the [legacy documentation]({{ site.baseurl }}/dev/table/connect.html).
+
+* This will be replaced by the TOC
+{:toc}
+
+Supported Connectors
+------------
+
+Flink natively support various connectors. The following tables list all available connectors.
+
+<table class="table table-bordered">
+    <thead>
+      <tr>
+        <th class="text-left">Name</th>
+        <th class="text-center">Version</th>
+        <th class="text-center">Source</th>
+        <th class="text-center">Sink</th>
+      </tr>
+    </thead>
+    <tbody>
+    <tr>
+      <td>Filesystem</td>
+      <td></td>
+      <td>Bounded and Unbounded Scan, Lookup</td>
+      <td>Streaming Sink, Batch Sink</td>
+    </tr>
+    <tr>
+      <td>Elasticsearch</td>
+      <td>6.x & 7.x</td>
+      <td>Not supported</td>
+      <td>Streaming Sink, Batch Sink</td>
+    </tr>
+    <tr>
+      <td>Apache Kafka</td>
+      <td>0.10+</td>
+      <td>Unbounded Scan</td>
+      <td>Streaming Sink, Batch Sink</td>
+    </tr>
+    <tr>
+      <td>JDBC</td>
+      <td></td>
+      <td>Bounded Scan, Lookup</td>
+      <td>Streaming Sink, Batch Sink</td>
+    </tr>
+    <tr>
+      <td><a href="{{ site.baseurl }}/dev/table/connectors/hbase.html">Apache HBase</a></td>
+      <td>1.4.x</td>
+      <td>Bounded Scan, Lookup</td>
+      <td>Streaming Sink, Batch Sink</td>
+    </tr>
+    </tbody>
+</table>
+
+{% top %}
+
+How to use connectors
+--------
+
+Flink supports to use SQL CREATE TABLE statement to register a table. One can define the name of the table, the schema of the table, the connector options for connecting to an external system.
+
+The following code shows a full example of how to connect to Kafka for reading Json records.
+
+<div class="codetabs" markdown="1">
+<div data-lang="SQL" markdown="1">
+{% highlight sql %}
+CREATE TABLE MyUserTable (
+  -- declare the schema of the table
+  `user` BIGINT,
+  message STRING,
+  ts TIMESTAMP,
+  proctime AS PROCTIME(), -- use computed column to define proctime attribute
+  WATERMARK FOR ts AS ts - INTERVAL '5' SECOND  -- use WATERMARK statement to define rowtime attribute
+) WITH (
+  -- declare the external system to connect to
+  'connector' = 'kafka',
+  'topic' = 'topic_name',
+  'scan.startup.mode' = 'earliest-offset',
+  'properties.bootstrap.servers' = 'localhost:9092',
+  'format' = 'json'   -- declare a format for this system
+)
+{% endhighlight %}
+</div>
+</div>
+
+In this ways the desired connection properties are converted into normalized, string-based key-value pairs. So-called [table factories](sourceSinks.html#define-a-tablefactory) create configured table sources, table sinks, and corresponding formats from the key-value pairs. All table factories that can be found via Java's [Service Provider Interfaces (SPI)](https://docs.oracle.com/javase/tutorial/sound/SPI-intro.html) are taken into account when searching for exactly-one matching table factory.
+

Review comment:
       What does `normalized` mean?







[GitHub] [flink] wuchong closed pull request #12386: [FLINK-17995][docs][connectors] Redesign Table & SQL Connectors page and add HBase connector documentation

Posted by GitBox <gi...@apache.org>.
wuchong closed pull request #12386:
URL: https://github.com/apache/flink/pull/12386


   





[GitHub] [flink] danny0405 commented on a change in pull request #12386: [FLINK-17995][docs][connectors] Redesign Table & SQL Connectors page and add HBase connector documentation

Posted by GitBox <gi...@apache.org>.
danny0405 commented on a change in pull request #12386:
URL: https://github.com/apache/flink/pull/12386#discussion_r433575384



##########
File path: docs/dev/table/connectors/hbase.md
##########
@@ -0,0 +1,291 @@
+---
+title: "HBase SQL Connector"
+nav-title: HBase
+nav-parent_id: sql-connectors
+nav-pos: 9
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<span class="label label-primary">Scan Source: Bounded</span>
+<span class="label label-primary">Lookup Source: Sync Mode</span>
+<span class="label label-primary">Sink: Batch</span>
+<span class="label label-primary">Sink: Streaming Upsert Mode</span>
+
+* This will be replaced by the TOC
+{:toc}
+
+The HBase connector allows for reading from and writing to an HBase cluster. This document describes how to setup the HBase Connector to run SQL queries against HBase.
+
+The connector can operate in upsert mode for exchange changelog messages with the external system using a primary key defined on the DDL. But the primary key can only be defined on the HBase rowkey field. If the PRIMARY KEY clause is not declared, the HBase connector will take rowkey as the primary key by default.
+

Review comment:
       Is `upsert mode` deprecated?

##########
File path: docs/dev/table/connectors/hbase.md
##########
@@ -0,0 +1,291 @@
+---
+title: "HBase SQL Connector"
+nav-title: HBase
+nav-parent_id: sql-connectors
+nav-pos: 9
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<span class="label label-primary">Scan Source: Bounded</span>
+<span class="label label-primary">Lookup Source: Sync Mode</span>
+<span class="label label-primary">Sink: Batch</span>
+<span class="label label-primary">Sink: Streaming Upsert Mode</span>
+
+* This will be replaced by the TOC
+{:toc}
+
+The HBase connector allows for reading from and writing to an HBase cluster. This document describes how to setup the HBase Connector to run SQL queries against HBase.
+
+The connector can operate in upsert mode for exchange changelog messages with the external system using a primary key defined on the DDL. But the primary key can only be defined on the HBase rowkey field. If the PRIMARY KEY clause is not declared, the HBase connector will take rowkey as the primary key by default.
+
+<span class="label label-danger">Attention</span> HBase as a Lookup Source does not use any caching; data is always queried directly through the HBase client.
+
+Dependencies
+------------
+
+In order to setup the HBase connector, the following table provide dependency information for both projects using a build automation tool (such as Maven or SBT) and SQL Client with SQL JAR bundles.
+
+{% if site.is_stable %}
+
+| HBase Version       | Maven dependency                                          | SQL Client JAR         |
+| :------------------ | :-------------------------------------------------------- | :----------------------|
+| 1.4.x               | `flink-connector-hbase{{site.scala_version_suffix}}`     | [Download](https://repo.maven.apache.org/maven2/org/apache/flink/flink-connector-hbase{{site.scala_version_suffix}}/{{site.version}}/flink-connector-hbase{{site.scala_version_suffix}}-{{site.version}}.jar) |
+
+{% else %}
+
+The dependency table is only available for stable releases.
+
+{% endif %}
+
+How to create an HBase table
+----------------
+
+All the column families in HBase table must be declared as ROW type, the field name maps to the column family name, and the nested field names map to the column qualifier names. There is no need to declare all the families and qualifiers in the schema, users can declare what’s necessary. Except the ROW type fields, the only one field of atomic type (e.g. STRING, BIGINT) will be recognized as HBase rowkey. The rowkey field can be arbitrary name.
+
+<div class="codetabs" markdown="1">
+<div data-lang="SQL" markdown="1">
+{% highlight sql %}
+CREATE TABLE hTable (
+ rowkey INT,
+ family1 ROW<q1 INT>,
+ family2 ROW<q2 STRING, q3 BIGINT>,
+ family3 ROW<q4 DOUBLE, q5 BOOLEAN, q6 STRING>,
+ PRIMARY KEY (rowkey) NOT ENFORCED
+) WITH (
+ 'connector' = 'hbase-1.4',
+ 'table-name' = 'mytable',
+ 'zookeeper.quorum' = 'localhost:2121'
+)
+{% endhighlight %}
+</div>
+</div>
+
+Connector Options
+----------------
+
+<table class="table table-bordered">
+    <thead>
+      <tr>
+        <th class="text-left" style="width: 25%">Option</th>
+        <th class="text-center" style="width: 8%">Required</th>
+        <th class="text-center" style="width: 7%">Default</th>
+        <th class="text-center" style="width: 10%">Type</th>
+        <th class="text-center" style="width: 50%">Description</th>
+      </tr>
+    </thead>
+    <tbody>
+    <tr>
+      <td><h5>connector</h5></td>
+      <td>required</td>
+      <td style="word-wrap: break-word;">(none)</td>
+      <td>String</td>
+      <td>Specify what connector to use, here should be 'hbase-1.4'.</td>
+    </tr>
+    <tr>
+      <td><h5>table-name</h5></td>
+      <td>required</td>
+      <td style="word-wrap: break-word;">(none)</td>
+      <td>String</td>
+      <td>The name of HBase table to connect.</td>
+    </tr>
+    <tr>
+      <td><h5>zookeeper.quorum</h5></td>
+      <td>required</td>
+      <td style="word-wrap: break-word;">(none)</td>
+      <td>String</td>
+      <td>The HBase Zookeeper quorum.</td>
+    </tr>
+    <tr>
+      <td><h5>zookeeper.znode.parent</h5></td>
+      <td>optional</td>
+      <td style="word-wrap: break-word;">/hbase</td>
+      <td>String</td>
+      <td>The root dir in Zookeeper for HBase cluster</td>
+    </tr>
+    <tr>
+      <td><h5>null-string-literal</h5></td>
+      <td>optional</td>
+      <td style="word-wrap: break-word;">null</td>
+      <td>String</td>
+      <td>Representation for null values for string fields. HBase source and sink encodes/decodes empty bytes as null values for all types except string type.</td>
+    </tr>
+    <tr>
+      <td><h5>sink.buffer-flush.max-size</h5></td>
+      <td>optional</td>
+      <td style="word-wrap: break-word;">2mb</td>
+      <td>MemorySize</td>
+      <td>Writing option, determines how many size in memory of buffered rows to insert per round trip.
+      This can improve performance for writing data to HBase database, but may increase the latency.

Review comment:
       `determines how many size in memory of buffered rows` -> `determines  the insert rows buffered memory size`
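
For context, this option is set in the HBase table's `WITH` clause; a hedged sketch (documented default value, placeholders for the other options, remaining options elided):

```sql
CREATE TABLE hTable (
  rowkey INT,
  family1 ROW<q1 INT>,
  PRIMARY KEY (rowkey) NOT ENFORCED
) WITH (
  'connector' = 'hbase-1.4',
  'table-name' = 'mytable',               -- placeholder
  'zookeeper.quorum' = 'localhost:2181',  -- placeholder
  'sink.buffer-flush.max-size' = '2mb'    -- buffer up to this much data before flushing writes to HBase
)
```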

##########
File path: docs/dev/table/connectors/hbase.md
##########
@@ -0,0 +1,291 @@
+---
+title: "HBase SQL Connector"
+nav-title: HBase
+nav-parent_id: sql-connectors
+nav-pos: 9
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<span class="label label-primary">Scan Source: Bounded</span>
+<span class="label label-primary">Lookup Source: Sync Mode</span>
+<span class="label label-primary">Sink: Batch</span>
+<span class="label label-primary">Sink: Streaming Upsert Mode</span>
+
+* This will be replaced by the TOC
+{:toc}
+
+The HBase connector allows for reading from and writing to an HBase cluster. This document describes how to setup the HBase Connector to run SQL queries against HBase.
+
+The connector can operate in upsert mode for exchange changelog messages with the external system using a primary key defined on the DDL. But the primary key can only be defined on the HBase rowkey field. If the PRIMARY KEY clause is not declared, the HBase connector will take rowkey as the primary key by default.
+
+<span class="label label-danger">Attention</span> HBase as a Lookup Source does not use any caching; data is always queried directly through the HBase client.
+
+Dependencies
+------------
+
+In order to setup the HBase connector, the following table provide dependency information for both projects using a build automation tool (such as Maven or SBT) and SQL Client with SQL JAR bundles.
+
+{% if site.is_stable %}
+
+| HBase Version       | Maven dependency                                          | SQL Client JAR         |
+| :------------------ | :-------------------------------------------------------- | :----------------------|
+| 1.4.x               | `flink-connector-hbase{{site.scala_version_suffix}}`     | [Download](https://repo.maven.apache.org/maven2/org/apache/flink/flink-connector-hbase{{site.scala_version_suffix}}/{{site.version}}/flink-connector-hbase{{site.scala_version_suffix}}-{{site.version}}.jar) |
+
+{% else %}
+
+The dependency table is only available for stable releases.
+
+{% endif %}
+
+How to create an HBase table
+----------------
+
+All the column families in HBase table must be declared as ROW type, the field name maps to the column family name, and the nested field names map to the column qualifier names. There is no need to declare all the families and qualifiers in the schema, users can declare what’s necessary. Except the ROW type fields, the only one field of atomic type (e.g. STRING, BIGINT) will be recognized as HBase rowkey. The rowkey field can be arbitrary name.
+

Review comment:
       `what’s necessary` -> `what’s used in the query`

##########
File path: docs/dev/table/connectors/hbase.md
##########
@@ -0,0 +1,291 @@
+---
+title: "HBase SQL Connector"
+nav-title: HBase
+nav-parent_id: sql-connectors
+nav-pos: 9
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<span class="label label-primary">Scan Source: Bounded</span>
+<span class="label label-primary">Lookup Source: Sync Mode</span>
+<span class="label label-primary">Sink: Batch</span>
+<span class="label label-primary">Sink: Streaming Upsert Mode</span>
+
+* This will be replaced by the TOC
+{:toc}
+
+The HBase connector allows for reading from and writing to an HBase cluster. This document describes how to setup the HBase Connector to run SQL queries against HBase.
+
+The connector can operate in upsert mode for exchange changelog messages with the external system using a primary key defined on the DDL. But the primary key can only be defined on the HBase rowkey field. If the PRIMARY KEY clause is not declared, the HBase connector will take rowkey as the primary key by default.
+
+<span class="label label-danger">Attention</span> HBase as a Lookup Source does not use any caching; data is always queried directly through the HBase client.
+
+Dependencies
+------------
+
+In order to setup the HBase connector, the following table provide dependency information for both projects using a build automation tool (such as Maven or SBT) and SQL Client with SQL JAR bundles.
+
+{% if site.is_stable %}
+
+| HBase Version       | Maven dependency                                          | SQL Client JAR         |
+| :------------------ | :-------------------------------------------------------- | :----------------------|
+| 1.4.x               | `flink-connector-hbase{{site.scala_version_suffix}}`     | [Download](https://repo.maven.apache.org/maven2/org/apache/flink/flink-connector-hbase{{site.scala_version_suffix}}/{{site.version}}/flink-connector-hbase{{site.scala_version_suffix}}-{{site.version}}.jar) |
+
+{% else %}
+
+The dependency table is only available for stable releases.
+
+{% endif %}
+
+How to create an HBase table
+----------------
+
+All the column families in HBase table must be declared as ROW type, the field name maps to the column family name, and the nested field names map to the column qualifier names. There is no need to declare all the families and qualifiers in the schema, users can declare what’s necessary. Except the ROW type fields, the only one field of atomic type (e.g. STRING, BIGINT) will be recognized as HBase rowkey. The rowkey field can be arbitrary name.
+

Review comment:
       `the only one` -> `the single atomic type field`.
   `can be arbitrary name` -> `can be an arbitrary non-reserved-keyword name`
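
A small sketch of the suggested rule (hypothetical table and column names): whichever single atomic-type column is declared is taken as the rowkey, regardless of what it is called, as long as the name is not a reserved keyword.

{% highlight sql %}
CREATE TABLE hUsers (
 user_id STRING,                   -- the single atomic-type field, recognized as the HBase rowkey
 info ROW<city STRING, age INT>,   -- column family 'info' with qualifiers 'city' and 'age'
 PRIMARY KEY (user_id) NOT ENFORCED
) WITH (
 'connector' = 'hbase-1.4',
 'table-name' = 'users',
 'zookeeper.quorum' = 'localhost:2121'
)
{% endhighlight %}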

##########
File path: docs/dev/table/connectors/hbase.md
##########
@@ -0,0 +1,291 @@
+---
+title: "HBase SQL Connector"
+nav-title: HBase
+nav-parent_id: sql-connectors
+nav-pos: 9
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<span class="label label-primary">Scan Source: Bounded</span>
+<span class="label label-primary">Lookup Source: Sync Mode</span>
+<span class="label label-primary">Sink: Batch</span>
+<span class="label label-primary">Sink: Streaming Upsert Mode</span>
+
+* This will be replaced by the TOC
+{:toc}
+
+The HBase connector allows for reading from and writing to an HBase cluster. This document describes how to setup the HBase Connector to run SQL queries against HBase.
+
+The connector can operate in upsert mode for exchange changelog messages with the external system using a primary key defined on the DDL. But the primary key can only be defined on the HBase rowkey field. If the PRIMARY KEY clause is not declared, the HBase connector will take rowkey as the primary key by default.
+
+<span class="label label-danger">Attention</span> HBase as a Lookup Source does not use any caching; data is always queried directly through the HBase client.
+
+Dependencies
+------------
+
+In order to setup the HBase connector, the following table provide dependency information for both projects using a build automation tool (such as Maven or SBT) and SQL Client with SQL JAR bundles.
+
+{% if site.is_stable %}
+
+| HBase Version       | Maven dependency                                          | SQL Client JAR         |
+| :------------------ | :-------------------------------------------------------- | :----------------------|
+| 1.4.x               | `flink-connector-hbase{{site.scala_version_suffix}}`     | [Download](https://repo.maven.apache.org/maven2/org/apache/flink/flink-connector-hbase{{site.scala_version_suffix}}/{{site.version}}/flink-connector-hbase{{site.scala_version_suffix}}-{{site.version}}.jar) |
+
+{% else %}
+
+The dependency table is only available for stable releases.
+
+{% endif %}
+
+How to create an HBase table
+----------------
+
+All the column families in HBase table must be declared as ROW type, the field name maps to the column family name, and the nested field names map to the column qualifier names. There is no need to declare all the families and qualifiers in the schema, users can declare what’s necessary. Except the ROW type fields, the only one field of atomic type (e.g. STRING, BIGINT) will be recognized as HBase rowkey. The rowkey field can be arbitrary name.
+
+<div class="codetabs" markdown="1">
+<div data-lang="SQL" markdown="1">
+{% highlight sql %}
+CREATE TABLE hTable (
+ rowkey INT,
+ family1 ROW<q1 INT>,
+ family2 ROW<q2 STRING, q3 BIGINT>,
+ family3 ROW<q4 DOUBLE, q5 BOOLEAN, q6 STRING>,
+ PRIMARY KEY (rowkey) NOT ENFORCED
+) WITH (
+ 'connector' = 'hbase-1.4',
+ 'table-name' = 'mytable',
+ 'zookeeper.quorum' = 'localhost:2121'
+)
+{% endhighlight %}
+</div>
+</div>
+
+Connector Options
+----------------
+
+<table class="table table-bordered">
+    <thead>
+      <tr>
+        <th class="text-left" style="width: 25%">Option</th>
+        <th class="text-center" style="width: 8%">Required</th>
+        <th class="text-center" style="width: 7%">Default</th>
+        <th class="text-center" style="width: 10%">Type</th>
+        <th class="text-center" style="width: 50%">Description</th>
+      </tr>
+    </thead>
+    <tbody>
+    <tr>
+      <td><h5>connector</h5></td>
+      <td>required</td>
+      <td style="word-wrap: break-word;">(none)</td>
+      <td>String</td>
+      <td>Specify what connector to use, here should be 'hbase-1.4'.</td>
+    </tr>
+    <tr>
+      <td><h5>table-name</h5></td>
+      <td>required</td>
+      <td style="word-wrap: break-word;">(none)</td>
+      <td>String</td>
+      <td>The name of HBase table to connect.</td>
+    </tr>
+    <tr>
+      <td><h5>zookeeper.quorum</h5></td>
+      <td>required</td>
+      <td style="word-wrap: break-word;">(none)</td>
+      <td>String</td>
+      <td>The HBase Zookeeper quorum.</td>
+    </tr>
+    <tr>
+      <td><h5>zookeeper.znode.parent</h5></td>
+      <td>optional</td>
+      <td style="word-wrap: break-word;">/hbase</td>
+      <td>String</td>
+      <td>The root dir in Zookeeper for HBase cluster</td>
+    </tr>
+    <tr>
+      <td><h5>null-string-literal</h5></td>
+      <td>optional</td>
+      <td style="word-wrap: break-word;">null</td>
+      <td>String</td>
+      <td>Representation for null values for string fields. HBase source and sink encodes/decodes empty bytes as null values for all types except string type.</td>
+    </tr>
+    <tr>
+      <td><h5>sink.buffer-flush.max-size</h5></td>
+      <td>optional</td>
+      <td style="word-wrap: break-word;">2mb</td>
+      <td>MemorySize</td>
+      <td>Writing option, determines how many size in memory of buffered rows to insert per round trip.
+      This can improve performance for writing data to HBase database, but may increase the latency.
+      </td>
+    </tr>
+    <tr>
+      <td><h5>sink.buffer-flush.max-rows</h5></td>
+      <td>optional</td>
+      <td style="word-wrap: break-word;">(none)</td>
+      <td>Integer</td>
+      <td>Writing option, determines how many number of rows to insert per round trip.
+      This can improve performance for writing data to HBase database, but may increase the latency.

Review comment:
       determines the number of rows
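
A hedged example of setting these write-buffer options together (the values are purely illustrative, not recommendations):

{% highlight sql %}
CREATE TABLE hTableSink (
 rowkey INT,
 family1 ROW<q1 INT>,
 PRIMARY KEY (rowkey) NOT ENFORCED
) WITH (
 'connector' = 'hbase-1.4',
 'table-name' = 'mytable',
 'zookeeper.quorum' = 'localhost:2121',
 'sink.buffer-flush.max-size' = '4mb',   -- flush once roughly 4 MB of mutations are buffered
 'sink.buffer-flush.max-rows' = '1000',  -- ... or once 1000 rows are buffered
 'sink.buffer-flush.interval' = '2s'     -- ... or at the latest every 2 seconds
)
{% endhighlight %}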

##########
File path: docs/dev/table/connectors/hbase.md
##########
@@ -0,0 +1,291 @@
+---
+title: "HBase SQL Connector"
+nav-title: HBase
+nav-parent_id: sql-connectors
+nav-pos: 9
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<span class="label label-primary">Scan Source: Bounded</span>
+<span class="label label-primary">Lookup Source: Sync Mode</span>
+<span class="label label-primary">Sink: Batch</span>
+<span class="label label-primary">Sink: Streaming Upsert Mode</span>
+
+* This will be replaced by the TOC
+{:toc}
+
+The HBase connector allows for reading from and writing to an HBase cluster. This document describes how to setup the HBase Connector to run SQL queries against HBase.
+
+The connector can operate in upsert mode for exchange changelog messages with the external system using a primary key defined on the DDL. But the primary key can only be defined on the HBase rowkey field. If the PRIMARY KEY clause is not declared, the HBase connector will take rowkey as the primary key by default.
+
+<span class="label label-danger">Attention</span> HBase as a Lookup Source does not use any caching; data is always queried directly through the HBase client.
+
+Dependencies
+------------
+
+In order to setup the HBase connector, the following table provide dependency information for both projects using a build automation tool (such as Maven or SBT) and SQL Client with SQL JAR bundles.
+
+{% if site.is_stable %}
+
+| HBase Version       | Maven dependency                                          | SQL Client JAR         |
+| :------------------ | :-------------------------------------------------------- | :----------------------|
+| 1.4.x               | `flink-connector-hbase{{site.scala_version_suffix}}`     | [Download](https://repo.maven.apache.org/maven2/org/apache/flink/flink-connector-hbase{{site.scala_version_suffix}}/{{site.version}}/flink-connector-hbase{{site.scala_version_suffix}}-{{site.version}}.jar) |
+
+{% else %}
+
+The dependency table is only available for stable releases.
+
+{% endif %}
+
+How to create an HBase table
+----------------
+
+All the column families in HBase table must be declared as ROW type, the field name maps to the column family name, and the nested field names map to the column qualifier names. There is no need to declare all the families and qualifiers in the schema, users can declare what’s necessary. Except the ROW type fields, the only one field of atomic type (e.g. STRING, BIGINT) will be recognized as HBase rowkey. The rowkey field can be arbitrary name.
+
+<div class="codetabs" markdown="1">
+<div data-lang="SQL" markdown="1">
+{% highlight sql %}
+CREATE TABLE hTable (
+ rowkey INT,
+ family1 ROW<q1 INT>,
+ family2 ROW<q2 STRING, q3 BIGINT>,
+ family3 ROW<q4 DOUBLE, q5 BOOLEAN, q6 STRING>,
+ PRIMARY KEY (rowkey) NOT ENFORCED
+) WITH (
+ 'connector' = 'hbase-1.4',
+ 'table-name' = 'mytable',
+ 'zookeeper.quorum' = 'localhost:2121'
+)
+{% endhighlight %}
+</div>
+</div>
+
+Connector Options
+----------------
+
+<table class="table table-bordered">
+    <thead>
+      <tr>
+        <th class="text-left" style="width: 25%">Option</th>
+        <th class="text-center" style="width: 8%">Required</th>
+        <th class="text-center" style="width: 7%">Default</th>
+        <th class="text-center" style="width: 10%">Type</th>
+        <th class="text-center" style="width: 50%">Description</th>
+      </tr>
+    </thead>
+    <tbody>
+    <tr>
+      <td><h5>connector</h5></td>
+      <td>required</td>
+      <td style="word-wrap: break-word;">(none)</td>
+      <td>String</td>
+      <td>Specify what connector to use, here should be 'hbase-1.4'.</td>
+    </tr>
+    <tr>
+      <td><h5>table-name</h5></td>
+      <td>required</td>
+      <td style="word-wrap: break-word;">(none)</td>
+      <td>String</td>
+      <td>The name of HBase table to connect.</td>
+    </tr>
+    <tr>
+      <td><h5>zookeeper.quorum</h5></td>
+      <td>required</td>
+      <td style="word-wrap: break-word;">(none)</td>
+      <td>String</td>
+      <td>The HBase Zookeeper quorum.</td>
+    </tr>
+    <tr>
+      <td><h5>zookeeper.znode.parent</h5></td>
+      <td>optional</td>
+      <td style="word-wrap: break-word;">/hbase</td>
+      <td>String</td>
+      <td>The root dir in Zookeeper for HBase cluster</td>
+    </tr>
+    <tr>
+      <td><h5>null-string-literal</h5></td>
+      <td>optional</td>
+      <td style="word-wrap: break-word;">null</td>
+      <td>String</td>
+      <td>Representation for null values for string fields. HBase source and sink encodes/decodes empty bytes as null values for all types except string type.</td>
+    </tr>
+    <tr>
+      <td><h5>sink.buffer-flush.max-size</h5></td>
+      <td>optional</td>
+      <td style="word-wrap: break-word;">2mb</td>
+      <td>MemorySize</td>
+      <td>Writing option, determines how many size in memory of buffered rows to insert per round trip.
+      This can improve performance for writing data to HBase database, but may increase the latency.
+      </td>
+    </tr>
+    <tr>
+      <td><h5>sink.buffer-flush.max-rows</h5></td>
+      <td>optional</td>
+      <td style="word-wrap: break-word;">(none)</td>
+      <td>Integer</td>
+      <td>Writing option, determines how many number of rows to insert per round trip.
+      This can improve performance for writing data to HBase database, but may increase the latency.
+      No default value, which means the default flushing is not depends on the number of buffered rows
+      </td>
+    </tr>
+    <tr>
+      <td><h5>sink.buffer-flush.interval</h5></td>
+      <td>optional</td>
+      <td style="word-wrap: break-word;">(none)</td>
+      <td>Duration</td>
+      <td>Writing option, sets a flush interval flushing buffered requesting if the interval passes, in milliseconds.
+      No default value, which means no asynchronous flush thread will be scheduled.

Review comment:
       `Writing option, decides the interval to flush requests regularly in milliseconds`.

##########
File path: docs/dev/table/connectors/hbase.md
##########
@@ -0,0 +1,291 @@
+---
+title: "HBase SQL Connector"
+nav-title: HBase
+nav-parent_id: sql-connectors
+nav-pos: 9
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<span class="label label-primary">Scan Source: Bounded</span>
+<span class="label label-primary">Lookup Source: Sync Mode</span>
+<span class="label label-primary">Sink: Batch</span>
+<span class="label label-primary">Sink: Streaming Upsert Mode</span>
+
+* This will be replaced by the TOC
+{:toc}
+
+The HBase connector allows for reading from and writing to an HBase cluster. This document describes how to setup the HBase Connector to run SQL queries against HBase.
+
+The connector can operate in upsert mode for exchange changelog messages with the external system using a primary key defined on the DDL. But the primary key can only be defined on the HBase rowkey field. If the PRIMARY KEY clause is not declared, the HBase connector will take rowkey as the primary key by default.
+
+<span class="label label-danger">Attention</span> HBase as a Lookup Source does not use any caching; data is always queried directly through the HBase client.
+

Review comment:
       caching -> cache.
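
A minimal sketch of using the table as a lookup source, assuming Flink's processing-time temporal join syntax and a hypothetical streaming table `orders` declared elsewhere with `proctime AS PROCTIME()`; every probe goes straight through the HBase client, since no lookup cache is involved.

{% highlight sql %}
-- 'orders' is a hypothetical streaming table with a proctime attribute;
-- each matching row triggers a direct read against HBase
SELECT o.order_id, h.family1.q1
FROM orders AS o
JOIN hTable FOR SYSTEM_TIME AS OF o.proctime AS h
  ON o.user_id = h.rowkey
{% endhighlight %}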

##########
File path: docs/dev/table/connectors/hbase.md
##########
@@ -0,0 +1,291 @@
+---
+title: "HBase SQL Connector"
+nav-title: HBase
+nav-parent_id: sql-connectors
+nav-pos: 9
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<span class="label label-primary">Scan Source: Bounded</span>
+<span class="label label-primary">Lookup Source: Sync Mode</span>
+<span class="label label-primary">Sink: Batch</span>

Review comment:
       `Lookup` or `Lookupable`? Our code uses the latter.

##########
File path: docs/dev/table/connectors/hbase.md
##########
@@ -0,0 +1,291 @@
+---
+title: "HBase SQL Connector"
+nav-title: HBase
+nav-parent_id: sql-connectors
+nav-pos: 9
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<span class="label label-primary">Scan Source: Bounded</span>
+<span class="label label-primary">Lookup Source: Sync Mode</span>
+<span class="label label-primary">Sink: Batch</span>
+<span class="label label-primary">Sink: Streaming Upsert Mode</span>
+
+* This will be replaced by the TOC
+{:toc}
+
+The HBase connector allows for reading from and writing to an HBase cluster. This document describes how to setup the HBase Connector to run SQL queries against HBase.
+
+The connector can operate in upsert mode for exchange changelog messages with the external system using a primary key defined on the DDL. But the primary key can only be defined on the HBase rowkey field. If the PRIMARY KEY clause is not declared, the HBase connector will take rowkey as the primary key by default.
+
+<span class="label label-danger">Attention</span> HBase as a Lookup Source does not use any caching; data is always queried directly through the HBase client.
+
+Dependencies
+------------
+
+In order to setup the HBase connector, the following table provide dependency information for both projects using a build automation tool (such as Maven or SBT) and SQL Client with SQL JAR bundles.
+
+{% if site.is_stable %}
+
+| HBase Version       | Maven dependency                                          | SQL Client JAR         |
+| :------------------ | :-------------------------------------------------------- | :----------------------|
+| 1.4.x               | `flink-connector-hbase{{site.scala_version_suffix}}`     | [Download](https://repo.maven.apache.org/maven2/org/apache/flink/flink-connector-hbase{{site.scala_version_suffix}}/{{site.version}}/flink-connector-hbase{{site.scala_version_suffix}}-{{site.version}}.jar) |
+
+{% else %}
+
+The dependency table is only available for stable releases.
+
+{% endif %}
+
+How to create an HBase table
+----------------

Review comment:
       an -> a




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #12386: [FLINK-17995][docs][connectors] Redesign Table & SQL Connectors page and add HBase connector documentation

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #12386:
URL: https://github.com/apache/flink/pull/12386#issuecomment-635398532


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "590be0e43b1f9e656fea887e8ef15c3aae07c55e",
       "status" : "SUCCESS",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2384",
       "triggerID" : "590be0e43b1f9e656fea887e8ef15c3aae07c55e",
       "triggerType" : "PUSH"
     }, {
       "hash" : "4068c3b38f5b7aff860b599df18e7058d75a5c86",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "4068c3b38f5b7aff860b599df18e7058d75a5c86",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 590be0e43b1f9e656fea887e8ef15c3aae07c55e Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2384) 
   * 4068c3b38f5b7aff860b599df18e7058d75a5c86 UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #12386: [FLINK-17995][docs][connectors] Redesign Table & SQL Connectors page and add HBase connector documentation

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #12386:
URL: https://github.com/apache/flink/pull/12386#issuecomment-635398532


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "590be0e43b1f9e656fea887e8ef15c3aae07c55e",
       "status" : "SUCCESS",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2384",
       "triggerID" : "590be0e43b1f9e656fea887e8ef15c3aae07c55e",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 590be0e43b1f9e656fea887e8ef15c3aae07c55e Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2384) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] wuchong commented on a change in pull request #12386: [FLINK-17995][docs][connectors] Redesign Table & SQL Connectors page and add HBase connector documentation

Posted by GitBox <gi...@apache.org>.
wuchong commented on a change in pull request #12386:
URL: https://github.com/apache/flink/pull/12386#discussion_r433605237



##########
File path: docs/dev/table/connectors/hbase.md
##########
@@ -0,0 +1,291 @@
+---
+title: "HBase SQL Connector"
+nav-title: HBase
+nav-parent_id: sql-connectors
+nav-pos: 9
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<span class="label label-primary">Scan Source: Bounded</span>
+<span class="label label-primary">Lookup Source: Sync Mode</span>
+<span class="label label-primary">Sink: Batch</span>
+<span class="label label-primary">Sink: Streaming Upsert Mode</span>
+
+* This will be replaced by the TOC
+{:toc}
+
+The HBase connector allows for reading from and writing to an HBase cluster. This document describes how to setup the HBase Connector to run SQL queries against HBase.
+
+The connector can operate in upsert mode for exchange changelog messages with the external system using a primary key defined on the DDL. But the primary key can only be defined on the HBase rowkey field. If the PRIMARY KEY clause is not declared, the HBase connector will take rowkey as the primary key by default.
+
+<span class="label label-danger">Attention</span> HBase as a Lookup Source does not use any caching; data is always queried directly through the HBase client.
+
+Dependencies
+------------
+
+In order to setup the HBase connector, the following table provide dependency information for both projects using a build automation tool (such as Maven or SBT) and SQL Client with SQL JAR bundles.
+
+{% if site.is_stable %}
+
+| HBase Version       | Maven dependency                                          | SQL Client JAR         |
+| :------------------ | :-------------------------------------------------------- | :----------------------|
+| 1.4.x               | `flink-connector-hbase{{site.scala_version_suffix}}`     | [Download](https://repo.maven.apache.org/maven2/org/apache/flink/flink-connector-hbase{{site.scala_version_suffix}}/{{site.version}}/flink-connector-hbase{{site.scala_version_suffix}}-{{site.version}}.jar) |
+
+{% else %}
+
+The dependency table is only available for stable releases.
+
+{% endif %}
+
+How to create an HBase table
+----------------
+
+All the column families in HBase table must be declared as ROW type, the field name maps to the column family name, and the nested field names map to the column qualifier names. There is no need to declare all the families and qualifiers in the schema, users can declare what’s necessary. Except the ROW type fields, the only one field of atomic type (e.g. STRING, BIGINT) will be recognized as HBase rowkey. The rowkey field can be arbitrary name.
+
+<div class="codetabs" markdown="1">
+<div data-lang="SQL" markdown="1">
+{% highlight sql %}
+CREATE TABLE hTable (
+ rowkey INT,
+ family1 ROW<q1 INT>,
+ family2 ROW<q2 STRING, q3 BIGINT>,
+ family3 ROW<q4 DOUBLE, q5 BOOLEAN, q6 STRING>,
+ PRIMARY KEY (rowkey) NOT ENFORCED
+) WITH (
+ 'connector' = 'hbase-1.4',
+ 'table-name' = 'mytable',
+ 'zookeeper.quorum' = 'localhost:2121'
+)
+{% endhighlight %}
+</div>
+</div>
+
+Connector Options
+----------------
+
+<table class="table table-bordered">
+    <thead>
+      <tr>
+        <th class="text-left" style="width: 25%">Option</th>
+        <th class="text-center" style="width: 8%">Required</th>
+        <th class="text-center" style="width: 7%">Default</th>
+        <th class="text-center" style="width: 10%">Type</th>
+        <th class="text-center" style="width: 50%">Description</th>
+      </tr>
+    </thead>
+    <tbody>
+    <tr>
+      <td><h5>connector</h5></td>
+      <td>required</td>
+      <td style="word-wrap: break-word;">(none)</td>
+      <td>String</td>
+      <td>Specify what connector to use, here should be 'hbase-1.4'.</td>
+    </tr>
+    <tr>
+      <td><h5>table-name</h5></td>
+      <td>required</td>
+      <td style="word-wrap: break-word;">(none)</td>
+      <td>String</td>
+      <td>The name of HBase table to connect.</td>
+    </tr>
+    <tr>
+      <td><h5>zookeeper.quorum</h5></td>
+      <td>required</td>
+      <td style="word-wrap: break-word;">(none)</td>
+      <td>String</td>
+      <td>The HBase Zookeeper quorum.</td>
+    </tr>
+    <tr>
+      <td><h5>zookeeper.znode.parent</h5></td>
+      <td>optional</td>
+      <td style="word-wrap: break-word;">/hbase</td>
+      <td>String</td>
+      <td>The root dir in Zookeeper for HBase cluster</td>
+    </tr>
+    <tr>
+      <td><h5>null-string-literal</h5></td>
+      <td>optional</td>
+      <td style="word-wrap: break-word;">null</td>
+      <td>String</td>
+      <td>Representation for null values for string fields. HBase source and sink encodes/decodes empty bytes as null values for all types except string type.</td>
+    </tr>
+    <tr>
+      <td><h5>sink.buffer-flush.max-size</h5></td>
+      <td>optional</td>
+      <td style="word-wrap: break-word;">2mb</td>
+      <td>MemorySize</td>
+      <td>Writing option, determines how many size in memory of buffered rows to insert per round trip.
+      This can improve performance for writing data to HBase database, but may increase the latency.
+      </td>
+    </tr>
+    <tr>
+      <td><h5>sink.buffer-flush.max-rows</h5></td>
+      <td>optional</td>
+      <td style="word-wrap: break-word;">(none)</td>
+      <td>Integer</td>
+      <td>Writing option, determines how many number of rows to insert per round trip.
+      This can improve performance for writing data to HBase database, but may increase the latency.
+      No default value, which means the default flushing is not depends on the number of buffered rows
+      </td>
+    </tr>
+    <tr>
+      <td><h5>sink.buffer-flush.interval</h5></td>
+      <td>optional</td>
+      <td style="word-wrap: break-word;">(none)</td>
+      <td>Duration</td>
+      <td>Writing option, sets a flush interval flushing buffered requesting if the interval passes, in milliseconds.
+      No default value, which means no asynchronous flush thread will be scheduled.

Review comment:
       I changed this to `the interval to flush buffered rows.`




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #12386: [FLINK-17995][docs][connectors] Redesign Table & SQL Connectors page and add HBase connector documentation

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #12386:
URL: https://github.com/apache/flink/pull/12386#issuecomment-635398532


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "590be0e43b1f9e656fea887e8ef15c3aae07c55e",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2384",
       "triggerID" : "590be0e43b1f9e656fea887e8ef15c3aae07c55e",
       "triggerType" : "PUSH"
     }, {
       "hash" : "4068c3b38f5b7aff860b599df18e7058d75a5c86",
       "status" : "SUCCESS",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2533",
       "triggerID" : "4068c3b38f5b7aff860b599df18e7058d75a5c86",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d2344ec6fed515e2eecafa9d8d593af8f3d8d3b4",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "d2344ec6fed515e2eecafa9d8d593af8f3d8d3b4",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 4068c3b38f5b7aff860b599df18e7058d75a5c86 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2533) 
   * d2344ec6fed515e2eecafa9d8d593af8f3d8d3b4 UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] wuchong commented on a change in pull request #12386: [FLINK-17995][docs][connectors] Redesign Table & SQL Connectors page and add HBase connector documentation

Posted by GitBox <gi...@apache.org>.
wuchong commented on a change in pull request #12386:
URL: https://github.com/apache/flink/pull/12386#discussion_r433601187



##########
File path: docs/dev/table/connectors/hbase.md
##########
@@ -0,0 +1,291 @@
+---
+title: "HBase SQL Connector"
+nav-title: HBase
+nav-parent_id: sql-connectors
+nav-pos: 9
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<span class="label label-primary">Scan Source: Bounded</span>
+<span class="label label-primary">Lookup Source: Sync Mode</span>
+<span class="label label-primary">Sink: Batch</span>
+<span class="label label-primary">Sink: Streaming Upsert Mode</span>
+
+* This will be replaced by the TOC
+{:toc}
+
+The HBase connector allows for reading from and writing to an HBase cluster. This document describes how to setup the HBase Connector to run SQL queries against HBase.
+
+The connector can operate in upsert mode for exchange changelog messages with the external system using a primary key defined on the DDL. But the primary key can only be defined on the HBase rowkey field. If the PRIMARY KEY clause is not declared, the HBase connector will take rowkey as the primary key by default.
+

Review comment:
       I think `upsert mode` is not an API, but a general term indicating that records are updated by key.
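
A rough illustration of that upsert behaviour, assuming hypothetical `clicks` source and `user_stats` HBase tables and the `ROW(...)` constructor for the family column: repeated results for the same key overwrite the existing HBase row instead of appending a new one.

{% highlight sql %}
CREATE TABLE user_stats (
 user_id STRING,
 cf ROW<cnt BIGINT>,
 PRIMARY KEY (user_id) NOT ENFORCED  -- rowkey; updates for the same user_id overwrite the row
) WITH (
 'connector' = 'hbase-1.4',
 'table-name' = 'user_stats',
 'zookeeper.quorum' = 'localhost:2121'
)

-- continuously maintained aggregate, written to HBase in upsert fashion
INSERT INTO user_stats
SELECT user_id, ROW(cnt)
FROM (SELECT user_id, COUNT(*) AS cnt FROM clicks GROUP BY user_id) AS t
{% endhighlight %}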




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #12386: [FLINK-17995][docs][connectors] Redesign Table & SQL Connectors page and add HBase connector documentation

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #12386:
URL: https://github.com/apache/flink/pull/12386#issuecomment-635398532


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "590be0e43b1f9e656fea887e8ef15c3aae07c55e",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2384",
       "triggerID" : "590be0e43b1f9e656fea887e8ef15c3aae07c55e",
       "triggerType" : "PUSH"
     }, {
       "hash" : "4068c3b38f5b7aff860b599df18e7058d75a5c86",
       "status" : "SUCCESS",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2533",
       "triggerID" : "4068c3b38f5b7aff860b599df18e7058d75a5c86",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d2344ec6fed515e2eecafa9d8d593af8f3d8d3b4",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2604",
       "triggerID" : "d2344ec6fed515e2eecafa9d8d593af8f3d8d3b4",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 4068c3b38f5b7aff860b599df18e7058d75a5c86 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2533) 
   * d2344ec6fed515e2eecafa9d8d593af8f3d8d3b4 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2604) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] wuchong commented on a change in pull request #12386: [FLINK-17995][docs][connectors] Redesign Table & SQL Connectors page and add HBase connector documentation

Posted by GitBox <gi...@apache.org>.
wuchong commented on a change in pull request #12386:
URL: https://github.com/apache/flink/pull/12386#discussion_r433604088



##########
File path: docs/dev/table/connectors/hbase.md
##########
@@ -0,0 +1,291 @@
+---
+title: "HBase SQL Connector"
+nav-title: HBase
+nav-parent_id: sql-connectors
+nav-pos: 9
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<span class="label label-primary">Scan Source: Bounded</span>
+<span class="label label-primary">Lookup Source: Sync Mode</span>
+<span class="label label-primary">Sink: Batch</span>
+<span class="label label-primary">Sink: Streaming Upsert Mode</span>
+
+* This will be replaced by the TOC
+{:toc}
+
+The HBase connector allows for reading from and writing to an HBase cluster. This document describes how to setup the HBase Connector to run SQL queries against HBase.
+
+The connector can operate in upsert mode for exchange changelog messages with the external system using a primary key defined on the DDL. But the primary key can only be defined on the HBase rowkey field. If the PRIMARY KEY clause is not declared, the HBase connector will take rowkey as the primary key by default.
+
+<span class="label label-danger">Attention</span> HBase as a Lookup Source does not use any caching; data is always queried directly through the HBase client.
+
+Dependencies
+------------
+
+In order to setup the HBase connector, the following table provide dependency information for both projects using a build automation tool (such as Maven or SBT) and SQL Client with SQL JAR bundles.
+
+{% if site.is_stable %}
+
+| HBase Version       | Maven dependency                                          | SQL Client JAR         |
+| :------------------ | :-------------------------------------------------------- | :----------------------|
+| 1.4.x               | `flink-connector-hbase{{site.scala_version_suffix}}`     | [Download](https://repo.maven.apache.org/maven2/org/apache/flink/flink-connector-hbase{{site.scala_version_suffix}}/{{site.version}}/flink-connector-hbase{{site.scala_version_suffix}}-{{site.version}}.jar) |
+
+{% else %}
+
+The dependency table is only available for stable releases.
+
+{% endif %}
+
+How to create an HBase table
+----------------
+
+All the column families in HBase table must be declared as ROW type, the field name maps to the column family name, and the nested field names map to the column qualifier names. There is no need to declare all the families and qualifiers in the schema, users can declare what’s necessary. Except the ROW type fields, the only one field of atomic type (e.g. STRING, BIGINT) will be recognized as HBase rowkey. The rowkey field can be arbitrary name.
+
+<div class="codetabs" markdown="1">
+<div data-lang="SQL" markdown="1">
+{% highlight sql %}
+CREATE TABLE hTable (
+ rowkey INT,
+ family1 ROW<q1 INT>,
+ family2 ROW<q2 STRING, q3 BIGINT>,
+ family3 ROW<q4 DOUBLE, q5 BOOLEAN, q6 STRING>,
+ PRIMARY KEY (rowkey) NOT ENFORCED
+) WITH (
+ 'connector' = 'hbase-1.4',
+ 'table-name' = 'mytable',
+ 'zookeeper.quorum' = 'localhost:2121'
+)
+{% endhighlight %}
+</div>
+</div>
+
+Connector Options
+----------------
+
+<table class="table table-bordered">
+    <thead>
+      <tr>
+        <th class="text-left" style="width: 25%">Option</th>
+        <th class="text-center" style="width: 8%">Required</th>
+        <th class="text-center" style="width: 7%">Default</th>
+        <th class="text-center" style="width: 10%">Type</th>
+        <th class="text-center" style="width: 50%">Description</th>
+      </tr>
+    </thead>
+    <tbody>
+    <tr>
+      <td><h5>connector</h5></td>
+      <td>required</td>
+      <td style="word-wrap: break-word;">(none)</td>
+      <td>String</td>
+      <td>Specify what connector to use, here should be 'hbase-1.4'.</td>
+    </tr>
+    <tr>
+      <td><h5>table-name</h5></td>
+      <td>required</td>
+      <td style="word-wrap: break-word;">(none)</td>
+      <td>String</td>
+      <td>The name of HBase table to connect.</td>
+    </tr>
+    <tr>
+      <td><h5>zookeeper.quorum</h5></td>
+      <td>required</td>
+      <td style="word-wrap: break-word;">(none)</td>
+      <td>String</td>
+      <td>The HBase Zookeeper quorum.</td>
+    </tr>
+    <tr>
+      <td><h5>zookeeper.znode.parent</h5></td>
+      <td>optional</td>
+      <td style="word-wrap: break-word;">/hbase</td>
+      <td>String</td>
+      <td>The root dir in Zookeeper for HBase cluster</td>
+    </tr>
+    <tr>
+      <td><h5>null-string-literal</h5></td>
+      <td>optional</td>
+      <td style="word-wrap: break-word;">null</td>
+      <td>String</td>
+      <td>Representation for null values for string fields. HBase source and sink encodes/decodes empty bytes as null values for all types except string type.</td>
+    </tr>
+    <tr>
+      <td><h5>sink.buffer-flush.max-size</h5></td>
+      <td>optional</td>
+      <td style="word-wrap: break-word;">2mb</td>
+      <td>MemorySize</td>
+      <td>Writing option, determines how many size in memory of buffered rows to insert per round trip.
+      This can improve performance for writing data to HBase database, but may increase the latency.

Review comment:
       I changed this to `maximum size in memory of buffered rows per writing request`. I think we should make the `size in memory` more visible to users. What do you think?




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] wuchong commented on a change in pull request #12386: [FLINK-17995][docs][connectors] Redesign Table & SQL Connectors page and add HBase connector documentation

Posted by GitBox <gi...@apache.org>.
wuchong commented on a change in pull request #12386:
URL: https://github.com/apache/flink/pull/12386#discussion_r433606653



##########
File path: docs/dev/table/connectors/index.md
##########
@@ -0,0 +1,268 @@
+---
+title: "Table & SQL Connectors"
+nav-id: sql-connectors
+nav-parent_id: connectors-root
+nav-pos: 2
+nav-show_overview: true
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+Flink's Table API & SQL programs can be connected to other external systems for reading and writing both batch and streaming tables. A table source provides access to data which is stored in external systems (such as a database, key-value store, message queue, or file system). A table sink emits a table to an external storage system. Depending on the type of source and sink, they support different formats such as CSV, Avro, Parquet, or ORC.
+
+This page describes how to declare built-in table sources and/or table sinks and register them in Flink. After a source or sink has been registered, it can be accessed by Table API & SQL statements.
+
+<span class="label label-info">NOTE</span> If you want to implement your own *custom* table source or sink, have a look at the [user-defined sources & sinks page](sourceSinks.html).
+
+<span class="label label-danger">Attention</span> Flink Table & SQL introduces a new set of connector options since 1.11.0, if you are using the legacy connector options, please refer to the [legacy documentation]({{ site.baseurl }}/dev/table/connect.html).
+
+* This will be replaced by the TOC
+{:toc}
+
+Supported Connectors
+------------
+
+Flink natively support various connectors. The following tables list all available connectors.
+
+<table class="table table-bordered">
+    <thead>
+      <tr>
+        <th class="text-left">Name</th>
+        <th class="text-center">Version</th>
+        <th class="text-center">Source</th>
+        <th class="text-center">Sink</th>
+      </tr>
+    </thead>
+    <tbody>
+    <tr>
+      <td>Filesystem</td>
+      <td></td>
+      <td>Bounded and Unbounded Scan, Lookup</td>
+      <td>Streaming Sink, Batch Sink</td>
+    </tr>
+    <tr>
+      <td>Elasticsearch</td>
+      <td>6.x & 7.x</td>
+      <td>Not supported</td>
+      <td>Streaming Sink, Batch Sink</td>
+    </tr>
+    <tr>
+      <td>Apache Kafka</td>
+      <td>0.10+</td>
+      <td>Unbounded Scan</td>
+      <td>Streaming Sink, Batch Sink</td>
+    </tr>
+    <tr>
+      <td>JDBC</td>
+      <td></td>
+      <td>Bounded Scan, Lookup</td>
+      <td>Streaming Sink, Batch Sink</td>
+    </tr>
+    <tr>
+      <td><a href="{{ site.baseurl }}/dev/table/connectors/hbase.html">Apache HBase</a></td>
+      <td>1.4.x</td>
+      <td>Bounded Scan, Lookup</td>
+      <td>Streaming Sink, Batch Sink</td>
+    </tr>
+    </tbody>
+</table>
+
+{% top %}
+
+How to use connectors
+--------
+
+Flink supports to use SQL CREATE TABLE statement to register a table. One can define the name of the table, the schema of the table, the connector options for connecting to an external system.
+
+The following code shows a full example of how to connect to Kafka for reading Json records.
+
+<div class="codetabs" markdown="1">
+<div data-lang="SQL" markdown="1">
+{% highlight sql %}
+CREATE TABLE MyUserTable (
+  -- declare the schema of the table
+  `user` BIGINT,
+  message STRING,
+  ts TIMESTAMP,
+  proctime AS PROCTIME(), -- use computed column to define proctime attribute
+  WATERMARK FOR ts AS ts - INTERVAL '5' SECOND  -- use WATERMARK statement to define rowtime attribute
+) WITH (
+  -- declare the external system to connect to
+  'connector' = 'kafka',
+  'topic' = 'topic_name',
+  'scan.startup.mode' = 'earliest-offset',
+  'properties.bootstrap.servers' = 'localhost:9092',
+  'format' = 'json'   -- declare a format for this system
+)
+{% endhighlight %}
+</div>
+</div>
+
+In this ways the desired connection properties are converted into normalized, string-based key-value pairs. So-called [table factories](sourceSinks.html#define-a-tablefactory) create configured table sources, table sinks, and corresponding formats from the key-value pairs. All table factories that can be found via Java's [Service Provider Interfaces (SPI)](https://docs.oracle.com/javase/tutorial/sound/SPI-intro.html) are taken into account when searching for exactly-one matching table factory.
+

Review comment:
       I copied this paragraph from the original connector page. I think it means lower-casing?




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #12386: [FLINK-17995][docs][connectors] Redesign Table & SQL Connectors page and add HBase connector documentation

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #12386:
URL: https://github.com/apache/flink/pull/12386#issuecomment-635398532


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "590be0e43b1f9e656fea887e8ef15c3aae07c55e",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2384",
       "triggerID" : "590be0e43b1f9e656fea887e8ef15c3aae07c55e",
       "triggerType" : "PUSH"
     }, {
       "hash" : "4068c3b38f5b7aff860b599df18e7058d75a5c86",
       "status" : "SUCCESS",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2533",
       "triggerID" : "4068c3b38f5b7aff860b599df18e7058d75a5c86",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 4068c3b38f5b7aff860b599df18e7058d75a5c86 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2533) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot commented on pull request #12386: [FLINK-17995][docs][connectors] Redesign Table & SQL Connectors page and add HBase connector documentation

Posted by GitBox <gi...@apache.org>.
flinkbot commented on pull request #12386:
URL: https://github.com/apache/flink/pull/12386#issuecomment-635381848


   Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress of the review.
   
   
   ## Automated Checks
   Last check on commit 590be0e43b1f9e656fea887e8ef15c3aae07c55e (Thu May 28 14:23:52 UTC 2020)
   
    ✅no warnings
   
   <sub>Mention the bot in a comment to re-run the automated checks.</sub>
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process.<details>
    The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`)
    - `@flinkbot approve all` to approve all aspects
    - `@flinkbot approve-until architecture` to approve everything until `architecture`
    - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention
    - `@flinkbot disapprove architecture` to remove an approval you gave earlier
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #12386: [FLINK-17995][docs][connectors] Redesign Table & SQL Connectors page and add HBase connector documentation

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #12386:
URL: https://github.com/apache/flink/pull/12386#issuecomment-635398532


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "590be0e43b1f9e656fea887e8ef15c3aae07c55e",
       "status" : "SUCCESS",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2384",
       "triggerID" : "590be0e43b1f9e656fea887e8ef15c3aae07c55e",
       "triggerType" : "PUSH"
     }, {
       "hash" : "4068c3b38f5b7aff860b599df18e7058d75a5c86",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2533",
       "triggerID" : "4068c3b38f5b7aff860b599df18e7058d75a5c86",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 590be0e43b1f9e656fea887e8ef15c3aae07c55e Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2384) 
   * 4068c3b38f5b7aff860b599df18e7058d75a5c86 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2533) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>





[GitHub] [flink] wuchong commented on pull request #12386: [FLINK-17995][docs][connectors] Redesign Table & SQL Connectors page and add HBase connector documentation

Posted by GitBox <gi...@apache.org>.
wuchong commented on pull request #12386:
URL: https://github.com/apache/flink/pull/12386#issuecomment-637258814


   Thanks for the review, @danny0405. I have updated the PR. 





[GitHub] [flink] flinkbot edited a comment on pull request #12386: [FLINK-17995][docs][connectors] Redesign Table & SQL Connectors page and add HBase connector documentation

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #12386:
URL: https://github.com/apache/flink/pull/12386#issuecomment-635398532


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "590be0e43b1f9e656fea887e8ef15c3aae07c55e",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2384",
       "triggerID" : "590be0e43b1f9e656fea887e8ef15c3aae07c55e",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 590be0e43b1f9e656fea887e8ef15c3aae07c55e Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2384) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>





[GitHub] [flink] flinkbot commented on pull request #12386: [FLINK-17995][docs][connectors] Redesign Table & SQL Connectors page and add HBase connector documentation

Posted by GitBox <gi...@apache.org>.
flinkbot commented on pull request #12386:
URL: https://github.com/apache/flink/pull/12386#issuecomment-635398532


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "590be0e43b1f9e656fea887e8ef15c3aae07c55e",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "590be0e43b1f9e656fea887e8ef15c3aae07c55e",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 590be0e43b1f9e656fea887e8ef15c3aae07c55e UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>





[GitHub] [flink] wuchong commented on a change in pull request #12386: [FLINK-17995][docs][connectors] Redesign Table & SQL Connectors page and add HBase connector documentation

Posted by GitBox <gi...@apache.org>.
wuchong commented on a change in pull request #12386:
URL: https://github.com/apache/flink/pull/12386#discussion_r434329017



##########
File path: docs/dev/table/connectors/index.md
##########
@@ -0,0 +1,268 @@
+---
+title: "Table & SQL Connectors"
+nav-id: sql-connectors
+nav-parent_id: connectors-root
+nav-pos: 2
+nav-show_overview: true
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+Flink's Table API & SQL programs can be connected to other external systems for reading and writing both batch and streaming tables. A table source provides access to data which is stored in external systems (such as a database, key-value store, message queue, or file system). A table sink emits a table to an external storage system. Depending on the type of source and sink, they support different formats such as CSV, Avro, Parquet, or ORC.
+
+This page describes how to declare built-in table sources and/or table sinks and register them in Flink. After a source or sink has been registered, it can be accessed by Table API & SQL statements.
+
+<span class="label label-info">NOTE</span> If you want to implement your own *custom* table source or sink, have a look at the [user-defined sources & sinks page](sourceSinks.html).
+
+<span class="label label-danger">Attention</span> Flink Table & SQL introduces a new set of connector options since 1.11.0, if you are using the legacy connector options, please refer to the [legacy documentation]({{ site.baseurl }}/dev/table/connect.html).
+
+* This will be replaced by the TOC
+{:toc}
+
+Supported Connectors
+------------
+
+Flink natively support various connectors. The following tables list all available connectors.
+
+<table class="table table-bordered">
+    <thead>
+      <tr>
+        <th class="text-left">Name</th>
+        <th class="text-center">Version</th>
+        <th class="text-center">Source</th>
+        <th class="text-center">Sink</th>
+      </tr>
+    </thead>
+    <tbody>
+    <tr>
+      <td>Filesystem</td>
+      <td></td>
+      <td>Bounded and Unbounded Scan, Lookup</td>
+      <td>Streaming Sink, Batch Sink</td>
+    </tr>
+    <tr>
+      <td>Elasticsearch</td>
+      <td>6.x & 7.x</td>
+      <td>Not supported</td>
+      <td>Streaming Sink, Batch Sink</td>
+    </tr>
+    <tr>
+      <td>Apache Kafka</td>
+      <td>0.10+</td>
+      <td>Unbounded Scan</td>
+      <td>Streaming Sink, Batch Sink</td>
+    </tr>
+    <tr>
+      <td>JDBC</td>
+      <td></td>
+      <td>Bounded Scan, Lookup</td>
+      <td>Streaming Sink, Batch Sink</td>
+    </tr>
+    <tr>
+      <td><a href="{{ site.baseurl }}/dev/table/connectors/hbase.html">Apache HBase</a></td>
+      <td>1.4.x</td>
+      <td>Bounded Scan, Lookup</td>
+      <td>Streaming Sink, Batch Sink</td>
+    </tr>
+    </tbody>
+</table>
+
+{% top %}
+
+How to use connectors
+--------
+
+Flink supports to use SQL CREATE TABLE statement to register a table. One can define the name of the table, the schema of the table, the connector options for connecting to an external system.
+
+The following code shows a full example of how to connect to Kafka for reading Json records.
+
+<div class="codetabs" markdown="1">
+<div data-lang="SQL" markdown="1">
+{% highlight sql %}
+CREATE TABLE MyUserTable (
+  -- declare the schema of the table
+  `user` BIGINT,
+  message STRING,
+  ts TIMESTAMP,
+  proctime AS PROCTIME(), -- use computed column to define proctime attribute
+  WATERMARK FOR ts AS ts - INTERVAL '5' SECOND  -- use WATERMARK statement to define rowtime attribute
+) WITH (
+  -- declare the external system to connect to
+  'connector' = 'kafka',
+  'topic' = 'topic_name',
+  'scan.startup.mode' = 'earliest-offset',
+  'properties.bootstrap.servers' = 'localhost:9092',
+  'format' = 'json'   -- declare a format for this system
+)
+{% endhighlight %}
+</div>
+</div>
+
+In this ways the desired connection properties are converted into normalized, string-based key-value pairs. So-called [table factories](sourceSinks.html#define-a-tablefactory) create configured table sources, table sinks, and corresponding formats from the key-value pairs. All table factories that can be found via Java's [Service Provider Interfaces (SPI)](https://docs.oracle.com/javase/tutorial/sound/SPI-intro.html) are taken into account when searching for exactly-one matching table factory.
+

Review comment:
       I removed `normalized` because it is an implementation detail; users shouldn't need to care about it. 







[GitHub] [flink] wuchong commented on pull request #12386: [FLINK-17995][docs][connectors] Redesign Table & SQL Connectors page and add HBase connector documentation

Posted by GitBox <gi...@apache.org>.
wuchong commented on pull request #12386:
URL: https://github.com/apache/flink/pull/12386#issuecomment-638098079


   Thanks for the review, @danny0405. Merging...





[GitHub] [flink] danny0405 commented on a change in pull request #12386: [FLINK-17995][docs][connectors] Redesign Table & SQL Connectors page and add HBase connector documentation

Posted by GitBox <gi...@apache.org>.
danny0405 commented on a change in pull request #12386:
URL: https://github.com/apache/flink/pull/12386#discussion_r434281449



##########
File path: docs/dev/table/connectors/index.zh.md
##########
@@ -0,0 +1,268 @@
+---
+title: "Table & SQL Connectors"
+nav-id: sql-connectors
+nav-parent_id: connectors-root
+nav-pos: 2
+nav-show_overview: true
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+Flink's Table API & SQL programs can be connected to other external systems for reading and writing both batch and streaming tables. A table source provides access to data which is stored in external systems (such as a database, key-value store, message queue, or file system). A table sink emits a table to an external storage system. Depending on the type of source and sink, they support different formats such as CSV, Avro, Parquet, or ORC.
+
+This page describes how to register table sources and table sinks in Flink using the natively supported connectors. After a source or sink has been registered, it can be accessed by Table API & SQL statements.
+
+<span class="label label-info">NOTE</span> If you want to implement your own *custom* table source or sink, have a look at the [user-defined sources & sinks page](sourceSinks.html).
+
+<span class="label label-danger">Attention</span> Flink Table & SQL introduces a new set of connector options since 1.11.0, if you are using the legacy connector options, please refer to the [legacy documentation]({{ site.baseurl }}/dev/table/connect.html).
+
+* This will be replaced by the TOC
+{:toc}
+
+Supported Connectors
+------------
+
+Flink natively support various connectors. The following tables list all available connectors.
+
+<table class="table table-bordered">
+    <thead>
+      <tr>
+        <th class="text-left">Name</th>
+        <th class="text-center">Version</th>
+        <th class="text-center">Source</th>
+        <th class="text-center">Sink</th>
+      </tr>
+    </thead>
+    <tbody>
+    <tr>
+      <td>Filesystem</td>
+      <td></td>
+      <td>Bounded and Unbounded Scan, Lookup</td>
+      <td>Streaming Sink, Batch Sink</td>
+    </tr>
+    <tr>
+      <td>Elasticsearch</td>
+      <td>6.x & 7.x</td>
+      <td>Not supported</td>
+      <td>Streaming Sink, Batch Sink</td>
+    </tr>
+    <tr>
+      <td>Apache Kafka</td>
+      <td>0.10+</td>
+      <td>Unbounded Scan</td>
+      <td>Streaming Sink, Batch Sink</td>
+    </tr>
+    <tr>
+      <td>JDBC</td>
+      <td></td>
+      <td>Bounded Scan, Lookup</td>
+      <td>Streaming Sink, Batch Sink</td>
+    </tr>
+    <tr>
+      <td><a href="{{ site.baseurl }}/dev/table/connectors/hbase.html">Apache HBase</a></td>
+      <td>1.4.x</td>
+      <td>Bounded Scan, Lookup</td>
+      <td>Streaming Sink, Batch Sink</td>
+    </tr>
+    </tbody>
+</table>
+
+{% top %}
+
+How to use connectors
+--------
+
+Flink supports to use SQL CREATE TABLE statement to register a table. One can define the name of the table, the schema of the table, the connector options for connecting to an external system.
+
+The following code shows a full example of how to connect to Kafka for reading Json records.
+
+<div class="codetabs" markdown="1">
+<div data-lang="SQL" markdown="1">
+{% highlight sql %}
+CREATE TABLE MyUserTable (
+  -- declare the schema of the table
+  `user` BIGINT,
+  message STRING,
+  ts TIMESTAMP,
+  proctime AS PROCTIME(), -- use computed column to define proctime attribute
+  WATERMARK FOR ts AS ts - INTERVAL '5' SECOND  -- use WATERMARK statement to define rowtime attribute
+) WITH (
+  -- declare the external system to connect to
+  'connector' = 'kafka',
+  'topic' = 'topic_name',
+  'scan.startup.mode' = 'earliest-offset',
+  'properties.bootstrap.servers' = 'localhost:9092',
+  'format' = 'json'   -- declare a format for this system
+)
+{% endhighlight %}
+</div>
+</div>
+
+In this ways the desired connection properties are converted into normalized, string-based key-value pairs. So-called [table factories](sourceSinks.html#define-a-tablefactory) create configured table sources, table sinks, and corresponding formats from the key-value pairs. All table factories that can be found via Java's [Service Provider Interfaces (SPI)](https://docs.oracle.com/javase/tutorial/sound/SPI-intro.html) are taken into account when searching for exactly-one matching table factory.
+
+If no factory can be found or multiple factories match for the given properties, an exception will be thrown with additional information about considered factories and supported properties.
+
+{% top %}
+
+Schema Mapping
+------------
+
+The body clause of a SQL `CREATE TABLE` statement defines the names and types of columns, constraints and watermarks. Flink doesn't hold the data, thus the schema definition only declares how to map types from an external system to Flink’s representation. The mapping may not be mapped by names, it depends on the implementation of formats and connectors. For example, a MySQL database table is mapped by field names (not case sensitive), and a CSV filesystem is mapped by field order (field names can be arbitrary). This will be explained in every connectors.
+
+The following example shows a simple schema without time attributes and one-to-one field mapping of input/output to table columns.
+
+<div class="codetabs" markdown="1">
+<div data-lang="SQL" markdown="1">
+{% highlight sql %}
+CREATE TABLE MyTable (
+  MyField1 INT,
+  MyField2 STRING,
+  MyField3 BOOLEAN
+) WITH (
+  ...
+)
+{% endhighlight %}
+</div>
+</div>
+
+### Primary Key
+
+Primary key constraints tell that a column or a set of columns of a table are unique and they do not contain nulls. Primary key uniquely identify a row in a table.
+
+The primary key of a source table is a metadata information for optimization. The primary key of a sink table is usually used by the sink implementation for upserting.
+
+SQL standard specifies that a constraint can either be ENFORCED or NOT ENFORCED. This controls if the constraint checks are performed on the incoming/outgoing data. Flink does not own the data therefore the only mode we want to support is the NOT ENFORCED mode. Its up to the user to ensure that the query enforces key integrity.
+
+<div class="codetabs" markdown="1">
+<div data-lang="SQL" markdown="1">
+{% highlight sql %}
+CREATE TABLE MyTable (
+  MyField1 INT,
+  MyField2 STRING,
+  MyField3 BOOLEAN,
+  PRIMARY KEY (MyField1, MyField2) NOT ENFORCED  -- defines a primary key on columns
+) WITH (
+  ...
+)
+{% endhighlight %}
+</div>
+</div>
+
+### Time Attributes
+
+Time attributes are essential when working with unbounded streaming tables. Therefore both proctime and rowtime attributes can be defined as part of the schema.
+

Review comment:
       Remove the "Therefore".

##########
File path: docs/dev/table/connectors/hbase.md
##########
@@ -0,0 +1,291 @@
+---
+title: "HBase SQL Connector"
+nav-title: HBase
+nav-parent_id: sql-connectors
+nav-pos: 9
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<span class="label label-primary">Scan Source: Bounded</span>
+<span class="label label-primary">Lookup Source: Sync Mode</span>
+<span class="label label-primary">Sink: Batch</span>

Review comment:
       Okay, I didn't notice that :)

##########
File path: docs/dev/table/connectors/index.md
##########
@@ -0,0 +1,268 @@
+---
+title: "Table & SQL Connectors"
+nav-id: sql-connectors
+nav-parent_id: connectors-root
+nav-pos: 2
+nav-show_overview: true
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+Flink's Table API & SQL programs can be connected to other external systems for reading and writing both batch and streaming tables. A table source provides access to data which is stored in external systems (such as a database, key-value store, message queue, or file system). A table sink emits a table to an external storage system. Depending on the type of source and sink, they support different formats such as CSV, Avro, Parquet, or ORC.
+
+This page describes how to register table sources and table sinks in Flink using the natively supported connectors. After a source or sink has been registered, it can be accessed by Table API & SQL statements.
+
+<span class="label label-info">NOTE</span> If you want to implement your own *custom* table source or sink, have a look at the [user-defined sources & sinks page](sourceSinks.html).
+
+<span class="label label-danger">Attention</span> Flink Table & SQL introduces a new set of connector options since 1.11.0, if you are using the legacy connector options, please refer to the [legacy documentation]({{ site.baseurl }}/dev/table/connect.html).
+
+* This will be replaced by the TOC
+{:toc}
+
+Supported Connectors
+------------
+
+Flink natively support various connectors. The following tables list all available connectors.
+
+<table class="table table-bordered">
+    <thead>
+      <tr>
+        <th class="text-left">Name</th>
+        <th class="text-center">Version</th>
+        <th class="text-center">Source</th>
+        <th class="text-center">Sink</th>
+      </tr>
+    </thead>
+    <tbody>
+    <tr>
+      <td>Filesystem</td>
+      <td></td>
+      <td>Bounded and Unbounded Scan, Lookup</td>
+      <td>Streaming Sink, Batch Sink</td>
+    </tr>
+    <tr>
+      <td>Elasticsearch</td>
+      <td>6.x & 7.x</td>
+      <td>Not supported</td>
+      <td>Streaming Sink, Batch Sink</td>
+    </tr>
+    <tr>
+      <td>Apache Kafka</td>
+      <td>0.10+</td>
+      <td>Unbounded Scan</td>
+      <td>Streaming Sink, Batch Sink</td>
+    </tr>
+    <tr>
+      <td>JDBC</td>
+      <td></td>
+      <td>Bounded Scan, Lookup</td>
+      <td>Streaming Sink, Batch Sink</td>
+    </tr>
+    <tr>
+      <td><a href="{{ site.baseurl }}/dev/table/connectors/hbase.html">Apache HBase</a></td>
+      <td>1.4.x</td>
+      <td>Bounded Scan, Lookup</td>
+      <td>Streaming Sink, Batch Sink</td>
+    </tr>
+    </tbody>
+</table>
+
+{% top %}
+
+How to use connectors
+--------
+
+Flink supports to use SQL CREATE TABLE statement to register a table. One can define the name of the table, the schema of the table, the connector options for connecting to an external system.
+

Review comment:
       `One can define ...` -> `One can define the table name, the table schema and the table options for ...`

##########
File path: docs/dev/table/connectors/index.zh.md
##########
@@ -0,0 +1,268 @@
+---
+title: "Table & SQL Connectors"
+nav-id: sql-connectors
+nav-parent_id: connectors-root
+nav-pos: 2
+nav-show_overview: true
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+Flink's Table API & SQL programs can be connected to other external systems for reading and writing both batch and streaming tables. A table source provides access to data which is stored in external systems (such as a database, key-value store, message queue, or file system). A table sink emits a table to an external storage system. Depending on the type of source and sink, they support different formats such as CSV, Avro, Parquet, or ORC.
+
+This page describes how to register table sources and table sinks in Flink using the natively supported connectors. After a source or sink has been registered, it can be accessed by Table API & SQL statements.
+
+<span class="label label-info">NOTE</span> If you want to implement your own *custom* table source or sink, have a look at the [user-defined sources & sinks page](sourceSinks.html).
+
+<span class="label label-danger">Attention</span> Flink Table & SQL introduces a new set of connector options since 1.11.0, if you are using the legacy connector options, please refer to the [legacy documentation]({{ site.baseurl }}/dev/table/connect.html).
+
+* This will be replaced by the TOC
+{:toc}
+
+Supported Connectors
+------------
+
+Flink natively support various connectors. The following tables list all available connectors.
+
+<table class="table table-bordered">
+    <thead>
+      <tr>
+        <th class="text-left">Name</th>
+        <th class="text-center">Version</th>
+        <th class="text-center">Source</th>
+        <th class="text-center">Sink</th>
+      </tr>
+    </thead>
+    <tbody>
+    <tr>
+      <td>Filesystem</td>
+      <td></td>
+      <td>Bounded and Unbounded Scan, Lookup</td>
+      <td>Streaming Sink, Batch Sink</td>
+    </tr>
+    <tr>
+      <td>Elasticsearch</td>
+      <td>6.x & 7.x</td>
+      <td>Not supported</td>
+      <td>Streaming Sink, Batch Sink</td>
+    </tr>
+    <tr>
+      <td>Apache Kafka</td>
+      <td>0.10+</td>
+      <td>Unbounded Scan</td>
+      <td>Streaming Sink, Batch Sink</td>
+    </tr>
+    <tr>
+      <td>JDBC</td>
+      <td></td>
+      <td>Bounded Scan, Lookup</td>
+      <td>Streaming Sink, Batch Sink</td>
+    </tr>
+    <tr>
+      <td><a href="{{ site.baseurl }}/dev/table/connectors/hbase.html">Apache HBase</a></td>
+      <td>1.4.x</td>
+      <td>Bounded Scan, Lookup</td>
+      <td>Streaming Sink, Batch Sink</td>
+    </tr>
+    </tbody>
+</table>
+
+{% top %}
+
+How to use connectors
+--------
+
+Flink supports to use SQL CREATE TABLE statement to register a table. One can define the name of the table, the schema of the table, the connector options for connecting to an external system.
+
+The following code shows a full example of how to connect to Kafka for reading Json records.
+
+<div class="codetabs" markdown="1">
+<div data-lang="SQL" markdown="1">
+{% highlight sql %}
+CREATE TABLE MyUserTable (
+  -- declare the schema of the table
+  `user` BIGINT,
+  message STRING,
+  ts TIMESTAMP,
+  proctime AS PROCTIME(), -- use computed column to define proctime attribute
+  WATERMARK FOR ts AS ts - INTERVAL '5' SECOND  -- use WATERMARK statement to define rowtime attribute
+) WITH (
+  -- declare the external system to connect to
+  'connector' = 'kafka',
+  'topic' = 'topic_name',
+  'scan.startup.mode' = 'earliest-offset',
+  'properties.bootstrap.servers' = 'localhost:9092',
+  'format' = 'json'   -- declare a format for this system
+)
+{% endhighlight %}
+</div>
+</div>
+
+In this ways the desired connection properties are converted into normalized, string-based key-value pairs. So-called [table factories](sourceSinks.html#define-a-tablefactory) create configured table sources, table sinks, and corresponding formats from the key-value pairs. All table factories that can be found via Java's [Service Provider Interfaces (SPI)](https://docs.oracle.com/javase/tutorial/sound/SPI-intro.html) are taken into account when searching for exactly-one matching table factory.
+
+If no factory can be found or multiple factories match for the given properties, an exception will be thrown with additional information about considered factories and supported properties.
+
+{% top %}
+
+Schema Mapping
+------------
+
+The body clause of a SQL `CREATE TABLE` statement defines the names and types of columns, constraints and watermarks. Flink doesn't hold the data, thus the schema definition only declares how to map types from an external system to Flink’s representation. The mapping may not be mapped by names, it depends on the implementation of formats and connectors. For example, a MySQL database table is mapped by field names (not case sensitive), and a CSV filesystem is mapped by field order (field names can be arbitrary). This will be explained in every connectors.
+
+The following example shows a simple schema without time attributes and one-to-one field mapping of input/output to table columns.
+
+<div class="codetabs" markdown="1">
+<div data-lang="SQL" markdown="1">
+{% highlight sql %}
+CREATE TABLE MyTable (
+  MyField1 INT,
+  MyField2 STRING,
+  MyField3 BOOLEAN
+) WITH (
+  ...
+)
+{% endhighlight %}
+</div>
+</div>
+
+### Primary Key
+
+Primary key constraints tell that a column or a set of columns of a table are unique and they do not contain nulls. Primary key uniquely identify a row in a table.
+

Review comment:
       identify -> identifies.

##########
File path: docs/dev/table/connectors/index.md
##########
@@ -0,0 +1,268 @@
+---
+title: "Table & SQL Connectors"
+nav-id: sql-connectors
+nav-parent_id: connectors-root
+nav-pos: 2
+nav-show_overview: true
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+Flink's Table API & SQL programs can be connected to other external systems for reading and writing both batch and streaming tables. A table source provides access to data which is stored in external systems (such as a database, key-value store, message queue, or file system). A table sink emits a table to an external storage system. Depending on the type of source and sink, they support different formats such as CSV, Avro, Parquet, or ORC.
+
+This page describes how to declare built-in table sources and/or table sinks and register them in Flink. After a source or sink has been registered, it can be accessed by Table API & SQL statements.
+
+<span class="label label-info">NOTE</span> If you want to implement your own *custom* table source or sink, have a look at the [user-defined sources & sinks page](sourceSinks.html).
+
+<span class="label label-danger">Attention</span> Flink Table & SQL introduces a new set of connector options since 1.11.0, if you are using the legacy connector options, please refer to the [legacy documentation]({{ site.baseurl }}/dev/table/connect.html).
+
+* This will be replaced by the TOC
+{:toc}
+
+Supported Connectors
+------------
+
+Flink natively support various connectors. The following tables list all available connectors.
+
+<table class="table table-bordered">
+    <thead>
+      <tr>
+        <th class="text-left">Name</th>
+        <th class="text-center">Version</th>
+        <th class="text-center">Source</th>
+        <th class="text-center">Sink</th>
+      </tr>
+    </thead>
+    <tbody>
+    <tr>
+      <td>Filesystem</td>
+      <td></td>
+      <td>Bounded and Unbounded Scan, Lookup</td>
+      <td>Streaming Sink, Batch Sink</td>
+    </tr>
+    <tr>
+      <td>Elasticsearch</td>
+      <td>6.x & 7.x</td>
+      <td>Not supported</td>
+      <td>Streaming Sink, Batch Sink</td>
+    </tr>
+    <tr>
+      <td>Apache Kafka</td>
+      <td>0.10+</td>
+      <td>Unbounded Scan</td>
+      <td>Streaming Sink, Batch Sink</td>
+    </tr>
+    <tr>
+      <td>JDBC</td>
+      <td></td>
+      <td>Bounded Scan, Lookup</td>
+      <td>Streaming Sink, Batch Sink</td>
+    </tr>
+    <tr>
+      <td><a href="{{ site.baseurl }}/dev/table/connectors/hbase.html">Apache HBase</a></td>
+      <td>1.4.x</td>
+      <td>Bounded Scan, Lookup</td>
+      <td>Streaming Sink, Batch Sink</td>
+    </tr>
+    </tbody>
+</table>
+
+{% top %}
+
+How to use connectors
+--------
+
+Flink supports to use SQL CREATE TABLE statement to register a table. One can define the name of the table, the schema of the table, the connector options for connecting to an external system.
+
+The following code shows a full example of how to connect to Kafka for reading Json records.
+
+<div class="codetabs" markdown="1">
+<div data-lang="SQL" markdown="1">
+{% highlight sql %}
+CREATE TABLE MyUserTable (
+  -- declare the schema of the table
+  `user` BIGINT,
+  message STRING,
+  ts TIMESTAMP,
+  proctime AS PROCTIME(), -- use computed column to define proctime attribute
+  WATERMARK FOR ts AS ts - INTERVAL '5' SECOND  -- use WATERMARK statement to define rowtime attribute
+) WITH (
+  -- declare the external system to connect to
+  'connector' = 'kafka',
+  'topic' = 'topic_name',
+  'scan.startup.mode' = 'earliest-offset',
+  'properties.bootstrap.servers' = 'localhost:9092',
+  'format' = 'json'   -- declare a format for this system
+)
+{% endhighlight %}
+</div>
+</div>
+
+In this ways the desired connection properties are converted into normalized, string-based key-value pairs. So-called [table factories](sourceSinks.html#define-a-tablefactory) create configured table sources, table sinks, and corresponding formats from the key-value pairs. All table factories that can be found via Java's [Service Provider Interfaces (SPI)](https://docs.oracle.com/javase/tutorial/sound/SPI-intro.html) are taken into account when searching for exactly-one matching table factory.
+

Review comment:
       `In this ways` -> `In these ways` or `In this way`

##########
File path: docs/dev/table/connectors/index.md
##########
@@ -0,0 +1,268 @@
+---
+title: "Table & SQL Connectors"
+nav-id: sql-connectors
+nav-parent_id: connectors-root
+nav-pos: 2
+nav-show_overview: true
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+Flink's Table API & SQL programs can be connected to other external systems for reading and writing both batch and streaming tables. A table source provides access to data which is stored in external systems (such as a database, key-value store, message queue, or file system). A table sink emits a table to an external storage system. Depending on the type of source and sink, they support different formats such as CSV, Avro, Parquet, or ORC.
+
+This page describes how to declare built-in table sources and/or table sinks and register them in Flink. After a source or sink has been registered, it can be accessed by Table API & SQL statements.
+
+<span class="label label-info">NOTE</span> If you want to implement your own *custom* table source or sink, have a look at the [user-defined sources & sinks page](sourceSinks.html).
+
+<span class="label label-danger">Attention</span> Flink Table & SQL introduces a new set of connector options since 1.11.0, if you are using the legacy connector options, please refer to the [legacy documentation]({{ site.baseurl }}/dev/table/connect.html).
+
+* This will be replaced by the TOC
+{:toc}
+
+Supported Connectors
+------------
+
+Flink natively support various connectors. The following tables list all available connectors.
+
+<table class="table table-bordered">
+    <thead>
+      <tr>
+        <th class="text-left">Name</th>
+        <th class="text-center">Version</th>
+        <th class="text-center">Source</th>
+        <th class="text-center">Sink</th>
+      </tr>
+    </thead>
+    <tbody>
+    <tr>
+      <td>Filesystem</td>
+      <td></td>
+      <td>Bounded and Unbounded Scan, Lookup</td>
+      <td>Streaming Sink, Batch Sink</td>
+    </tr>
+    <tr>
+      <td>Elasticsearch</td>
+      <td>6.x & 7.x</td>
+      <td>Not supported</td>
+      <td>Streaming Sink, Batch Sink</td>
+    </tr>
+    <tr>
+      <td>Apache Kafka</td>
+      <td>0.10+</td>
+      <td>Unbounded Scan</td>
+      <td>Streaming Sink, Batch Sink</td>
+    </tr>
+    <tr>
+      <td>JDBC</td>
+      <td></td>
+      <td>Bounded Scan, Lookup</td>
+      <td>Streaming Sink, Batch Sink</td>
+    </tr>
+    <tr>
+      <td><a href="{{ site.baseurl }}/dev/table/connectors/hbase.html">Apache HBase</a></td>
+      <td>1.4.x</td>
+      <td>Bounded Scan, Lookup</td>
+      <td>Streaming Sink, Batch Sink</td>
+    </tr>
+    </tbody>
+</table>
+
+{% top %}
+
+How to use connectors
+--------
+
+Flink supports to use SQL CREATE TABLE statement to register a table. One can define the name of the table, the schema of the table, the connector options for connecting to an external system.
+
+The following code shows a full example of how to connect to Kafka for reading Json records.
+
+<div class="codetabs" markdown="1">
+<div data-lang="SQL" markdown="1">
+{% highlight sql %}
+CREATE TABLE MyUserTable (
+  -- declare the schema of the table
+  `user` BIGINT,
+  message STRING,
+  ts TIMESTAMP,
+  proctime AS PROCTIME(), -- use computed column to define proctime attribute
+  WATERMARK FOR ts AS ts - INTERVAL '5' SECOND  -- use WATERMARK statement to define rowtime attribute
+) WITH (
+  -- declare the external system to connect to
+  'connector' = 'kafka',
+  'topic' = 'topic_name',
+  'scan.startup.mode' = 'earliest-offset',
+  'properties.bootstrap.servers' = 'localhost:9092',
+  'format' = 'json'   -- declare a format for this system
+)
+{% endhighlight %}
+</div>
+</div>
+
+In this ways the desired connection properties are converted into normalized, string-based key-value pairs. So-called [table factories](sourceSinks.html#define-a-tablefactory) create configured table sources, table sinks, and corresponding formats from the key-value pairs. All table factories that can be found via Java's [Service Provider Interfaces (SPI)](https://docs.oracle.com/javase/tutorial/sound/SPI-intro.html) are taken into account when searching for exactly-one matching table factory.
+

Review comment:
       Changing it to lower case seems better; the `normalized` is confusing.

##########
File path: docs/dev/table/connectors/hbase.md
##########
@@ -0,0 +1,291 @@
+---
+title: "HBase SQL Connector"
+nav-title: HBase
+nav-parent_id: sql-connectors
+nav-pos: 9
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<span class="label label-primary">Scan Source: Bounded</span>
+<span class="label label-primary">Lookup Source: Sync Mode</span>
+<span class="label label-primary">Sink: Batch</span>
+<span class="label label-primary">Sink: Streaming Upsert Mode</span>
+
+* This will be replaced by the TOC
+{:toc}
+
+The HBase connector allows for reading from and writing to an HBase cluster. This document describes how to setup the HBase Connector to run SQL queries against HBase.
+
+The connector can operate in upsert mode for exchange changelog messages with the external system using a primary key defined on the DDL. But the primary key can only be defined on the HBase rowkey field. If the PRIMARY KEY clause is not declared, the HBase connector will take rowkey as the primary key by default.
+
+<span class="label label-danger">Attention</span> HBase as a Lookup Source does not use any caching; data is always queried directly through the HBase client.
+
+Dependencies
+------------
+
+In order to setup the HBase connector, the following table provide dependency information for both projects using a build automation tool (such as Maven or SBT) and SQL Client with SQL JAR bundles.
+
+{% if site.is_stable %}
+
+| HBase Version       | Maven dependency                                          | SQL Client JAR         |
+| :------------------ | :-------------------------------------------------------- | :----------------------|
+| 1.4.x               | `flink-connector-hbase{{site.scala_version_suffix}}`     | [Download](https://repo.maven.apache.org/maven2/org/apache/flink/flink-connector-hbase{{site.scala_version_suffix}}/{{site.version}}/flink-connector-hbase{{site.scala_version_suffix}}-{{site.version}}.jar) |
+
+{% else %}
+
+The dependency table is only available for stable releases.
+
+{% endif %}
+
+How to create an HBase table
+----------------
+
+All the column families in HBase table must be declared as ROW type, the field name maps to the column family name, and the nested field names map to the column qualifier names. There is no need to declare all the families and qualifiers in the schema, users can declare what’s necessary. Except the ROW type fields, the only one field of atomic type (e.g. STRING, BIGINT) will be recognized as HBase rowkey. The rowkey field can be arbitrary name.
+
+<div class="codetabs" markdown="1">
+<div data-lang="SQL" markdown="1">
+{% highlight sql %}
+CREATE TABLE hTable (
+ rowkey INT,
+ family1 ROW<q1 INT>,
+ family2 ROW<q2 STRING, q3 BIGINT>,
+ family3 ROW<q4 DOUBLE, q5 BOOLEAN, q6 STRING>,
+ PRIMARY KEY (rowkey) NOT ENFORCED
+) WITH (
+ 'connector' = 'hbase-1.4',
+ 'table-name' = 'mytable',
+ 'zookeeper.quorum' = 'localhost:2121'
+)
+{% endhighlight %}
+</div>
+</div>
+
+Connector Options
+----------------
+
+<table class="table table-bordered">
+    <thead>
+      <tr>
+        <th class="text-left" style="width: 25%">Option</th>
+        <th class="text-center" style="width: 8%">Required</th>
+        <th class="text-center" style="width: 7%">Default</th>
+        <th class="text-center" style="width: 10%">Type</th>
+        <th class="text-center" style="width: 50%">Description</th>
+      </tr>
+    </thead>
+    <tbody>
+    <tr>
+      <td><h5>connector</h5></td>
+      <td>required</td>
+      <td style="word-wrap: break-word;">(none)</td>
+      <td>String</td>
+      <td>Specify what connector to use, here should be 'hbase-1.4'.</td>
+    </tr>
+    <tr>
+      <td><h5>table-name</h5></td>
+      <td>required</td>
+      <td style="word-wrap: break-word;">(none)</td>
+      <td>String</td>
+      <td>The name of HBase table to connect.</td>
+    </tr>
+    <tr>
+      <td><h5>zookeeper.quorum</h5></td>
+      <td>required</td>
+      <td style="word-wrap: break-word;">(none)</td>
+      <td>String</td>
+      <td>The HBase Zookeeper quorum.</td>
+    </tr>
+    <tr>
+      <td><h5>zookeeper.znode.parent</h5></td>
+      <td>optional</td>
+      <td style="word-wrap: break-word;">/hbase</td>
+      <td>String</td>
+      <td>The root dir in Zookeeper for HBase cluster</td>
+    </tr>
+    <tr>
+      <td><h5>null-string-literal</h5></td>
+      <td>optional</td>
+      <td style="word-wrap: break-word;">null</td>
+      <td>String</td>
+      <td>Representation for null values for string fields. HBase source and sink encodes/decodes empty bytes as null values for all types except string type.</td>
+    </tr>
+    <tr>
+      <td><h5>sink.buffer-flush.max-size</h5></td>
+      <td>optional</td>
+      <td style="word-wrap: break-word;">2mb</td>
+      <td>MemorySize</td>
+      <td>Writing option, determines how many size in memory of buffered rows to insert per round trip.
+      This can improve performance for writing data to HBase database, but may increase the latency.

Review comment:
       The latest change looks good.







[GitHub] [flink] wuchong commented on a change in pull request #12386: [FLINK-17995][docs][connectors] Redesign Table & SQL Connectors page and add HBase connector documentation

Posted by GitBox <gi...@apache.org>.
wuchong commented on a change in pull request #12386:
URL: https://github.com/apache/flink/pull/12386#discussion_r433604088



##########
File path: docs/dev/table/connectors/hbase.md
##########
@@ -0,0 +1,291 @@
+---
+title: "HBase SQL Connector"
+nav-title: HBase
+nav-parent_id: sql-connectors
+nav-pos: 9
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<span class="label label-primary">Scan Source: Bounded</span>
+<span class="label label-primary">Lookup Source: Sync Mode</span>
+<span class="label label-primary">Sink: Batch</span>
+<span class="label label-primary">Sink: Streaming Upsert Mode</span>
+
+* This will be replaced by the TOC
+{:toc}
+
+The HBase connector allows for reading from and writing to an HBase cluster. This document describes how to setup the HBase Connector to run SQL queries against HBase.
+
+The connector can operate in upsert mode for exchange changelog messages with the external system using a primary key defined on the DDL. But the primary key can only be defined on the HBase rowkey field. If the PRIMARY KEY clause is not declared, the HBase connector will take rowkey as the primary key by default.
+
+<span class="label label-danger">Attention</span> HBase as a Lookup Source does not use any caching; data is always queried directly through the HBase client.
+
+Dependencies
+------------
+
+In order to setup the HBase connector, the following table provide dependency information for both projects using a build automation tool (such as Maven or SBT) and SQL Client with SQL JAR bundles.
+
+{% if site.is_stable %}
+
+| HBase Version       | Maven dependency                                          | SQL Client JAR         |
+| :------------------ | :-------------------------------------------------------- | :----------------------|
+| 1.4.x               | `flink-connector-hbase{{site.scala_version_suffix}}`     | [Download](https://repo.maven.apache.org/maven2/org/apache/flink/flink-connector-hbase{{site.scala_version_suffix}}/{{site.version}}/flink-connector-hbase{{site.scala_version_suffix}}-{{site.version}}.jar) |
+
+{% else %}
+
+The dependency table is only available for stable releases.
+
+{% endif %}
+
+How to create an HBase table
+----------------
+
+All the column families in HBase table must be declared as ROW type, the field name maps to the column family name, and the nested field names map to the column qualifier names. There is no need to declare all the families and qualifiers in the schema, users can declare what’s necessary. Except the ROW type fields, the only one field of atomic type (e.g. STRING, BIGINT) will be recognized as HBase rowkey. The rowkey field can be arbitrary name.
+
+<div class="codetabs" markdown="1">
+<div data-lang="SQL" markdown="1">
+{% highlight sql %}
+CREATE TABLE hTable (
+ rowkey INT,
+ family1 ROW<q1 INT>,
+ family2 ROW<q2 STRING, q3 BIGINT>,
+ family3 ROW<q4 DOUBLE, q5 BOOLEAN, q6 STRING>,
+ PRIMARY KEY (rowkey) NOT ENFORCED
+) WITH (
+ 'connector' = 'hbase-1.4',
+ 'table-name' = 'mytable',
+ 'zookeeper.quorum' = 'localhost:2121'
+)
+{% endhighlight %}
+</div>
+</div>
+
+Connector Options
+----------------
+
+<table class="table table-bordered">
+    <thead>
+      <tr>
+        <th class="text-left" style="width: 25%">Option</th>
+        <th class="text-center" style="width: 8%">Required</th>
+        <th class="text-center" style="width: 7%">Default</th>
+        <th class="text-center" style="width: 10%">Type</th>
+        <th class="text-center" style="width: 50%">Description</th>
+      </tr>
+    </thead>
+    <tbody>
+    <tr>
+      <td><h5>connector</h5></td>
+      <td>required</td>
+      <td style="word-wrap: break-word;">(none)</td>
+      <td>String</td>
+      <td>Specify what connector to use, here should be 'hbase-1.4'.</td>
+    </tr>
+    <tr>
+      <td><h5>table-name</h5></td>
+      <td>required</td>
+      <td style="word-wrap: break-word;">(none)</td>
+      <td>String</td>
+      <td>The name of HBase table to connect.</td>
+    </tr>
+    <tr>
+      <td><h5>zookeeper.quorum</h5></td>
+      <td>required</td>
+      <td style="word-wrap: break-word;">(none)</td>
+      <td>String</td>
+      <td>The HBase Zookeeper quorum.</td>
+    </tr>
+    <tr>
+      <td><h5>zookeeper.znode.parent</h5></td>
+      <td>optional</td>
+      <td style="word-wrap: break-word;">/hbase</td>
+      <td>String</td>
+      <td>The root dir in Zookeeper for HBase cluster</td>
+    </tr>
+    <tr>
+      <td><h5>null-string-literal</h5></td>
+      <td>optional</td>
+      <td style="word-wrap: break-word;">null</td>
+      <td>String</td>
+      <td>Representation for null values for string fields. HBase source and sink encodes/decodes empty bytes as null values for all types except string type.</td>
+    </tr>
+    <tr>
+      <td><h5>sink.buffer-flush.max-size</h5></td>
+      <td>optional</td>
+      <td style="word-wrap: break-word;">2mb</td>
+      <td>MemorySize</td>
+      <td>Writing option, determines how many size in memory of buffered rows to insert per round trip.
+      This can improve performance for writing data to HBase database, but may increase the latency.

Review comment:
       I changed this to `maximum size in memory of buffered rows for each writing request`. I think we should make the `size in memory` more visible to users. What do you think? 

##########
File path: docs/dev/table/connectors/hbase.md
##########
@@ -0,0 +1,291 @@
+---
+title: "HBase SQL Connector"
+nav-title: HBase
+nav-parent_id: sql-connectors
+nav-pos: 9
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<span class="label label-primary">Scan Source: Bounded</span>
+<span class="label label-primary">Lookup Source: Sync Mode</span>
+<span class="label label-primary">Sink: Batch</span>
+<span class="label label-primary">Sink: Streaming Upsert Mode</span>
+
+* This will be replaced by the TOC
+{:toc}
+
+The HBase connector allows for reading from and writing to an HBase cluster. This document describes how to setup the HBase Connector to run SQL queries against HBase.
+
+The connector can operate in upsert mode for exchange changelog messages with the external system using a primary key defined on the DDL. But the primary key can only be defined on the HBase rowkey field. If the PRIMARY KEY clause is not declared, the HBase connector will take rowkey as the primary key by default.
+
+<span class="label label-danger">Attention</span> HBase as a Lookup Source does not use any caching; data is always queried directly through the HBase client.
+
+Dependencies
+------------
+
+In order to setup the HBase connector, the following table provide dependency information for both projects using a build automation tool (such as Maven or SBT) and SQL Client with SQL JAR bundles.
+
+{% if site.is_stable %}
+
+| HBase Version       | Maven dependency                                          | SQL Client JAR         |
+| :------------------ | :-------------------------------------------------------- | :----------------------|
+| 1.4.x               | `flink-connector-hbase{{site.scala_version_suffix}}`     | [Download](https://repo.maven.apache.org/maven2/org/apache/flink/flink-connector-hbase{{site.scala_version_suffix}}/{{site.version}}/flink-connector-hbase{{site.scala_version_suffix}}-{{site.version}}.jar) |
+
+{% else %}
+
+The dependency table is only available for stable releases.
+
+{% endif %}
+
+How to create an HBase table
+----------------
+
+All the column families in HBase table must be declared as ROW type, the field name maps to the column family name, and the nested field names map to the column qualifier names. There is no need to declare all the families and qualifiers in the schema, users can declare what’s necessary. Except the ROW type fields, the only one field of atomic type (e.g. STRING, BIGINT) will be recognized as HBase rowkey. The rowkey field can be arbitrary name.
+
+<div class="codetabs" markdown="1">
+<div data-lang="SQL" markdown="1">
+{% highlight sql %}
+CREATE TABLE hTable (
+ rowkey INT,
+ family1 ROW<q1 INT>,
+ family2 ROW<q2 STRING, q3 BIGINT>,
+ family3 ROW<q4 DOUBLE, q5 BOOLEAN, q6 STRING>,
+ PRIMARY KEY (rowkey) NOT ENFORCED
+) WITH (
+ 'connector' = 'hbase-1.4',
+ 'table-name' = 'mytable',
+ 'zookeeper.quorum' = 'localhost:2121'
+)
+{% endhighlight %}
+</div>
+</div>
+
+Connector Options
+----------------
+
+<table class="table table-bordered">
+    <thead>
+      <tr>
+        <th class="text-left" style="width: 25%">Option</th>
+        <th class="text-center" style="width: 8%">Required</th>
+        <th class="text-center" style="width: 7%">Default</th>
+        <th class="text-center" style="width: 10%">Type</th>
+        <th class="text-center" style="width: 50%">Description</th>
+      </tr>
+    </thead>
+    <tbody>
+    <tr>
+      <td><h5>connector</h5></td>
+      <td>required</td>
+      <td style="word-wrap: break-word;">(none)</td>
+      <td>String</td>
+      <td>Specify what connector to use; here it should be 'hbase-1.4'.</td>
+    </tr>
+    <tr>
+      <td><h5>table-name</h5></td>
+      <td>required</td>
+      <td style="word-wrap: break-word;">(none)</td>
+      <td>String</td>
+      <td>The name of the HBase table to connect to.</td>
+    </tr>
+    <tr>
+      <td><h5>zookeeper.quorum</h5></td>
+      <td>required</td>
+      <td style="word-wrap: break-word;">(none)</td>
+      <td>String</td>
+      <td>The HBase Zookeeper quorum.</td>
+    </tr>
+    <tr>
+      <td><h5>zookeeper.znode.parent</h5></td>
+      <td>optional</td>
+      <td style="word-wrap: break-word;">/hbase</td>
+      <td>String</td>
+      <td>The root directory in ZooKeeper for the HBase cluster.</td>
+    </tr>
+    <tr>
+      <td><h5>null-string-literal</h5></td>
+      <td>optional</td>
+      <td style="word-wrap: break-word;">null</td>
+      <td>String</td>
+      <td>Representation for null values for string fields. The HBase source and sink encode/decode empty bytes as null values for all types except the string type.</td>
+    </tr>
+    <tr>
+      <td><h5>sink.buffer-flush.max-size</h5></td>
+      <td>optional</td>
+      <td style="word-wrap: break-word;">2mb</td>
+      <td>MemorySize</td>
+      <td>Writing option, maximum size in memory of buffered rows for each writing request.
+      This can improve performance for writing data to the HBase database, but may increase the latency.
+      </td>
+    </tr>
+    <tr>
+      <td><h5>sink.buffer-flush.max-rows</h5></td>
+      <td>optional</td>
+      <td style="word-wrap: break-word;">(none)</td>
+      <td>Integer</td>
+      <td>Writing option, maximum number of rows to buffer for each writing request.
+      This can improve performance for writing data to the HBase database, but may increase the latency.

Review comment:
       I changed this to `maximum number of rows to buffer for each writing request`.






[GitHub] [flink] wuchong commented on a change in pull request #12386: [FLINK-17995][docs][connectors] Redesign Table & SQL Connectors page and add HBase connector documentation

Posted by GitBox <gi...@apache.org>.
wuchong commented on a change in pull request #12386:
URL: https://github.com/apache/flink/pull/12386#discussion_r433601307



##########
File path: docs/dev/table/connectors/hbase.md
##########
@@ -0,0 +1,291 @@
+---
+title: "HBase SQL Connector"
+nav-title: HBase
+nav-parent_id: sql-connectors
+nav-pos: 9
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<span class="label label-primary">Scan Source: Bounded</span>
+<span class="label label-primary">Lookup Source: Sync Mode</span>
+<span class="label label-primary">Sink: Batch</span>

Review comment:
       In the new set of APIs, it is called `LookupTableSource` and `ScanTableSource`.



