Posted to commits@inlong.apache.org by do...@apache.org on 2022/04/16 14:17:06 UTC
[incubator-inlong-website] 01/01: [INLONG-3767][Doc] Add supported data nodes guide for InLong
This is an automated email from the ASF dual-hosted git repository.
dockerzhang pushed a commit to branch fix-3767
in repository https://gitbox.apache.org/repos/asf/incubator-inlong-website.git
commit 289511ab0c5d81ab921588f319cad98b6c876817
Author: dockerzhang <do...@apache.org>
AuthorDate: Sat Apr 16 22:16:55 2022 +0800
[INLONG-3767][Doc] Add supported data nodes guide for InLong
---
docs/introduction.md | 13 +++++++++++++
docs/modules/manager/quick_start.md | 2 ++
docs/modules/sort/quick_start.md | 4 ++--
.../current/introduction.md | 17 ++++++++++++++++-
.../current/modules/manager/quick_start.md | 2 ++
.../current/modules/sort/quick_start.md | 4 ++--
6 files changed, 37 insertions(+), 5 deletions(-)
diff --git a/docs/introduction.md b/docs/introduction.md
index e1444b7e6..8c339ec81 100644
--- a/docs/introduction.md
+++ b/docs/introduction.md
@@ -50,3 +50,16 @@ Apache InLong serves the entire life cycle from data collection to landing, and
- **inlong-manager**, provides complete data service management and control capabilities, including metadata, OpenAPI, task flow, authority, etc.
- **inlong-dashboard**, a front-end page for managing data access, simplifying the use of the entire InLong control platform.
- **inlong-audit**, performs real-time audit and reconciliation on the incoming and outgoing traffic of the Agent, DataProxy, and Sort modules of the InLong system.
+
+## Supported Data Nodes (Updating)
+| Type | Name | Version | Other |
+|--------------|------------------|--------------|-------------------------------------------------------------------------------------------------------------------|
+| Extract Node | Auto Push | None | Send data using the [SDK](https://inlong.apache.org/docs/next/sdk/dataproxy-sdk/example) |
+| | File | None | CSV, Key-Value, JSON, Avro |
+| | Kafka | 2.x | Canal JSON |
+| | MySQL | 5.x, 8.x | Debezium JSON |
+| Load Node | Auto Consumption | None | Consume messages with the MQ SDK, then [parse InLongMsg](https://inlong.apache.org/docs/next/development/inlong_msg) |
+| | Hive | 2.x | TextFile, SequenceFile, OrcFile, Parquet, Avro |
+| | Iceberg | 0.12.x | Parquet, Orc, Avro |
+| | ClickHouse | v20+ | Canal JSON |
+| | Kafka | 2.x | JSON, Canal, Avro |
\ No newline at end of file
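To make the table above concrete, a minimal Debezium JSON change event (the format the MySQL extract node consumes, per the table) looks roughly like this; the field values are illustrative, not taken from the commit:

```
{
  "before": null,
  "after": { "id": 1, "name": "tom" },
  "source": { "db": "test_db", "table": "user" },
  "op": "c",
  "ts_ms": 1650118625000
}
```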
diff --git a/docs/modules/manager/quick_start.md b/docs/modules/manager/quick_start.md
index edc932809..e02f01cd2 100644
--- a/docs/modules/manager/quick_start.md
+++ b/docs/modules/manager/quick_start.md
@@ -45,6 +45,8 @@ flink.rest.address=127.0.0.1
flink.rest.port=8081
# Flink jobmanager port
flink.jobmanager.port=6123
+# InLong Audit Proxy Address
+metrics.audit.proxy.hosts=127.0.0.1:10081
```
## Start
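For context, after this change the Flink section of the Manager configuration would read as below. This is a sketch: the surrounding keys are taken from the hunk above, and the comment on the new key is an assumption (10081 is the audit proxy's default listen port); adjust the addresses for your deployment.

```
# Flink REST endpoint used by Manager to submit Sort jobs
flink.rest.address=127.0.0.1
flink.rest.port=8081
# Flink jobmanager port
flink.jobmanager.port=6123
# InLong Audit Proxy Address (host:port; 10081 is the audit proxy default)
metrics.audit.proxy.hosts=127.0.0.1:10081
```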
diff --git a/docs/modules/sort/quick_start.md b/docs/modules/sort/quick_start.md
index 3ebe017e0..700ae6bee 100644
--- a/docs/modules/sort/quick_start.md
+++ b/docs/modules/sort/quick_start.md
@@ -19,7 +19,7 @@ Now you can submit job to flink with the jar compiled, refer to [how to submit j
Example:
```
-./bin/flink run -c org.apache.inlong.sort.flink.Entrance inlong-sort/sort-[version].jar \
+./bin/flink run -c org.apache.inlong.sort.singletenant.flink.Entrance inlong-sort/sort-[version].jar \
--cluster-id debezium2hive --dataflow.info.file /YOUR_DATAFLOW_INFO_DIR/debezium-to-hive.json \
--source.type pulsar --sink.type hive --sink.hive.rolling-policy.rollover-interval 60000 \
--metrics.audit.proxy.hosts 127.0.0.1:10081 --sink.hive.rolling-policy.check-interval 30000
@@ -27,7 +27,7 @@ Example:
Notice:
-- `-c org.apache.inlong.sort.flink.Entrance` is the main class name
+- `-c org.apache.inlong.sort.singletenant.flink.Entrance` is the main class name
- `inlong-sort/sort-[version].jar` is the compiled jar
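The corrected submit command above can be wrapped in a small script so the main class and jar version are set in one place. This is only a sketch: the jar version `0.12.0` and the dataflow path are placeholders for your environment, and the script prints the command for review instead of running it.

```shell
#!/bin/sh
# Assemble the Sort job submit command from the example above.
# SORT_JAR version and DATAFLOW path are placeholders -- replace with yours.
MAIN_CLASS=org.apache.inlong.sort.singletenant.flink.Entrance
SORT_JAR=inlong-sort/sort-0.12.0.jar
DATAFLOW=/YOUR_DATAFLOW_INFO_DIR/debezium-to-hive.json

CMD="./bin/flink run -c $MAIN_CLASS $SORT_JAR \
--cluster-id debezium2hive --dataflow.info.file $DATAFLOW \
--source.type pulsar --sink.type hive \
--sink.hive.rolling-policy.rollover-interval 60000 \
--sink.hive.rolling-policy.check-interval 30000 \
--metrics.audit.proxy.hosts 127.0.0.1:10081"

# Print rather than execute, so the full command can be reviewed first.
echo "$CMD"
```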
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/introduction.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/introduction.md
index 8c8a59a1b..d2545a237 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/introduction.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/introduction.md
@@ -43,4 +43,19 @@ Apache InLong 服务于数据采集到落地的整个生命周期,按数据的
- **inlong-sort**,对从不同的 MQ 消费到的数据进行 ETL 处理,然后汇聚并写入 Hive、ClickHouse、Hbase、Iceberg 等存储系统。
- **inlong-manager**,提供完整的数据服务管控能力,包括元数据、任务流、权限,OpenAPI 等。
- **inlong-dashboard**,用于管理数据接入的前端页面,简化整个 InLong 管控平台的使用。
-- **inlong-audit**,对InLong系统的Agent、DataProxy、Sort模块的入流量、出流量进行实时审计对账。
\ No newline at end of file
+- **inlong-audit**,对InLong系统的Agent、DataProxy、Sort模块的入流量、出流量进行实时审计对账。
+
+## 已支持数据节点(更新中)
+| 类型 | 名称 | 版本 | 备注 |
+|--------------|---------------|--------------|---------------------------------------------------------------------------------------------------------------|
+| Extract Node | 自主推送 | 无 | 使用 [SDK](https://inlong.apache.org/zh-CN/docs/next/sdk/dataproxy-sdk/example) 发送 |
+| | File | 无 | CSV, Key-Value, JSON, Avro |
+| | Kafka | 2.x | Canal JSON |
+| | MySQL | 5.x, 8.x | Debezium JSON |
+| Load Node | 自主消费 | 无 | 使用 MQ SDK 消费后再[解析 InLongMsg](https://inlong.apache.org/zh-CN/docs/next/development/inlong_msg) |
+| | Hive | 2.x | TextFile, SequenceFile, OrcFile, Parquet, Avro |
+| | Iceberg | 0.12.x | Parquet, Orc, Avro |
+| | ClickHouse | v20+ | Canal JSON |
+| | Kafka | 2.x | JSON, Canal, Avro |
+
+
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/modules/manager/quick_start.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/modules/manager/quick_start.md
index cf2362f42..568d8c1f8 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/modules/manager/quick_start.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/modules/manager/quick_start.md
@@ -44,6 +44,8 @@ flink.rest.address=127.0.0.1
flink.rest.port=8081
# Flink jobmanager port
flink.jobmanager.port=6123
+# InLong Audit Proxy Address
+metrics.audit.proxy.hosts=127.0.0.1:10081
```
## 启动
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/modules/sort/quick_start.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/modules/sort/quick_start.md
index 0e501049c..323397a92 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/modules/sort/quick_start.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/modules/sort/quick_start.md
@@ -18,7 +18,7 @@ flink环境配置完成后,可以通过浏览器访问flink的web ui,对应
示例:
```
-./bin/flink run -c org.apache.inlong.sort.flink.Entrance inlong-sort/sort-[version].jar \
+./bin/flink run -c org.apache.inlong.sort.singletenant.flink.Entrance inlong-sort/sort-[version].jar \
--cluster-id debezium2hive --dataflow.info.file /YOUR_DATAFLOW_INFO_DIR/debezium-to-hive.json \
--source.type pulsar --sink.type hive --sink.hive.rolling-policy.rollover-interval 60000 \
--metrics.audit.proxy.hosts 127.0.0.1:10081 --sink.hive.rolling-policy.check-interval 30000
@@ -26,7 +26,7 @@ flink环境配置完成后,可以通过浏览器访问flink的web ui,对应
注意:
-- `-c org.apache.inlong.sort.flink.Entrance` 表示main class name
+- `-c org.apache.inlong.sort.singletenant.flink.Entrance` 表示main class name
- `inlong-sort/sort-[version].jar` 为编译阶段产出的jar包