Posted to commits@seatunnel.apache.org by ki...@apache.org on 2022/02/17 02:38:27 UTC

[incubator-seatunnel] branch dev updated: [Feature][README] Modify the document address in README.md and README_zh_CN.md (#1268)

This is an automated email from the ASF dual-hosted git repository.

kirs pushed a commit to branch dev
in repository https://gitbox.apache.org/repos/asf/incubator-seatunnel.git


The following commit(s) were added to refs/heads/dev by this push:
     new c10310b  [Feature][README] Modify the document address in README.md and README_zh_CN.md (#1268)
c10310b is described below

commit c10310b3d56ea7117e1e1d6e881c35f4f27acb41
Author: wuchunfu <31...@qq.com>
AuthorDate: Thu Feb 17 10:38:19 2022 +0800

    [Feature][README] Modify the document address in README.md and README_zh_CN.md (#1268)
---
 README.md       | 15 +++++++++++----
 README_zh_CN.md | 17 ++++++++++-------
 2 files changed, 21 insertions(+), 11 deletions(-)
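For context, the Source → Transform → Sink terminology this commit introduces maps onto the three pipeline sections of a SeaTunnel job configuration. A minimal sketch of such a config follows; the plugin names (FakeSource, sql, Console) are illustrative and may differ across SeaTunnel versions:

```
env {
  # execution settings for the underlying engine (Spark or Flink)
  execution.parallelism = 1
}

source {
  # Source[Data Source Input]: a hypothetical plugin that generates test rows
  FakeSource {
    result_table_name = "fake"
  }
}

transform {
  # Transform[Data Processing]: process the registered table with SQL
  sql {
    sql = "select name, age from fake"
  }
}

sink {
  # Sink[Result Output]: print the results to stdout
  Console {}
}
```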

diff --git a/README.md b/README.md
index bbeb8e9..b6ee206 100644
--- a/README.md
+++ b/README.md
@@ -51,7 +51,9 @@ SeaTunnel will do its best to solve the problems that may be encountered in the
 
 ![seatunnel-workflow.svg](https://github.com/apache/incubator-seatunnel-website/blob/main/static/image/seatunnel-workflow.svg)
 
-Input[Data Source Input] -> Filter[Data Processing] -> Output[Result Output]
+```
+Source[Data Source Input] -> Transform[Data Processing] -> Sink[Result Output]
+```
 
 The data processing pipeline is composed of multiple filters to meet a wide variety of data processing needs. If you are
 accustomed to SQL, you can also construct a data processing pipeline directly in SQL, which is simple and efficient.
@@ -86,9 +88,14 @@ Download address for run-directly software package :https://github.com/apache/in
 
 ## Quick start
 
-Quick start: https://interestinglab.github.io/seatunnel-docs/#/zh-cn/v1/quick-start
+**Spark**
+https://seatunnel.apache.org/docs/spark/quick-start
 
-Detailed documentation on SeaTunnel:https://interestinglab.github.io/seatunnel-docs/#/
+**Flink**
+https://seatunnel.apache.org/docs/flink/quick-start
+
+Detailed documentation on SeaTunnel
+https://seatunnel.apache.org/docs/introduction
 
 ## Application practice cases
 
@@ -127,7 +134,7 @@ volume average daily, and later writing the data to Clickhouse.
 
 Various logs from business services are collected into Apache Kafka; some of the data in Apache Kafka is consumed and extracted through SeaTunnel and then stored into Clickhouse.
 
-For more use cases, please refer to: https://interestinglab.github.io/seatunnel-docs/#/zh-cn/case_study/
+For more use cases, please refer to: https://seatunnel.apache.org/blog
 
 # Code of conduct
 
diff --git a/README_zh_CN.md b/README_zh_CN.md
index f795477..9bc049a 100644
--- a/README_zh_CN.md
+++ b/README_zh_CN.md
@@ -15,7 +15,6 @@
 ---
 
 SeaTunnel is a very easy-to-use, high-performance distributed data integration platform that supports real-time synchronization of massive data. It can stably and efficiently synchronize tens of billions of records per day, and is already used in production by nearly one hundred companies.
----
 
 ## Why we need SeaTunnel
 
@@ -51,11 +50,11 @@ SeaTunnel does its best to solve the problems you may encounter in massive data synchronization:
 ![seatunnel-workflow.svg](https://github.com/apache/incubator-seatunnel-website/blob/main/static/image/seatunnel-workflow.svg)
 
 ```
-                         Input[Data Source Input] -> Filter[Data Processing] -> Output[Result Output]
+Source[Data Source Input] -> Transform[Data Processing] -> Sink[Result Output]
 ```
 
-Multiple Filters make up the data processing Pipeline, meeting a wide variety of data processing needs. If you are familiar with SQL, you can also build the Pipeline directly in SQL, which is simple and efficient. Currently, the
-[list of Filters](https://interestinglab.github.io/seatunnel-docs/#/zh-cn/v1/configuration/filter-plugin) supported by seatunnel
+Multiple Transforms make up the data processing Pipeline, meeting a wide variety of data processing needs. If you are familiar with SQL, you can also build the Pipeline directly in SQL, which is simple and efficient. Currently, the
+[list of Transforms](https://seatunnel.apache.org/docs/spark/configuration/transform-plugins/transform-plugin) supported by seatunnel
 is still growing. You can also develop your own data processing plugins; the whole system is easy to extend.
 
 ## Plugins supported by SeaTunnel
@@ -90,9 +89,13 @@ Elasticsearch, File, Hdfs, Jdbc, Kafka, Druid, Mysql, S3, Stdout, self-developed
 
 ## Quick start
 
-Quick start: https://interestinglab.github.io/seatunnel-docs/#/zh-cn/v1/quick-start
+**Spark**
+https://seatunnel.apache.org/docs/spark/quick-start
+
+**Flink**
+https://seatunnel.apache.org/docs/flink/quick-start
 
-[Detailed documentation](https://interestinglab.github.io/seatunnel-docs/) on SeaTunnel
+[Detailed documentation](https://seatunnel.apache.org/docs/introduction) on SeaTunnel
 
 ## Production use cases
 
@@ -111,7 +114,7 @@ Elasticsearch, File, Hdfs, Jdbc, Kafka, Druid, Mysql, S3, Stdout, self-developed
 
 * 水滴筹, data platform: 水滴筹 uses SeaTunnel on Yarn for real-time streaming as well as scheduled offline batch processing, handling 3~4 TB of data every day and finally writing the data to Clickhouse.
 
-For more use cases, see: https://interestinglab.github.io/seatunnel-docs/#/zh-cn/v1/case_study/
+For more use cases, see: https://seatunnel.apache.org/blog
 
 ## Code of conduct