Posted to issues@flink.apache.org by "loserwang1024 (via GitHub)" <gi...@apache.org> on 2024/03/20 07:58:46 UTC

[PR] [FLINK-34741][docs] Translate "get-started" Page for Flink CDC Chinese Documentation. [flink-cdc]

loserwang1024 opened a new pull request, #3175:
URL: https://github.com/apache/flink-cdc/pull/3175

   Translate the https://github.com/apache/flink-cdc/tree/master/docs/content/docs/get-started pages into Chinese.
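   The translated pages live under docs/content.zh, and the `{{< img ... >}}` shortcodes quoted in the review below are Hugo shortcodes, so the docs site is Hugo-based. Assuming a standard Hugo layout rooted at docs/ (an assumption, not the project's documented build procedure), a local preview of the translation might be served with:

   ```shell
   # Sketch only: assumes docs/ is a self-contained Hugo site with its theme available.
   cd docs
   hugo server -p 1313   # then browse http://localhost:1313/
   ```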


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


Re: [PR] [FLINK-34741][docs] Translate "get-started" Page for Flink CDC Chinese Documentation. [flink-cdc]

Posted by "lvyanquan (via GitHub)" <gi...@apache.org>.
lvyanquan commented on code in PR #3175:
URL: https://github.com/apache/flink-cdc/pull/3175#discussion_r1531698108


##########
docs/content.zh/docs/get-started/quickstart/mysql-to-starrocks.md:
##########
@@ -24,138 +24,135 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# Streaming ELT from MySQL to StarRocks
+# Streaming ELT 同步 MySQL 到 StarRocks
 
-This tutorial is to show how to quickly build a Streaming ELT job from MySQL to StarRocks using Flink CDC, including the
-feature of sync all table of one database, schema change evolution and sync sharding tables into one table.  
-All exercises in this tutorial are performed in the Flink CDC CLI, and the entire process uses standard SQL syntax,
-without a single line of Java/Scala code or IDE installation.
+这篇教程将展示如何基于 Flink CDC 快速构建 MySQL 到 StarRocks 的 Streaming ELT 作业,包含整库同步、表结构变更同步和分库分表同步的功能。  
+本教程的演示都将在 Flink CDC CLI 中进行,无需一行 Java/Scala 代码,也无需安装 IDE。
 
-## Preparation
-Prepare a Linux or MacOS computer with Docker installed.
+## 准备阶段
+准备一台已经安装了 Docker 的 Linux 或者 MacOS 电脑。
 
-### Prepare Flink Standalone cluster
-1. Download [Flink 1.18.0](https://archive.apache.org/dist/flink/flink-1.18.0/flink-1.18.0-bin-scala_2.12.tgz) ,unzip and get flink-1.18.0 directory.   
-   Use the following command to navigate to the Flink directory and set FLINK_HOME to the directory where flink-1.18.0 is located.
+### 准备 Flink Standalone 集群
+1. 下载 [Flink 1.18.0](https://archive.apache.org/dist/flink/flink-1.18.0/flink-1.18.0-bin-scala_2.12.tgz) ,解压后得到 flink-1.18.0 目录。   
+   使用下面的命令跳转至 Flink 目录下,并且设置 FLINK_HOME 为 flink-1.18.0 所在目录。
 
    ```shell
    cd flink-1.18.0
    ```
 
-2. Enable checkpointing by appending the following parameters to the conf/flink-conf.yaml configuration file to perform a checkpoint every 3 seconds.
+2. 通过在 conf/flink-conf.yaml 配置文件追加下列参数开启 checkpoint,每隔 3 秒做一次 checkpoint。
 
    ```yaml
    execution.checkpointing.interval: 3000
    ```
 
-3. Start the Flink cluster using the following command.
+3. 使用下面的命令启动 Flink 集群。
 
    ```shell
    ./bin/start-cluster.sh
    ```  
 
-If successfully started, you can access the Flink Web UI at [http://localhost:8081/](http://localhost:8081/), as shown below.
+启动成功的话,可以在 [http://localhost:8081/](http://localhost:8081/) 访问到 Flink Web UI,如下所示:
 
 {{< img src="/fig/mysql-starrocks-tutorial/flink-ui.png" alt="Flink UI" >}}
 
-Executing `start-cluster.sh` multiple times can start multiple `TaskManager`s.
+多次执行 start-cluster.sh 可以拉起多个 TaskManager。
 
-### Prepare docker compose
-The following tutorial will prepare the required components using `docker-compose`.
-Create a `docker-compose.yml` file using the content provided below:
+### 准备 Docker 环境
+使用下面的内容创建一个 `docker-compose.yml` 文件:
 
    ```yaml
    version: '2.1'
    services:
-      StarRocks:
-         image: registry.starrocks.io/starrocks/allin1-ubuntu
-         ports:
-            - "8030:8030"
-            - "8040:8040"
-            - "9030:9030"
-      MySQL:
-         image: debezium/example-mysql:1.1
-         ports:
-            - "3306:3306"
-         environment:
-            - MYSQL_ROOT_PASSWORD=123456
-            - MYSQL_USER=mysqluser
-            - MYSQL_PASSWORD=mysqlpw
+     StarRocks:
+       image: registry.starrocks.io/starrocks/allin1-ubuntu
+       ports:
+         - "8030:8030"
+         - "8040:8040"
+         - "9030:9030"
+     MySQL:
+       image: debezium/example-mysql:1.1
+       ports:
+         - "3306:3306"
+       environment:
+         - MYSQL_ROOT_PASSWORD=123456
+         - MYSQL_USER=mysqluser
+         - MYSQL_PASSWORD=mysqlpw
    ```
 
-The Docker Compose should include the following services (containers):
-- MySQL: include a database named `app_db`
-- StarRocks: to store tables from MySQL
+该 Docker Compose 中包含的容器有:
+- MySQL: 包含商品信息的数据库 `app_db`
+- StarRocks: 存储从 MySQL 中根据规则映射过来的结果表
 
-To start all containers, run the following command in the directory that contains the `docker-compose.yml` file.
+在 `docker-compose.yml` 所在目录下执行下面的命令来启动本教程需要的组件:
 
    ```shell
    docker-compose up -d
    ```
 
-This command automatically starts all the containers defined in the Docker Compose configuration in a detached mode. Run docker ps to check whether these containers are running properly. You can also visit [http://localhost:8030/](http://localhost:8030/) to check whether StarRocks is running.
-#### Prepare records for MySQL
-1. Enter MySQL container
+该命令将以 detached 模式自动启动 Docker Compose 配置中定义的所有容器。你可以通过 docker ps 来观察上述的容器是否正常启动了,也可以通过访问 [http://localhost:8030/](http://localhost:8030/) 来查看 StarRocks 是否运行正常。
+#### 在 MySQL 数据库中准备数据
+1. 进入 MySQL 容器
 
    ```shell
    docker-compose exec mysql mysql -uroot -p123456
    ```
 
-2. create `app_db` database and `orders`,`products`,`shipments` tables, then insert records
+2. 创建数据库 `app_db` 和表 `orders`,`products`,`shipments`,并插入数据
 
     ```sql
-    -- create database
+    -- 创建数据库
     CREATE DATABASE app_db;
    
     USE app_db;
    
-   -- create orders table
+   -- 创建 orders 表
    CREATE TABLE `orders` (
    `id` INT NOT NULL,
    `price` DECIMAL(10,2) NOT NULL,
    PRIMARY KEY (`id`)
    );
    
-   -- insert records
+   -- 插入数据
    INSERT INTO `orders` (`id`, `price`) VALUES (1, 4.00);
    INSERT INTO `orders` (`id`, `price`) VALUES (2, 100.00);
    
-   -- create shipments table
+   -- 创建 shipments 表
    CREATE TABLE `shipments` (
    `id` INT NOT NULL,
    `city` VARCHAR(255) NOT NULL,
    PRIMARY KEY (`id`)
    );
    
-   -- insert records
+   -- 插入数据
    INSERT INTO `shipments` (`id`, `city`) VALUES (1, 'beijing');
    INSERT INTO `shipments` (`id`, `city`) VALUES (2, 'xian');
    
-   -- create products table
+   -- 创建 products 表
    CREATE TABLE `products` (
    `id` INT NOT NULL,
    `product` VARCHAR(255) NOT NULL,
    PRIMARY KEY (`id`)
    );
    
-   -- insert records
+   -- 插入数据
    INSERT INTO `products` (`id`, `product`) VALUES (1, 'Beer');
    INSERT INTO `products` (`id`, `product`) VALUES (2, 'Cap');
    INSERT INTO `products` (`id`, `product`) VALUES (3, 'Peanut');
     ```
-   
-## Submit job using FlinkCDC cli
-1. Download the binary compressed packages listed below and extract them to the directory ` flink cdc-3.0.0 '`:    
-   [flink-cdc-3.0.0-bin.tar.gz](https://github.org/apache/flink/flink-cdc-connectors/releases/download/release-3.0.0/flink-cdc-3.0.0-bin.tar.gz)
-   flink-cdc-3.0.0 directory will contain four directory `bin`,`lib`,`log`,`conf`.
 
-2. Download the connector package listed below and move it to the `lib` directory  
-   **Download links are available only for stable releases, SNAPSHOT dependencies need to be built based on master or release branches by yourself.**
-    - [MySQL pipeline connector 3.0.0](https://repo1.maven.org/maven2/org/apache/flink/flink-cdc-pipeline-connector-mysql/3.0.0/flink-cdc-pipeline-connector-mysql-3.0.0.jar)
-    - [StarRocks pipeline connector 3.0.0](https://repo1.maven.org/maven2/org/apache/flink/flink-cdc-pipeline-connector-starrocks/3.0.0/flink-cdc-pipeline-connector-starrocks-3.0.0.jar)
+## 通过 FlinkCDC cli 提交任务
+1. 下载下面列出的二进制压缩包,并解压得到目录 `flink-cdc-3.0.0`:    
+   [flink-cdc-3.0.0-bin.tar.gz](https://github.com/ververica/flink-cdc-connectors/releases/download/release-3.0.0/flink-cdc-3.0.0-bin.tar.gz)
+   flink-cdc-3.0.0 下会包含 bin、lib、log、conf 四个目录。
+
+2. 下载下面列出的 connector 包,并且移动到 lib 目录下  
+   **下载链接只对已发布的版本有效, SNAPSHOT 版本需要本地基于 master 或 release- 分支编译**
+   - [MySQL pipeline connector 3.0.0](https://repo1.maven.org/maven2/com/ververica/flink-cdc-pipeline-connector-mysql/3.0.0/flink-cdc-pipeline-connector-mysql-3.0.0.jar)
+   - [StarRocks pipeline connector 3.0.0](https://repo1.maven.org/maven2/com/ververica/flink-cdc-pipeline-connector-starrocks/3.0.0/flink-cdc-pipeline-connector-starrocks-3.0.0.jar)

Review Comment:
   This link is invalid too.
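A quick way to verify such artifact links before merging is a HEAD request against Maven Central; this is just a convenience sketch using the two URLs quoted above:

```shell
# Expect an HTTP 200 status line for each valid artifact URL.
for url in \
  https://repo1.maven.org/maven2/com/ververica/flink-cdc-pipeline-connector-mysql/3.0.0/flink-cdc-pipeline-connector-mysql-3.0.0.jar \
  https://repo1.maven.org/maven2/com/ververica/flink-cdc-pipeline-connector-starrocks/3.0.0/flink-cdc-pipeline-connector-starrocks-3.0.0.jar
do
  curl -sIL "$url" | head -n 1
done
```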





Re: [PR] [FLINK-34741][docs] Translate "get-started" Page for Flink CDC Chinese Documentation. [flink-cdc]

Posted by "loserwang1024 (via GitHub)" <gi...@apache.org>.
loserwang1024 commented on code in PR #3175:
URL: https://github.com/apache/flink-cdc/pull/3175#discussion_r1531706139


##########
docs/content.zh/docs/get-started/quickstart/mysql-to-starrocks.md:
##########
+   - [MySQL pipeline connector 3.0.0](https://repo1.maven.org/maven2/com/ververica/flink-cdc-pipeline-connector-mysql/3.0.0/flink-cdc-pipeline-connector-mysql-3.0.0.jar)
+   - [StarRocks pipeline connector 3.0.0](https://repo1.maven.org/maven2/com/ververica/flink-cdc-pipeline-connector-starrocks/3.0.0/flink-cdc-pipeline-connector-starrocks-3.0.0.jar)
Review Comment:
   Fixed it.





Re: [PR] [FLINK-34741][docs] Translate "get-started" Page for Flink CDC Chinese Documentation. [flink-cdc]

Posted by "loserwang1024 (via GitHub)" <gi...@apache.org>.
loserwang1024 commented on PR #3175:
URL: https://github.com/apache/flink-cdc/pull/3175#issuecomment-2008981088

   CC @lvyanquan, @PatrickRen, @leonardBang




Re: [PR] [FLINK-34741][docs] Translate "get-started" Page for Flink CDC Chinese Documentation. [flink-cdc]

Posted by "loserwang1024 (via GitHub)" <gi...@apache.org>.
loserwang1024 commented on code in PR #3175:
URL: https://github.com/apache/flink-cdc/pull/3175#discussion_r1531705762


##########
docs/content.zh/docs/get-started/quickstart/mysql-to-doris.md:
##########
@@ -104,95 +101,96 @@ Then `exit` exits and creates the Doris Docker cluster.
          - MYSQL_PASSWORD=mysqlpw
    ```
 
-The Docker Compose should include the following services (containers):
-- MySQL: include a database named `app_db` 
-- Doris: to store tables from MySQL
+该 Docker Compose 中包含的容器有:
+- MySQL: 包含商品信息的数据库 `app_db` 
+- Doris: 存储从 MySQL 中根据规则映射过来的结果表
 
-To start all containers, run the following command in the directory that contains the `docker-compose.yml` file.
+在 `docker-compose.yml` 所在目录下执行下面的命令来启动本教程需要的组件:
 
    ```shell
    docker-compose up -d
    ```
 
-This command automatically starts all the containers defined in the Docker Compose configuration in a detached mode. Run docker ps to check whether these containers are running properly. You can also visit [http://localhost:8030/](http://localhost:8030/) to check whether Doris is running.
-#### Prepare records for MySQL
-1. Enter MySQL container
+该命令将以 detached 模式自动启动 Docker Compose 配置中定义的所有容器。你可以通过 docker ps 来观察上述的容器是否正常启动了,也可以通过访问[http://localhost:8030/](http://localhost:8030/) 来查看 Doris 是否运行正常。
+
+#### 在 MySQL 数据库中准备数据
+1. 进入 MySQL 容器
 
    ```shell
    docker-compose exec mysql mysql -uroot -p123456
    ```
 
-2. create `app_db` database and `orders`,`products`,`shipments` tables, then insert records
+2. 创建数据库 `app_db` 和表 `orders`,`products`,`shipments`,并插入数据
 
     ```sql
-    -- create database
+    -- 创建数据库
     CREATE DATABASE app_db;
    
     USE app_db;
    
-   -- create orders table
+   -- 创建 orders 表
    CREATE TABLE `orders` (
    `id` INT NOT NULL,
    `price` DECIMAL(10,2) NOT NULL,
    PRIMARY KEY (`id`)
    );
    
-   -- insert records
+   -- 插入数据
    INSERT INTO `orders` (`id`, `price`) VALUES (1, 4.00);
    INSERT INTO `orders` (`id`, `price`) VALUES (2, 100.00);
    
-   -- create shipments table
+   -- 创建 shipments 表
    CREATE TABLE `shipments` (
    `id` INT NOT NULL,
    `city` VARCHAR(255) NOT NULL,
    PRIMARY KEY (`id`)
    );
    
-   -- insert records
+   -- 插入数据
    INSERT INTO `shipments` (`id`, `city`) VALUES (1, 'beijing');
    INSERT INTO `shipments` (`id`, `city`) VALUES (2, 'xian');
    
-   -- create products table
+   -- 创建 products 表
    CREATE TABLE `products` (
    `id` INT NOT NULL,
    `product` VARCHAR(255) NOT NULL,
    PRIMARY KEY (`id`)
    );
    
-   -- insert records
+   -- 插入数据
    INSERT INTO `products` (`id`, `product`) VALUES (1, 'Beer');
    INSERT INTO `products` (`id`, `product`) VALUES (2, 'Cap');
    INSERT INTO `products` (`id`, `product`) VALUES (3, 'Peanut');
     ```
 
 #### Create database in Doris
-`Doris` connector currently does not support automatic database creation and needs to first create a database corresponding to the write table.
-1. Enter Doris Web UI。  
+`Doris` 暂时不支持自动创建数据库,需要先创建写入表对应的数据库。
+1. 进入 Doris Web UI。  
    [http://localhost:8030/](http://localhost:8030/)  
-   The default username is `root`, and the default password is empty.
+   默认的用户名为 `root`,默认密码为空。
 
    {{< img src="/fig/mysql-doris-tutorial/doris-ui.png" alt="Doris UI" >}}
 
-2. Create `app_db` database through Web UI.
+2. 通过 Web UI 创建 `app_db` 数据库
 
     ```sql
    create database app_db;
     ```
 
    {{< img src="/fig/mysql-doris-tutorial/doris-create-table.png" alt="Doris create table" >}}
 
-## Submit job using FlinkCDC cli
-1. Download the binary compressed packages listed below and extract them to the directory ` flink cdc-3.0.0 '`:    
-   [flink-cdc-3.0.0-bin.tar.gz](https://github.org/apache/flink/flink-cdc-connectors/releases/download/release-3.0.0/flink-cdc-3.0.0-bin.tar.gz)
-   flink-cdc-3.0.0 directory will contain four directory `bin`,`lib`,`log`,`conf`.
+## 通过 FlinkCDC cli 提交任务
+1. 下载下面列出的二进制压缩包,并解压得到目录  ` flink cdc-3.0.0 '`:    
+   [flink-cdc-3.0.0-bin.tar.gz](https://github.org/apache/flink/flink-cdc-connectors/releases/download/release-3.0.0/flink-cdc-3.0.0-bin.tar.gz).

Review Comment:
   Thanks, fixed it.
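One aside on the Doris database-creation step quoted above: besides the Web UI, Doris's frontend speaks the MySQL protocol, so `app_db` could also be created from a plain MySQL client. This assumes the FE query port (conventionally 9030) is published by the compose file, which the quoted hunk does not show:

```shell
# Assumption: the Doris FE MySQL-protocol port 9030 is reachable on localhost;
# the tutorial itself uses the Web UI on port 8030 instead.
mysql -h 127.0.0.1 -P 9030 -uroot -e "CREATE DATABASE IF NOT EXISTS app_db;"
```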



##########
docs/content.zh/docs/get-started/quickstart/mysql-to-doris.md:
##########
    - [MySQL pipeline connector 3.0.0](https://repo1.maven.org/maven2/org/apache/flink/flink-cdc-pipeline-connector-mysql/3.0.0/flink-cdc-pipeline-connector-mysql-3.0.0.jar)
    - [Apache Doris pipeline connector 3.0.0](https://repo1.maven.org/maven2/org/apache/flink/flink-cdc-pipeline-connector-doris/3.0.0/flink-cdc-pipeline-connector-doris-3.0.0.jar)

Review Comment:
   Fixed it.





Re: [PR] [FLINK-34741][docs] Translate "get-started" Page for Flink CDC Chinese Documentation. [flink-cdc]

Posted by "laglangyue (via GitHub)" <gi...@apache.org>.
laglangyue commented on code in PR #3175:
URL: https://github.com/apache/flink-cdc/pull/3175#discussion_r1531664541


##########
docs/content.zh/docs/get-started/introduction.md:
##########
@@ -71,27 +66,25 @@ pipeline:
   parallelism: 2
 ```
 
-By submitting the YAML file with `flink-cdc.sh`, a Flink job will be compiled
-and deployed to a designated Flink cluster. Please refer to [Core Concept]({{<
-ref "docs/core-concept/data-pipeline" >}}) to get full documentation of all
-supported functionalities of a pipeline.
+通过使用 `flink-cdc.sh` 提交 YAML 文件,一个 Flink 作业将会被编译并部署到指定的 Flink 集群。
+By submitting the YAML file with `flink-cdc.sh`, 请参考 [核心概念]({{<
+ref "docs/core-concept/data-pipeline" >}}) 以获取 Pipeline 支持的所有功能的完整文档说明。

Review Comment:
   There are some errors here: some words were left untranslated.
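For context on the passage being reviewed: the YAML that `flink-cdc.sh` submits bundles a source, a sink, and the pipeline settings shown above. A minimal sketch of such a file, with connection values assumed from the docker-compose setup in the quickstart diffs rather than taken from this page:

```yaml
# Hypothetical pipeline definition; hostnames, ports, and credentials are
# assumptions matching the MySQL/StarRocks containers quoted earlier.
source:
  type: mysql
  hostname: localhost
  port: 3306
  username: root
  password: 123456
  tables: app_db.\.*

sink:
  type: starrocks
  name: StarRocks Sink
  jdbc-url: jdbc:mysql://127.0.0.1:9030
  load-url: 127.0.0.1:8030
  username: root
  password: ""

pipeline:
  name: Sync MySQL Database to StarRocks
  parallelism: 2
```

Submitting it would then look like `bash bin/flink-cdc.sh mysql-to-starrocks.yaml`, run from the flink-cdc-3.0.0 directory described in the quickstart.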





Re: [PR] [FLINK-34741][docs] Translate "get-started" Page for Flink CDC Chinese Documentation. [flink-cdc]

Posted by "laglangyue (via GitHub)" <gi...@apache.org>.
laglangyue commented on code in PR #3175:
URL: https://github.com/apache/flink-cdc/pull/3175#discussion_r1531705523


##########
docs/content.zh/docs/get-started/introduction.md:
##########
@@ -24,28 +24,23 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# Welcome to Flink CDC 🎉
+# 欢迎使用 Flink CDC 🎉
 
-Flink CDC is a streaming data integration tool that aims to provide users with
-a more robust API. It allows users to describe their ETL pipeline logic via YAML
-elegantly and help users automatically generating customized Flink operators and
-submitting job. Flink CDC prioritizes optimizing the task submission process and
-offers enhanced functionalities such as schema evolution, data transformation,
-full database synchronization and exactly-once semantic.
+Flink CDC 是一个基于流的数据集成工具,旨在为用户提供一套功能更加全面的编程接口(API)。
+该工具使得用户能够以 YAML 配置文件的形式,优雅地定义其 ETL(Extract, Transform, Load)流程,并协助用户自动化生成定制化的 Flink 算子并且提交 Flink 作业。
+Flink CDC 在任务提交过程中进行了优化,并且增加了一些高级特性,如表结构变更自动同步(Schema Evolution)、数据转换(Data Transformation)、全量数据库同步(Full Database Synchronization)以及 Exactly-once 语义。

Review Comment:
   Just a couple of suggestions:
   全量数据库同步 -> 整库同步
   Exactly-once -> 精确一次(Exactly-once)





Re: [PR] [FLINK-34741][docs] Translate "get-started" Page for Flink CDC Chinese Documentation. [flink-cdc]

Posted by "lvyanquan (via GitHub)" <gi...@apache.org>.
lvyanquan commented on code in PR #3175:
URL: https://github.com/apache/flink-cdc/pull/3175#discussion_r1531693834


##########
docs/content.zh/docs/get-started/quickstart/mysql-to-doris.md:
##########
    - [MySQL pipeline connector 3.0.0](https://repo1.maven.org/maven2/org/apache/flink/flink-cdc-pipeline-connector-mysql/3.0.0/flink-cdc-pipeline-connector-mysql-3.0.0.jar)
    - [Apache Doris pipeline connector 3.0.0](https://repo1.maven.org/maven2/org/apache/flink/flink-cdc-pipeline-connector-doris/3.0.0/flink-cdc-pipeline-connector-doris-3.0.0.jar)

Review Comment:
   This download link is invalid too.



##########
docs/content.zh/docs/get-started/quickstart/mysql-to-doris.md:
##########
+1. 下载下面列出的二进制压缩包,并解压得到目录  ` flink cdc-3.0.0 '`:    
+   [flink-cdc-3.0.0-bin.tar.gz](https://github.org/apache/flink/flink-cdc-connectors/releases/download/release-3.0.0/flink-cdc-3.0.0-bin.tar.gz).

Review Comment:
   I found that this download link is invalid; can you also update and fix it in the English version?
   The correct address is:
   https://github.com/ververica/flink-cdc-connectors/releases/download/release-3.0.0/flink-cdc-3.0.0-bin.tar.gz
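With that address, fetching and laying out the release end to end would look roughly like the sketch below. The MySQL connector URL is the com/ververica Maven Central path quoted in the StarRocks diff; the Doris connector path is constructed by analogy, so treat it as an assumption to verify:

```shell
# Download and unpack the CDC release, then drop the pipeline connectors into lib/.
wget https://github.com/ververica/flink-cdc-connectors/releases/download/release-3.0.0/flink-cdc-3.0.0-bin.tar.gz
tar -xzf flink-cdc-3.0.0-bin.tar.gz
cd flink-cdc-3.0.0
wget -P lib/ https://repo1.maven.org/maven2/com/ververica/flink-cdc-pipeline-connector-mysql/3.0.0/flink-cdc-pipeline-connector-mysql-3.0.0.jar
wget -P lib/ https://repo1.maven.org/maven2/com/ververica/flink-cdc-pipeline-connector-doris/3.0.0/flink-cdc-pipeline-connector-doris-3.0.0.jar
```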





Re: [PR] [FLINK-34741][docs] Translate "get-started" Page for Flink CDC Chinese Documentation. [flink-cdc]

Posted by "loserwang1024 (via GitHub)" <gi...@apache.org>.
loserwang1024 commented on code in PR #3175:
URL: https://github.com/apache/flink-cdc/pull/3175#discussion_r1531680200


##########
docs/content.zh/docs/get-started/introduction.md:
##########
@@ -71,27 +66,25 @@ pipeline:
   parallelism: 2
 ```
 
-By submitting the YAML file with `flink-cdc.sh`, a Flink job will be compiled
-and deployed to a designated Flink cluster. Please refer to [Core Concept]({{<
-ref "docs/core-concept/data-pipeline" >}}) to get full documentation of all
-supported functionalities of a pipeline.
+通过使用 `flink-cdc.sh` 提交 YAML 文件,一个 Flink 作业将会被编译并部署到指定的 Flink 集群。
+By submitting the YAML file with `flink-cdc.sh`, 请参考 [核心概念]({{<
+ref "docs/core-concept/data-pipeline" >}}) 以获取 Pipeline 支持的所有功能的完整文档说明。

Review Comment:
   Thanks, I forgot to delete this. Done now.





Re: [PR] [FLINK-34741][docs] Translate "get-started" Page for Flink CDC Chinese Documentation. [flink-cdc]

Posted by "loserwang1024 (via GitHub)" <gi...@apache.org>.
loserwang1024 commented on code in PR #3175:
URL: https://github.com/apache/flink-cdc/pull/3175#discussion_r1531713344


##########
docs/content.zh/docs/get-started/introduction.md:
##########
@@ -24,28 +24,23 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# Welcome to Flink CDC 🎉
+# 欢迎使用 Flink CDC 🎉
 
-Flink CDC is a streaming data integration tool that aims to provide users with
-a more robust API. It allows users to describe their ETL pipeline logic via YAML
-elegantly and help users automatically generating customized Flink operators and
-submitting job. Flink CDC prioritizes optimizing the task submission process and
-offers enhanced functionalities such as schema evolution, data transformation,
-full database synchronization and exactly-once semantic.
+Flink CDC 是一个基于流的数据集成工具,旨在为用户提供一套功能更加全面的编程接口(API)。
+该工具使得用户能够以 YAML 配置文件的形式,优雅地定义其 ETL(Extract, Transform, Load)流程,并协助用户自动化生成定制化的 Flink 算子并且提交 Flink 作业。
+Flink CDC 在任务提交过程中进行了优化,并且增加了一些高级特性,如表结构变更自动同步(Schema Evolution)、数据转换(Data Transformation)、全量数据库同步(Full Database Synchronization)以及 Exactly-once 语义。

Review Comment:
   That reads better; done.





Re: [PR] [FLINK-34741][docs] Translate "get-started" Page for Flink CDC Chinese Documentation. [flink-cdc]

Posted by "leonardBang (via GitHub)" <gi...@apache.org>.
leonardBang merged PR #3175:
URL: https://github.com/apache/flink-cdc/pull/3175

