Posted to commits@inlong.apache.org by do...@apache.org on 2022/06/22 05:13:27 UTC

[inlong-website] branch master updated: [INLONG-396][Release] Add blog for the 1.2.0 release (#441)

This is an automated email from the ASF dual-hosted git repository.

dockerzhang pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/inlong-website.git


The following commit(s) were added to refs/heads/master by this push:
     new eae2f5d06 [INLONG-396][Release] Add blog for the 1.2.0 release (#441)
eae2f5d06 is described below

commit eae2f5d06fc7ffa393a4972a7fd8ac2d8ad2c196
Author: healzhou <he...@gmail.com>
AuthorDate: Wed Jun 22 13:13:22 2022 +0800

    [INLONG-396][Release] Add blog for the 1.2.0 release (#441)
---
 ...long-0.11.0.md => 2021-11-10-release-0.11.0.md} |   1 -
 ...long-0.12.0.md => 2022-01-04-release-0.12.0.md} |   6 +-
 ...inlong-1.1.0.md => 2022-04-25-release-1.1.0.md} |   1 -
 ...ort_ETL_en.md => 2022-06-16-inlong-sort-etl.md} | 118 +++++---------
 blog/2022-06-22-release-1.2.0.md                   | 108 +++++++++++++
 ...long-0.11.0.md => 2021-11-10-release-0.11.0.md} |   1 -
 ...long-0.12.0.md => 2022-01-04-release-0.12.0.md} |   3 +-
 ...inlong-1.1.0.md => 2022-04-25-release-1.1.0.md} |   1 -
 ...ort_ETL_ch.md => 2022-06-16-inlong-sort-etl.md} | 173 +++++++++------------
 .../2022-06-22-release-1.2.0.md                    | 101 ++++++++++++
 .../current/data_node/extract_node/overview.md     |   4 +-
 11 files changed, 326 insertions(+), 191 deletions(-)

diff --git a/blog/apache-inlong-0.11.0.md b/blog/2021-11-10-release-0.11.0.md
similarity index 99%
rename from blog/apache-inlong-0.11.0.md
rename to blog/2021-11-10-release-0.11.0.md
index 47159635c..ed350f201 100644
--- a/blog/apache-inlong-0.11.0.md
+++ b/blog/2021-11-10-release-0.11.0.md
@@ -1,6 +1,5 @@
 ---
 title: Release InLong 0.11.0
-sidebar_position: 3
 ---
 
 Apache InLong (incubating) was renamed from the original Apache TubeMQ (incubating) starting from version 0.9.0. Along with the name change, InLong has also been upgraded from a single message queue into a one-stop integration framework for massive data. InLong supports data collection, aggregation, caching, and sorting, so users can import data from a data source into a real-time computing engine, or land it in offline storage, with a simple configuration.
diff --git a/blog/apache-inlong-0.12.0.md b/blog/2022-01-04-release-0.12.0.md
similarity index 99%
rename from blog/apache-inlong-0.12.0.md
rename to blog/2022-01-04-release-0.12.0.md
index be18b01fc..392215fe8 100644
--- a/blog/apache-inlong-0.12.0.md
+++ b/blog/2022-01-04-release-0.12.0.md
@@ -1,6 +1,5 @@
 ---
 title: Release InLong 0.12.0
-sidebar_position: 2
 ---
 
 InLong, the sacred beast in Chinese mythology that draws rivers into the sea, serves as a metaphor for the InLong system providing data access capabilities.
@@ -15,7 +14,6 @@ The 0.12.0-incubating just released mainly includes the following:
 
 This version closed more than 120 issues, including four major features and 35 improvements.
 
-
 ### Apache InLong(incubating) Introduction
 [Apache InLong](https://inlong.apache.org) is a one-stop integration framework for massive data donated by Tencent to the Apache community.  It provides automatic,  safe,  reliable,  and high-performance data transmission capabilities to facilitate the construction of streaming-based data analysis,  modeling,  and applications.  
 The Apache InLong project was originally called TubeMQ,  focusing on high-performance,  low-cost message queuing services.  In order to further release the surrounding ecological capabilities of TubeMQ,  we upgraded the project to InLong,  focusing on creating a one-stop integration framework for massive data.
@@ -36,7 +34,7 @@ Apache InLong serves the entire life cycle from data collection to landing,  and
 In version 0.12.0, we have completed the data reporting capability of FileAgent→DataProxy→Pulsar→Sort. So far, InLong supports high-performance and high-reliability data access scenarios: Compared with the high-throughput TubeMQ, Apache Pulsar can provide better data consistency and is more suitable for scenarios that require extremely high data reliability. For example, finance and billing.
 <img src="/img/pulsar-arch-en.png" align="center" alt="Report via Pulsar"/>
 
-Thanks to @healzhou, @baomingyu, @leezng, @bruceneenhl, @ifndef-SleePy and others for their contributions to this feature. For more information, please refer to [INLONG-1310](https://github.com/apache/)incubator-inlong/issues/1310), please refer to [Pulsar usage example](https://inlong.apache. org/zh -CN/docs/next/quick_start/pulsar_example/) to get the usage guide.
+Thanks to @healchow, @baomingyu, @leezng, @bruceneenhl, @ifndef-SleePy and others for their contributions to this feature. For more information, please refer to [INLONG-1310](https://github.com/apache/incubator-inlong/issues/1310), and see the [Pulsar usage example](https://inlong.apache.org/zh-CN/docs/next/quick_start/pulsar_example/) for the usage guide.
 
 #### 2. Support JMX and Prometheus metrics
 In addition to the existing file output metrics, the various components of InLong began to gradually support the output of JMX and Prometheus metrics to improve the visibility of the entire system. Currently, modules including Agent, DataProxy, TubeMQ, Sort-Standalone, etc. already support the above metrics, and the documentation of metrics output by each module is being sorted out.
@@ -66,5 +64,3 @@ In subsequent versions, we will further enhance the capabilities of InLong to co
 - Support link for data access ClickHouse
 - Support DB data collection
 - The second stage full link indicator audit function
-
-
diff --git a/blog/apache-inlong-1.1.0.md b/blog/2022-04-25-release-1.1.0.md
similarity index 99%
rename from blog/apache-inlong-1.1.0.md
rename to blog/2022-04-25-release-1.1.0.md
index d82114aa4..a7d5fc0dd 100644
--- a/blog/apache-inlong-1.1.0.md
+++ b/blog/2022-04-25-release-1.1.0.md
@@ -1,6 +1,5 @@
 ---
 title: Release InLong 1.1.0
-sidebar_position: 1
 ---
 
 Apache InLong is a one-stop integration framework for massive data that provides automatic, secure and reliable data transmission capabilities. InLong supports both batch and stream data processing at the same time, which offers great power to build data analysis, modeling and other real-time applications based on streaming data.
diff --git a/blog/InLong_Sort_ETL_en.md b/blog/2022-06-16-inlong-sort-etl.md
similarity index 87%
rename from blog/InLong_Sort_ETL_en.md
rename to blog/2022-06-16-inlong-sort-etl.md
index 5a84f6247..27673bc68 100644
--- a/blog/InLong_Sort_ETL_en.md
+++ b/blog/2022-06-16-inlong-sort-etl.md
@@ -1,15 +1,14 @@
 ---
-title: Analysis of InLong Sort ETL Solution Based on Apache Flink SQL
-sidebar_position: 4
+title: Analysis of InLong Sort ETL Solution
 ---
 
 # Analysis of InLong Sort ETL Solution Based on Apache Flink SQL
 
-# 1. Background
+## 1. Background
 
 With the increasing number of users and developers of Apache InLong (incubating), the demand for richer usage scenarios and low-cost operation is growing stronger. Among them, the demand for adding Transform (T) to the whole InLong link has received the most feedback. After research and design by the community developers @yunqingmoswu, @EMsnap, @gong and @thexiay, the InLong Sort ETL solution based on Flink SQL has been completed. This article will introduce the implementation details of this solution.
 
-First of all, based on Apache Flink SQL, there are mainly the following considerations:
+First of all, the choice of Apache Flink SQL is mainly based on the following considerations:
 
 -  Flink SQL has high scalability and flexibility brought about by its powerful expression ability. Basically, Flink SQL can support most demand scenarios in the community. When the built-in functions of Flink SQL do not meet the requirements, we can also extend them through various UDFs.
 -  Compared with the implementation of the underlying API of Flink, the development cost of Flink SQL is lower. Only for the first time, the conversion logic of Flink SQL needs to be implemented. In the future, we can focus on the construction of the ability of Flink SQL, such as the extension connector and the UDF.
@@ -17,11 +16,11 @@ First of all, based on Apache Flink SQL, there are mainly the following consider
 - For users, Flink SQL is also easier to understand, especially for users who have used SQL, the usage is simple and familiar, which helps users to land quickly.
 - For the migration of existing real-time tasks, if they are originally SQL-type tasks, especially Flink SQL tasks, the migration cost is extremely low, and in some cases, no changes are even required.
 
-**Note**:  for all codes of this scheme, please refer to [Apache inlong sort]( https://github.com/apache/incubator-inlong/tree/master/inlong-sort )Module, which can be downloaded and used in the upcoming version 1.2.0.
+**Note**: For all codes of this scheme, please refer to [Apache InLong Sort](https://github.com/apache/incubator-inlong/tree/master/inlong-sort), which can be downloaded and used in the upcoming version 1.2.0.
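+
+As noted in the considerations above, Flink SQL's built-in functions can be extended through UDFs when they do not meet a requirement. As a rough illustration only (not code from the InLong Sort module; the class, function and table names below are hypothetical), a scalar UDF implemented in Java can be packaged as a JAR, registered, and then used directly in the generated SQL:
+
+```sql
+-- Register a user-defined scalar function implemented in Java
+-- ('com.example.udf.MaskPhoneNumber' is a hypothetical class name).
+CREATE TEMPORARY FUNCTION mask_phone AS 'com.example.udf.MaskPhoneNumber';
+
+-- Use it like any built-in function inside the transform SQL.
+SELECT `name`, mask_phone(`phone`) AS `phone` FROM `source_table`;
+```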
 
-# 2. Introduction
+## 2. Introduction
 
-## 2.1 Requirements
+### 2.1 Requirements
 
 The main requirement of this solution is to complete the Transform (T) capability of the InLong Sort module, including:
 
@@ -37,11 +36,11 @@ The main requirements of this solution are the completed inlong sort module tran
 |            Join             |                    Support two table join                    |
 |     Value substitution      | Given a matching value, if the field's value is equal to that value, replace it with the target value |
 
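 To give a concrete feel for how these transform capabilities map to SQL, most of them can be expressed with standard Flink SQL built-in functions. The snippet below is only an illustrative sketch under that assumption (the table and field names are hypothetical), not SQL generated by InLong Sort:
 
 ```sql
 SELECT
   SPLIT_INDEX(`full_name`, ' ', 0)      AS `first_name`, -- string split
   REGEXP_REPLACE(`phone`, '[^0-9]', '') AS `phone`,      -- regular replace
   CASE WHEN `gender` = '0' THEN 'male'
        ELSE `gender` END                AS `gender`      -- value substitution
 FROM `source_table`
 WHERE `age` >= 18;                                       -- data filter
 ```
 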
-## 2.2 Usage Scenarios
+### 2.2 Usage Scenarios
 
 In many business scenarios, users of big data integration have Transform requirements such as data conversion, joining and filtering.
 
-## 2.3 Design Goal
+### 2.3 Design Goal
 
 This design needs to achieve the following goals:
 
@@ -50,13 +49,13 @@ This design needs to achieve the following goals:
 - Maintainability: The conversion of the InLong Sort data model to Flink SQL only needs to be implemented once. When new functional requirements arise later, this part does not need to change; even if it does, only a small amount of change is required.
 - Extensibility: When the open-source Flink Connectors or the built-in Flink SQL functions do not meet the requirements, you can customize Flink Connectors and UDFs to extend the functionality.
 
-## 2.4 Basic Concepts
+### 2.4 Basic Concepts
 
 The core concepts follow the terminology defined in the outline design:
 
 |            Name             |                           Meaning                            |
 | :-------------------------: | :----------------------------------------------------------: |
-|      InLong Dashborad       |            Inlong front end management interface             |
+|      InLong Dashboard       |            Inlong front end management interface             |
 |    InLong Manager Client    | Wrap the interface in the manager for external user programs to call without going through the front-end inlong dashboard |
 |   InLong Manager Openapi    |      Inlong manager and external system call interface       |
 |   InLong Manager metaData   | Inlong manager metadata management, including metadata information of group and stream dimensions |
@@ -65,7 +64,7 @@ The core concept refers to the explanation of terms in the outline design
 |        InLong Stream        |     Data flow: a data flow has a specific flow direction     |
 |        Stream Source        | There are corresponding acquisition end and sink end in the stream. This design only involves the stream source |
 |         Stream Info         | Abstract of data flow in sort, including various sources, transformations, destinations, etc. of the data flow |
-|         Group Info          | Encapsulation of data flow in sort. A groupinfo can contain multiple stream infos |
+|         Group Info          | Encapsulation of data flow in Sort. A GroupInfo can contain multiple StreamInfos |
 |            Node             | Abstraction of data source, data transformation and data destination in data synchronization |
 |        Extract Node         |       Source side abstraction of data synchronization        |
 |          Load Node          |       Destination abstraction of data synchronization        |
@@ -83,38 +82,36 @@ The core concept refers to the explanation of terms in the outline design
 |         Field Info          |                          Node field                          |
 |       Meta FieldInfo        |                 Node meta information field                  |
 
-
-
-## 2.5 Domain Model
+### 2.5 Domain Model
 
 This design mainly involves the following entities: 
 
-Group、Stream、GroupInfo、StreamInfo、Node、NodeRelation、FieldRelation、Function、FilterFunction、SubstringFunction、FunctionParam、FieldInfo、MetaFieldInfo、MySQLExtractNode、KafkaLoadNode and etc.
+Group, Stream, GroupInfo, StreamInfo, Node, NodeRelation, FieldRelation, Function, FilterFunction, SubstringFunction, FunctionParam, FieldInfo, MetaFieldInfo, MySQLExtractNode, KafkaLoadNode, etc.
 
 For ease of understanding, this section will model and analyze the relationship between entities. Description of entity correspondence of domain model:
 
-- One group corresponds to one groupinfo
+- One Group corresponds to one GroupInfo
 - A group contains one or more streams
-- One stream corresponds to one streaminfo
-- A groupinfo contains one or more streaminfo
-- A streaminfo contains multiple nodes
-- A streaminfo contains one or more NodeRelations
-- A noderelation contains one or more fieldrelations
-- A NodeRelation contains 0 or more filterfunctions
-- A fieldrelation contains one function or one fieldinfo as the source field and one fieldinfo as the target field
+- One stream corresponds to one StreamInfo
+- A GroupInfo contains one or more StreamInfo
+- A StreamInfo contains multiple nodes
+- A StreamInfo contains one or more NodeRelations
+- A NodeRelation contains one or more FieldRelations
+- A NodeRelation contains 0 or more FilterFunctions
+- A FieldRelation contains one function or one FieldInfo as the source field and one FieldInfo as the target field
 - A function contains one or more FunctionParams
 
 The above relationship can be represented by UML object relationship diagram as:
 
 ![sort_UML](./img/sort_UML.png)
 
-## 2.6 Function Use-case Diagram
+### 2.6 Function Use-case Diagram
 
 ![sort-usecase](./img/sort-usecase.png)
 
-# 3. System Outline Design
+## 3. System Outline Design
 
-## 3.1 System Architecture Diagram
+### 3.1 System Architecture Diagram
 
 ![architecture](./img/architecture.png)
 
@@ -128,27 +125,27 @@ The above relationship can be represented by UML object relationship diagram as:
 - Node: Abstraction of data source, data conversion and data destination in data synchronization
 - FlinkSQLParser: SQL parser
 
-## 3.2 InLong Sort Internal Operation Flow Chart
+### 3.2 InLong Sort Internal Operation Flow Chart
 
 ![sort-operation-flow](./img/sort-operation-flow.png)
 
-## 3.3 Module Design
+### 3.3 Module Design
 
-This design only adds Flink connector and flinksql generator to the original system, and modifies the data model module.
+This design only adds two modules, Flink Connector and Flink SQL Generator, to the original system, and modifies the Data Model module.
 
-### 3.3.1 Module Structure
+#### 3.3.1 Module Structure
 
 ![sort-module-structure](./img/sort-module-structure.png)
 
-### 3.3.2 Module Division
+#### 3.3.2 Module Division
 
 Description of important module division:
 
 |       Name        |                         Description                          |
 | :---------------: | :----------------------------------------------------------: |
-|  FlinkSQLParser   | Used to generate flinksql core classes, including references to groupinfo |
-|     GroupInfo     | The internal abstraction of sort for inlong group is used to encapsulate the synchronization related information of the entire inlong group, including the reference to list\<streaminfo\> |
-|    StreamInfo     | The internal abstraction of sort to inlong stream is used to encapsulate inlong stream synchronization related information, including references to list\<node\>, list\<noderelation\> |
+|  FlinkSQLParser   | The core class for generating Flink SQL, containing a reference to GroupInfo |
+|     GroupInfo     | Sort's internal abstraction of an InLong group, used to encapsulate the synchronization-related information of the entire group, including a reference to List\<StreamInfo\> |
+|    StreamInfo     | Sort's internal abstraction of an InLong stream, used to encapsulate the stream's synchronization-related information, including references to List\<Node\> and List\<NodeRelation\> |
 |       Node        | The top-level interface of the synchronization node. Its subclass implementation is mainly used to encapsulate the data of the synchronization data source and the transformation node |
 |    ExtractNode    |      Data extract node abstraction, inherited from node      |
 |     LoadNode      |       Data load node abstraction, inherited from node        |
@@ -160,21 +157,19 @@ Description of important module division:
 | SubstringFunction | Used for string interception function abstraction, inherited from function |
 |   FunctionParam   |             Abstraction for function parameters              |
 |   ConstantParam   | Encapsulation of function constant parameters, inherited from FunctionParam |
-|     FieldInfo     | The encapsulation of node fields can also be used as function input parameters, inherited from functionparam |
-|   MetaFieldInfo   | The encapsulation of built-in fields is currently mainly used in the metadata field scenario of canal JSON, which is inherited from fieldinfo |
+|     FieldInfo     | Encapsulation of a node field, which can also be used as a function input parameter; inherited from FunctionParam |
+|   MetaFieldInfo   | Encapsulation of built-in fields, currently mainly used for the canal-json metadata field scenario; inherited from FieldInfo |
 
-# 4. Detailed System Design
+## 4. Detailed System Design
 
 The following describes the principle of SQL generation, taking the synchronization of MySQL data to Kafka as an example.
 
-## 4.1 Node Described in SQL
+### 4.1 Node Described in SQL
 
-### 4.1.1 ExtractNode Described in SQL
+#### 4.1.1 ExtractNode Described in SQL
 
 The node configuration is:
 
-**nodeconfig1**
-
 ```java
  private Node buildMySQLExtractNode() {
         List<FieldInfo> fields = Arrays.asList(
@@ -190,8 +185,6 @@ The node configuration is:
 
 The generated SQL is:
 
-**ss**
-
 ```sql
 CREATE TABLE `mysql_1` (`name` string,`age` int) 
 with 
@@ -201,17 +194,12 @@ with
 'password' = 'password',
 'database-name' = 'inlong',
 'table-name' = 'tableName')
-
 ```
 
-
-
-### 4.1.2 TransformNode  Described in SQL
+#### 4.1.2 TransformNode  Described in SQL
 
 The node configuration is:
 
-**nodeconfig2**
-
 ```java
  List<FilterFunction> filters = Arrays.asList(
                 new SingleValueFilterFunction(EmptyOperator.getInstance(),
@@ -221,26 +209,18 @@ The node configuration is:
                         new FieldInfo("age", new IntFormatInfo()),
                         MoreThanOrEqualOperator.getInstance(), new ConstantParam(18))
         );
-
 ```
 
 The generated SQL is:
 
-**ss2**
-
 ```sql
 SELECT `name` AS `name`,`age` AS `age` FROM `mysql_1` WHERE `age` < 25 AND `age` >= 18
-
 ```
 
-
-
-### 4.1.3 LoadNode Described in SQL
+#### 4.1.3 LoadNode Described in SQL
 
 The node configuration is:
 
-**nodeconfig3**
-
 ```java
  private Node buildKafkaLoadNode(FilterStrategy filterStrategy) {
         List<FieldInfo> fields = Arrays.asList(
@@ -267,13 +247,10 @@ The node configuration is:
                 new CanalJsonFormat(), null,
                 null, "id");
     }
-
 ```
 
 The generated SQL is:
 
-**ss3**
-
 ```sql
 CREATE TABLE `kafka_3` (`name` string,`age` int) 
 with (
@@ -287,31 +264,23 @@ with (
 'canal-json-inlong.timestamp-format.standard' = 'SQL',
 'canal-json-inlong.map-null-key.literal' = 'null'
 )
-
 ```
 
+### 4.2 Field T Described in SQL
 
-
-## 4.2 Field T Described in SQL
-
-### 4.2.1 Filter operator
+#### 4.2.1 Filter operator
 
 See the node configurations in Section 4.1 for the relevant configuration.
 
 The generated SQL is:
 
-**ss4**
-
 ```sql
 INSERT INTO `kafka_3` SELECT `name` AS `name`,`age` AS `age` FROM `mysql_1` WHERE `age` < 25 AND `age` >= 18
-
 ```
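 
 The example above covers only the filter operator. For comparison, a substring transform (the SubstringFunction in the module design above) would typically render to a standard SQL string function. The following is an illustrative sketch rather than actual InLong Sort output:
 
 ```sql
 -- Keep only the first 8 characters of `name` while applying the same filter
 INSERT INTO `kafka_3`
 SELECT SUBSTRING(`name` FROM 1 FOR 8) AS `name`, `age` AS `age`
 FROM `mysql_1`
 WHERE `age` < 25 AND `age` >= 18
 ```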
 
-### 4.2.2 Watermark
+#### 4.2.2 Watermark
 
-The complete configuration of groupinfo is as follows:
-
-**nodeconfig3**
+The complete configuration of GroupInfo is as follows:
 
 ```java
 private Node buildMySqlExtractNode() {
@@ -360,4 +329,3 @@ private Node buildMySqlExtractNode() {
         return new GroupInfo("1", Collections.singletonList(streamInfo));
     }
 ```
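 
 When an event-time field and a watermark are configured on the extract node, the generated DDL carries a WATERMARK clause. The snippet below is a generic Flink SQL sketch: the `ts` column, the 5-second delay and the connector options are illustrative assumptions (the actual connector identifier and options used by InLong Sort may differ), not output copied from InLong Sort:
 
 ```sql
 CREATE TABLE `mysql_1` (
   `name` string,
   `age` int,
   `ts` timestamp(3),
   -- declare `ts` as the event-time attribute, tolerating 5 seconds of out-of-order data
   WATERMARK FOR `ts` AS `ts` - INTERVAL '5' SECOND
 ) with (
   'connector' = 'mysql-cdc',
   'hostname' = 'localhost',
   'username' = 'inlong',
   'password' = 'password',
   'database-name' = 'inlong',
   'table-name' = 'tableName')
 ```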
-
diff --git a/blog/2022-06-22-release-1.2.0.md b/blog/2022-06-22-release-1.2.0.md
new file mode 100644
index 000000000..efd0f114a
--- /dev/null
+++ b/blog/2022-06-22-release-1.2.0.md
@@ -0,0 +1,108 @@
+---
+title: Release InLong 1.2.0
+---
+
+Apache InLong is a one-stop integration framework for massive data that provides automatic, secure and reliable data transmission capabilities.
+InLong supports both batch and stream data processing at the same time, which offers great power to build data analysis, modeling and other real-time applications based on streaming data.
+
+## 1.2.0 Features Overview
+**The just-released 1.2.0-incubating version closes more than 410 issues, containing 30+ features and 190+ optimizations.**
+They mainly include the following:
+
+### Enhance management and control capabilities
+- Dashboard and Manager add cluster management capabilities
+- Dashboard optimizes the flow creation process
+- Manager supports plug-in extension of MQ
+
+### Extended collection nodes
+- Support collecting data from Pulsar
+- Support collecting data via MongoDB-CDC
+- Support collecting data via MySQL-CDC
+- Support collecting data via Oracle-CDC
+- Support collecting data via PostgreSQL-CDC
+- Support collecting data via SQLServer-CDC
+
+### Extended write nodes
+- Support writing data to Kafka
+- Support writing data to HBase
+- Support writing data to PostgreSQL
+- Support writing data to Oracle
+- Support writing data to MySQL
+- Support writing data to TDSQL-PostgreSQL
+- Support writing data to Greenplum
+- Support writing data to SQLServer
+
+### Support data conversion
+- Support String Split
+- Support String Regular Replace
+- Support String Regular Replace First Matched Value
+- Support Data Filter
+- Support Data Distinct
+- Support Regular Join
+
+### Enhanced system monitoring function
+- Support the reporting and management of data link heartbeat
+
+### Other optimizations
+- Support the delivery of DataProxy multi-cluster configurations
+- GitHub Action checks and pipeline optimization
+
+## 1.2.0 Features Details
+
+### Support multi-cluster management
+Manager adds a cluster management function that supports multi-cluster configuration, removing the limitation that only one set of clusters could be defined through configuration files.
+Users can create different types of clusters on the Dashboard as needed.
+
+The multi-cluster feature was mainly designed and implemented by @healchow, @luchunliang and @leezng. Thanks to the three contributors.
+
+### Enhanced collection of file data and MySQL Binlog
+Version 1.2.0 supports collecting complete file data, and also supports collecting data from a specified Binlog position in MySQL. This part of the work was done by @Greedyu.
+
+### Support whole database migration
+Sort supports migration of data across the entire database, contributed by @EMsnap.
+
+### Supports writing data in Canal format
+Support for writing data in Canal format to Kafka, contributed by @thexiay.
+
+### Optimize the HTTP request method in Manager Client
+Optimized the way HTTP requests are executed in the Manager Client, and added unit tests for the Client, which reduces code duplication and maintenance costs.
+This feature was contributed by new contributor @leosanqing.
+
+### Supports running SQL scripts
+Sort supports running SQL scripts; see [INLONG-4405](https://github.com/apache/inlong/issues/4405). Thanks to @gong for contributing this feature.
+
+### Support the reporting and management of data link heartbeat
+This version supports heartbeat reporting and management for data groups, data streams and the underlying components, which is the prerequisite for the status management of each link in the system.
+
+This feature was primarily designed and contributed by @baomingyu, @healchow and @kipshi.
+
+### Manager supports the creation of resources in multiple flow directions
+In version 1.2.0, Manager added support for creating the following storage resources:
+
+- Create Topic for Kafka (contributed by @woofyzhao)
+- Create databases and tables for Iceberg (contributed by @woofyzhao)
+- Create namespaces and tables for HBase (contributed by @woofyzhao)
+- Create databases and tables for ClickHouse (contributed by @lucaspeng12138)
+- Create indices for Elasticsearch (contributed by @lucaspeng12138)
+- Create databases and tables for PostgreSQL (contributed by @baomingyu)
+
+### Sort supports lightweight architecture
+Version 1.2.0 of Sort includes a lot of refactoring and improvements.
+By introducing Flink CDC, it supports a variety of Extract and Load nodes, and also supports data transformation (i.e. Transform).
+
+This feature contains many sub-features. The main developers are:
+@baomingyu, @EMsnap, @GanfengTan, @gong, @lucaspeng12138, @LvJiancheng, @kipshi, @thexiay, @woofyzhao, @yunqingmoswu, thank you all for your contributions.
+
+For more information, please refer to: [Analysis of InLong Sort ETL Solution](2022-06-16-inlong-sort-etl.md).
+
+### Other features and bug fixes
+For related content, please refer to the [Release Notes](https://github.com/apache/inlong/blob/master/CHANGES.md), which details the features, enhancements and bug fixes of this release.
+
+## Apache InLong follow-up planning
+
+In subsequent versions, we will support more data sources and storage targets to cover more usage scenarios, and gradually improve the usability and robustness of the system, including:
+
+- Heartbeat reporting for each component
+- Status management of data streams
+- Full-link audit support for writing to ClickHouse
+- More types of collection nodes and storage nodes
diff --git a/i18n/zh-CN/docusaurus-plugin-content-blog/apache-inlong-0.11.0.md b/i18n/zh-CN/docusaurus-plugin-content-blog/2021-11-10-release-0.11.0.md
similarity index 99%
rename from i18n/zh-CN/docusaurus-plugin-content-blog/apache-inlong-0.11.0.md
rename to i18n/zh-CN/docusaurus-plugin-content-blog/2021-11-10-release-0.11.0.md
index e3571dbc4..309b137f6 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-blog/apache-inlong-0.11.0.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-blog/2021-11-10-release-0.11.0.md
@@ -1,6 +1,5 @@
 ---
 title: 0.11.0 版本发布
-sidebar_position: 3
 ---
 
 Apache InLong(incubating) 从 0.9.0 版本开始由原 Apache TubeMQ(incubating)改名而来,伴随着名称的改变,InLong 也由单一的消息队列升级为一站式海量数据集成框架,支持了大数据领域的采集、汇聚、缓存和分拣功能,用户只需要简单的配置就可以把数据从数据源导入到实时计算引擎或者落地到离线存储。
diff --git a/i18n/zh-CN/docusaurus-plugin-content-blog/apache-inlong-0.12.0.md b/i18n/zh-CN/docusaurus-plugin-content-blog/2022-01-04-release-0.12.0.md
similarity index 99%
rename from i18n/zh-CN/docusaurus-plugin-content-blog/apache-inlong-0.12.0.md
rename to i18n/zh-CN/docusaurus-plugin-content-blog/2022-01-04-release-0.12.0.md
index d8f06eac0..5e1d5900f 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-blog/apache-inlong-0.12.0.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-blog/2022-01-04-release-0.12.0.md
@@ -1,6 +1,5 @@
 ---
 title: 0.12.0 版本发布
-sidebar_position: 2
 ---
 
 InLong(应龙) : 中国神话故事里的神兽,引流入海,借喻 InLong 系统提供数据接入能力。
@@ -35,7 +34,7 @@ Apache InLong 以腾讯内部使用的 TDBank 为原型,具有万亿级数据
 在 0.12.0 版本中,我们补齐了 FileAgent → DataProxy → Pulsar → Sort 的数据上报能力,至此,InLong 支持高性能和高可靠数据接入场景:相比高吞吐的 TubeMQ,Apache Pulsar能提供更好的数据一致性,更适用于金融、计费等对数据可靠性要求极高的场景。
 <img src="/img/pulsar-arch-zh.png" align="center" alt="Report via Pulsar"/>
 
-感谢 @healzhou、 @baomingyu、@leezng、@bruceneenhl、@ifndef-SleePy 等同学对这个特性的贡献,更多信息请参考,相关 PR 见 [INLONG-1310](https://github.com/apache/incubator-inlong/issues/1310) ,使用指引见 [Pulsar使用示例](https://inlong.apache.org/zh-CN/docs/next/quick_start/pulsar_example/) 。
+感谢 @healchow、 @baomingyu、@leezng、@bruceneenhl、@ifndef-SleePy 等同学对这个特性的贡献,更多信息请参考,相关 PR 见 [INLONG-1310](https://github.com/apache/incubator-inlong/issues/1310) ,使用指引见 [Pulsar使用示例](https://inlong.apache.org/zh-CN/docs/next/quick_start/pulsar_example/) 。
 
 #### 2. 支持 JMX 和 Prometheus 指标
 在已有的以文件输出指标之外,InLong 的各个组件开始逐步支持 JMX 和 Prometheus 指标的输出,以提升整个系统的可见性。目前包括 Agent,DataProxy,TubeMQ,Sort-Standalone 等模块已经支持上述指标,各个模块输出的指标的文档正在整理中。
diff --git a/i18n/zh-CN/docusaurus-plugin-content-blog/apache-inlong-1.1.0.md b/i18n/zh-CN/docusaurus-plugin-content-blog/2022-04-25-release-1.1.0.md
similarity index 98%
rename from i18n/zh-CN/docusaurus-plugin-content-blog/apache-inlong-1.1.0.md
rename to i18n/zh-CN/docusaurus-plugin-content-blog/2022-04-25-release-1.1.0.md
index aa5b678da..bf64cbbda 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-blog/apache-inlong-1.1.0.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-blog/2022-04-25-release-1.1.0.md
@@ -1,6 +1,5 @@
 ---
 title: 1.1.0 版本发布
-sidebar_position: 1
 ---
 
 Apache InLong(应龙)是一个一站式海量数据集成框架,提供自动、安全、可靠和高性能的数据传输能力,同时支持批和流,方便业务构建基于流式的数据分析、建模和应用。InLong支持大数据领域的采集、汇聚、缓存和分拣功能,用户只需要简单的配置就可以把数据从数据源导入到实时计算引擎或者落地到离线存储。
diff --git a/i18n/zh-CN/docusaurus-plugin-content-blog/InLong_Sort_ETL_ch.md b/i18n/zh-CN/docusaurus-plugin-content-blog/2022-06-16-inlong-sort-etl.md
similarity index 79%
rename from i18n/zh-CN/docusaurus-plugin-content-blog/InLong_Sort_ETL_ch.md
rename to i18n/zh-CN/docusaurus-plugin-content-blog/2022-06-16-inlong-sort-etl.md
index 641163f2d..79498725b 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-blog/InLong_Sort_ETL_ch.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-blog/2022-06-16-inlong-sort-etl.md
@@ -1,11 +1,10 @@
 ---
-title: 基于 Apache Flink SQL 的 InLong Sort ETL 方案解析
-sidebar_position: 4
+title: InLong Sort ETL 方案解析
 ---
 
 # 基于 Apache Flink SQL 的 InLong Sort ETL 方案解析
 
-# 一、背景
+## 1. 背景
 
 随着 Apache InLong(incubating) 的用户和开发者逐渐增多,更丰富的使用场景和低成本运营诉求越来越强烈,其中,InLong 全链路增加 Transform(T)的需求反馈最多。经过@yunqingmoswu、@EMsnap、@gong、@thexiay 社区开发者的调研和设计,完成了基于 Flink SQL 的 InLong Sort ETL 方案,本文将详细介绍该方案的实现细节。
 
@@ -17,11 +16,11 @@ sidebar_position: 4
 - 对用户来说,Flink SQL 也更加通俗易懂,特别是对使用过 SQL 用户来说,使用方式简单、熟悉,这有助于用户快速落地。
 - 对于存量实时任务的迁移,如果其原本就是 SQL 类型的任务,尤其是 Flink SQL 任务,其迁移成本极低,部分情况下甚至都不用做任何改动。
 
-注意:本方案的所有代码,可以参考[ Apache InLong Sort ](https://github.com/apache/incubator-inlong/tree/master/inlong-sort)模块,所含功能可在即将发布的 1.2.0 版本中下载使用。
+注意:本方案的所有代码,可以参考 [Apache InLong Sort](https://github.com/apache/incubator-inlong/tree/master/inlong-sort) 模块,所含功能可在即将发布的 1.2.0 版本中下载使用。
 
-# 二、方案介绍
+## 2. 方案介绍
 
-## 2.1 方案需求
+### 2.1 方案需求
 
 该方案的主要需求,是完成的 InLong Sort 模块 Transform(T)能力,包括:
 
@@ -37,11 +36,11 @@ sidebar_position: 4
 |     连接     |                       支持两表 Join                        |
 |    值替换    | 给定一个匹配值,如果该字段的值等于该值,则将其替换为目标值 |
 
-## 2.2 使用场景
+### 2.2 使用场景
 
 大数据集成的用户,在很多业务场景下都有数据转换、连接、过滤等 Transform 需求。
 
-## 2.3 设计目标
+### 2.3 设计目标
 
 本次设计需要达到以下目标:
 
@@ -50,7 +49,7 @@ sidebar_position: 4
 - 可维护性:InLong Sort 数据模型转 Flink SQL 只需实现一遍,后期有新增的功能需求时,这块不需要改动,哪怕有改动也是少量改动即可支持。
 - 可扩展性:当出现开源 Flink Connector 或者内置 Flink SQL 函数不满足需求时,可通过自定义 Flink Connector、UDF 来实现其功能扩展。
 
-## 2.4 基本概念
+### 2.4 基本概念
 
 核心概念参照概要设计中的名词解释
 
@@ -58,24 +57,24 @@ sidebar_position: 4
 | :-------------------------: | :----------------------------------------------------------: |
 |      InLong Dashboard       |                     InLong 前端管理界面                      |
 |    InLong Manager Client    | 将 Manager 当中的接口进行包装,供外部用户程序调用,不经过前端 InLong Dashboard |
-|   InLong Manager Openapi    |               Inlong manager与外部系统调用接口               |
-|   InLong Manager metaData   | Inlong manager 元数据管理,包括group、stream纬度的元数据信息 |
+|   InLong Manager Openapi    |               Inlong manager 与外部系统调用接口               |
+|   InLong Manager metaData   | InLong Manager 元数据管理,包括 group、stream 维度的元数据信息 |
 | InLong Manager task manager | Inlong manager中管理数据源采集任务模块,管理agent的任务下发,指令下发,心跳上报 |
-|        InLong Group         |     数据流组,包含多个数据流,一个Group 代表一个数据接入     |
+|        InLong Group         |     数据流组,包含多个数据流,一个 Group 代表一个数据接入     |
 |        InLong Stream        |                数据流,一个数据流有具体的流向                |
-|        Stream Source        |  流中有对应的采集端和sink端,本设计中只涉及到 stream source  |
-|         Stream Info         |  Sort中数据流向的抽象,包含该数据流的各种来源、转换、去向等  |
-|         Group Info          |  Sort中对数据流向的封装,一个GroupInfo可包含多个Stream Info  |
+|        Stream Source        |  流中有对应的采集端和 sink 端,本设计中只涉及到 stream source  |
+|         Stream Info         |  Sort 中数据流向的抽象,包含该数据流的各种来源、转换、去向等  |
+|         Group Info          |  Sort 中对数据流向的封装,一个 GroupInfo 可包含多个 Stream Info  |
 |            Node             |          数据同步中数据源、数据转换、数据去向的抽象          |
 |        Extract Node         |                     数据同步的来源端抽象                     |
 |          Load Node          |                     数据同步的去向端抽象                     |
 |     MySQL Extract Node      |                      MySQL 数据来源抽象                      |
-|       Kafka Load Node       |                      kafka 数据去向抽象                      |
+|       Kafka Load Node       |                      Kafka 数据去向抽象                      |
 |       Transform Node        |                    数据同步的转换过程抽象                    |
 |  Aggregate Transform Node   |                  数据同步聚合类转换过程抽象                  |
 |        Node Relation        |                  数据同步中各个节点关系抽象                  |
 |       Field Relation        |             数据同步中上下游节点字段间关系的抽象             |
-|          Function           |       转换函数的抽象,即数据同步T中各个T能力实现的抽象       |
+|          Function           |       转换函数的抽象,即数据同步T中各个 T 能力实现的抽象       |
 |     Substring Function      |                     字符串截取函数的抽象                     |
 |       Filter Function       |                      数据过滤函数的抽象                      |
 |       Function Param        |                        函数的入参抽象                        |
@@ -83,98 +82,94 @@ sidebar_position: 4
 |         Field Info          |                           节点字段                           |
 |       Meta FieldInfo        |                        节点元信息字段                        |
 
+### 2.5 领域模型
 
-
-## 2.5 领域模型
-
-本次设计主要涉及到以下实体:
+本次设计主要涉及到以下实体:
 
 Group、Stream、GroupInfo、StreamInfo、Node、NodeRelation、FieldRelation、Function、FilterFunction、SubstringFunction、FunctionParam、FieldInfo、MetaFieldInfo、MySQLExtractNode、KafkaLoadNode 等
 
 为了便于理解,本小节将对实体之间关系进行建模分析。领域模型的实体对应关系说明:
 
-- 一个 Group 对应一个 GroupInfo
-- 一个 Group 包含一个或者多个 Stream
-- 一个 Stream 对应一个 StreamInfo
-- 一个 GroupInfo 包含一个或者多个 StreamInfo
+- 一个 Group 对应 1 个 GroupInfo
+- 一个 Stream 对应 1 个 StreamInfo
+- 一个 Group 包含 1 个或多个 Stream
+- 一个 GroupInfo 包含 1 个或多个 StreamInfo
 - 一个 StreamInfo 包含多个 Node
-- 一个 StreamInfo 包含 1 个或者多个 NodeRelation
-- 一个 NodeRelation 包含 1 个或者多个 FieldRelation
-- 一个 NodeRelation 包含 0 个或者多个 FilterFunction
-- 一个 FieldRelation 包含 1 个Function或者一个 FieldInfo 作为来源字段,一个 FieldInfo 作为目标字段
-- 一个 Function 包含 1 个或者多个 FunctionParam
+- 一个 StreamInfo 包含 1 个或多个 NodeRelation
+- 一个 NodeRelation 包含 1 个或多个 FieldRelation
+- 一个 NodeRelation 包含 0 个或多个 FilterFunction
+- 一个 FieldRelation 包含 1 个 Function 或 1 个 FieldInfo 作为来源字段,1 个 FieldInfo 作为目标字段
+- 一个 Function 包含 1 个或多个 FunctionParam
 
-上述关系由UML对象关系图可以表示为:
+上述关系由 UML 对象关系图可以表示为:
 
 ![sort_UML](./img/sort_UML.png)
 
-## 2.6 功能用例图
+### 2.6 功能用例图
 
 ![sort-usecase](./img/sort-usecase.png)
 
-# 三、系统概要设计
+## 3. 系统概要设计
 
-## 3.1 系统架构图
+### 3.1 系统架构图
 
 ![architecture](./img/architecture.png)
 
-- Serialization: 序列化实现模块
-- Deserialization: 反序列化实现模块
-- Flink Source: 自定义Flink source实现模块
-- Flink Sink:自定义的Flink sink实现模块
-- Transformation: 自定义的Transform实现模块
-- GroupInfo: 对应 Inlong group
-- StreamInfo: 对应 inlong stream
-- Node: 对数据同步中数据来源、数据转换、数据去向的抽象
-- FlinkSQLParser: SQL解析器
+- Serialization:序列化实现模块
+- Deserialization:反序列化实现模块
+- Flink Source:自定义 Flink source实现模块
+- Flink Sink:自定义的 Flink sink 实现模块
+- Transformation:自定义的 Transform 实现模块
+- GroupInfo:对应 Inlong group
+- StreamInfo:对应 Inlong stream
+- Node:对数据同步中数据来源、数据转换、数据去向的抽象
+- FlinkSQLParser:SQL 解析器
 
-## 3.2 InLong Sort 内部运行流程图
+### 3.2 InLong Sort 内部运行流程图
 
 ![](./img/sort-operation-flow.png)
 
-## 3.3 模块设计
+### 3.3 模块设计
 
 本次设计只对原有系统增加 Flink Connector、FlinkSQL Generator 两个模块,对 Data Model 模块有修改。
 
-### 3.3.1 模块结构
+#### 3.3.1 模块结构
 
 ![](./img/sort-module-structure.png)
 
-### 3.3.2 模块划分
+#### 3.3.2 模块划分
 
 重要模块划分说明:
 
 |       名称        |                             说明                             |
 | :---------------: | :----------------------------------------------------------: |
 |  FlinkSQLParser   |       用于生成 FlinkSQL 核心类,包含 GroupInfo 的引用        |
-|     GroupInfo     | Sort内部对 inlong group 的抽象,用于封装整个 inlong group 同步相关信息,包含对 List\<StreamInfo\> 的引用 |
-|    StreamInfo     | Sort内部对 inlong stream 的抽象,用于封装 inlong stream 同步相关信息,包含List\<Node\>、List\<NodeRelation\> 的引用 |
+|     GroupInfo     | Sort 内部对 InlongGroup 的抽象,用于封装整个 InlongGroup 同步相关信息,包含对 List\<StreamInfo\> 的引用 |
+|    StreamInfo     | Sort 内部对 InlongStream 的抽象,用于封装 InlongStream 同步相关信息,包含List\<Node\>、List\<NodeRelation\> 的引用 |
 |       Node        | 同步节点的顶层接口,它的各个子类实现主要用于对同步数据源、转换节点的数据封装 |
-|    ExtractNode    |               数据extract节点抽象,继承自 Node                |
-|     LoadNode      |                 数据load节点抽象,继承自 Node                 |
-|   TransformNode   |                数据转换节点抽象,继承自 Node                 |
+|    ExtractNode    |               数据extract节点抽象,继承自 Node                |
+|     LoadNode      |                 数据load节点抽象,继承自 Node                 |
+|   TransformNode   |                数据转换节点抽象,继承自 Node                  |
 |   NodeRelation    |                       定义节点间的关系                       |
 |   FieldRelation   |                     定义节点间字段的关系                     |
 |     Function      |                     T能力执行函数的抽象                      |
-|  FilterFunction   |         用于数据过滤的 Function 抽象,继承自 Function         |
-| SubstringFunction |         用于字符串截取 Function 抽象,继承自 Function         |
+|  FilterFunction   |         用于数据过滤的 Function 抽象,继承自 Function         |
+| SubstringFunction |         用于字符串截取 Function 抽象,继承自 Function         |
 |   FunctionParam   |                      用于函数参数的抽象                      |
-|   ConstantParam   |           函数常量参数的封装,继承自 FunctionParam           |
-|     FieldInfo     |   节点字段的封装,也可做函数入参使用,继承自 FunctionParam    |
-|   MetaFieldInfo   | 内置字段的封装,目前主要用于 canal-json 的元数据字段场景,继承自 FieldInfo |
+|   ConstantParam   |           函数常量参数的封装,继承自 FunctionParam            |
+|     FieldInfo     |   节点字段的封装,也可做函数入参使用,继承自 FunctionParam       |
+|   MetaFieldInfo   | 内置字段的封装,目前主要用于 canal-json 的元数据字段场景,继承自 FieldInfo |
 
-# 四、系统详细设计
+## 4. 系统详细设计
 
-下面具体以 MySQL 同步数据到Kafka为例来说明 SQL 生成的原理
+下面以同步 MySQL 中的数据到 Kafka 为例来说明 SQL 的生成原理。
 
-## 4.1 Node 生成 SQL
+### 4.1 Node 生成 SQL
 
-### 4.1.1 ExtractNode 生成 SQL
+#### 4.1.1 ExtractNode 生成 SQL
 
 节点配置为:
 
-**nodeconfig1**
-
 ```java
  private Node buildMySQLExtractNode() {
         List<FieldInfo> fields = Arrays.asList(
@@ -188,9 +183,7 @@ Group、Stream、GroupInfo、StreamInfo、Node、NodeRelation、FieldRelation、
     }
 ```
 
-生成的SQL为:
-
-**ss**
+生成的 SQL 为:
 
 ```sql
 CREATE TABLE `mysql_1` (`name` string,`age` int) 
@@ -201,16 +194,11 @@ with
 'password' = 'password',
 'database-name' = 'inlong',
 'table-name' = 'tableName')
-
 ```
 
+#### 4.1.2 TransformNode 生成 SQL
 
-
-### 4.1.2 TransformNode 生成 SQL
-
-节点配置为:
-
-**nodeconfig2**
+节点配置为:
 
 ```java
  List<FilterFunction> filters = Arrays.asList(
@@ -221,25 +209,17 @@ with
                         new FieldInfo("age", new IntFormatInfo()),
                         MoreThanOrEqualOperator.getInstance(), new ConstantParam(18))
         );
-
 ```
 
-生成的SQL为:
-
-**ss2**
+生成的 SQL 为:
 
 ```sql
 SELECT `name` AS `name`,`age` AS `age` FROM `mysql_1` WHERE `age` < 25 AND `age` >= 18
-
 ```
 
+#### 4.1.3 LoadNode 生成 SQL
 
-
-### 4.1.3 LoadNode 生成 SQL
-
-节点配置为:
-
-**nodeconfig3**
+节点配置为:
 
 ```java
  private Node buildKafkaLoadNode(FilterStrategy filterStrategy) {
@@ -267,14 +247,9 @@ SELECT `name` AS `name`,`age` AS `age` FROM `mysql_1` WHERE `age` < 25 AND `age`
                 new CanalJsonFormat(), null,
                 null, "id");
     }
-
 ```
 
-
-
-生成的SQL为:
-
-**ss3**
+生成的 SQL 为:
 
 ```sql
 CREATE TABLE `kafka_3` (`name` string,`age` int) 
@@ -289,29 +264,23 @@ with (
 'canal-json-inlong.timestamp-format.standard' = 'SQL',
 'canal-json-inlong.map-null-key.literal' = 'null'
 )
-
 ```
 
-## 4.2 字段 T 生成 SQL
+### 4.2 字段 T 生成 SQL
 
-### 4.2.1 过滤算子
+#### 4.2.1 过滤算子
 
 相关配置见 4.1 节点配置
 
-生成的SQL分别为:
-
-**ss4**
+生成的 SQL 为:
 
 ```sql
 INSERT INTO `kafka_3` SELECT `name` AS `name`,`age` AS `age` FROM `mysql_1` WHERE `age` < 25 AND `age` >= 18
-
 ```
 
-### 4.2.2 水位线
+#### 4.2.2 水位线
 
-GroupInfo 完整配置如下:
-
-**nodeconfig3**
+GroupInfo 完整配置如下:
 
 ```java
 private Node buildMySqlExtractNode() {
@@ -359,6 +328,4 @@ private Node buildMySqlExtractNode() {
                 buildNodeRelation(Collections.singletonList(input), Collections.singletonList(output))));
         return new GroupInfo("1", Collections.singletonList(streamInfo));
     }
-
 ```
-
diff --git a/i18n/zh-CN/docusaurus-plugin-content-blog/2022-06-22-release-1.2.0.md b/i18n/zh-CN/docusaurus-plugin-content-blog/2022-06-22-release-1.2.0.md
new file mode 100644
index 000000000..c3e0cfa26
--- /dev/null
+++ b/i18n/zh-CN/docusaurus-plugin-content-blog/2022-06-22-release-1.2.0.md
@@ -0,0 +1,101 @@
+---
+title: 1.2.0 版本发布
+---
+
+Apache InLong(应龙)是一个一站式海量数据集成框架,提供自动、安全、可靠和高性能的数据传输能力,同时支持批和流,方便业务构建基于流式的数据分析、建模和应用。
+InLong 支持大数据领域的采集、汇聚、缓存和分拣功能,用户只需要简单的配置就可以把数据从数据源导入到实时计算引擎或者落地到离线存储。
+
+## 1.2.0 版本特性总览
+**刚刚发布的 1.2.0-incubating 版本关闭了约 410+ 个 issue,包含 30+ 个特性和 190+ 个优化。**
+主要包括以下内容:
+
+### 增强管控能力
+- Dashboard 和 Manager 增加集群管理能力
+- Dashboard 优化数据流的创建流程
+- Manager 支持 MQ 的插件化扩展
+
+### 扩展采集节点
+- 支持采集 Pulsar 中的数据
+- 支持采集 MongoDB-CDC 中的数据
+- 支持采集 MySQL-CDC 中的数据
+- 支持采集 Oracle-CDC 中的数据
+- 支持采集 PostgreSQL-CDC 中的数据
+- 支持采集 SQLServer-CDC 中的数据
+
+### 扩展写入节点
+- 支持将数据写入 Kafka
+- 支持将数据写入 HBase
+- 支持将数据写入 PostgreSQL
+- 支持将数据写入 Oracle
+- 支持将数据写入 MySQL
+- 支持将数据写入 TDSQL-PostgreSQL
+- 支持将数据写入 Greenplum
+- 支持将数据写入 SQLServer
+
+### 支持数据转换
+- 支持字符串切割
+- 支持字符串正则替换
+- 支持字符串正则替换第一个匹配的值
+- 支持数据过滤
+- 支持数据去重
+- 支持 Regular Join
+
+### 增强系统监控功能
+- 支持数据链路心跳的上报和管理
+
+### 其他优化
+- 支持 DataProxy 多集群配置的下发
+- GitHub Action 检查、流水线优化
+
+## 1.2.0 版本特性介绍
+
+### 支持多集群管理
+Manager 增加了集群管理功能,支持多集群配置,解决了只能通过配置文件定义一套集群的限制,用户可根据需要在 Dashboard 创建不同类型的集群。
+
+多集群功能主要由 @healchow、@luchunliang、@leezng 设计和实现,感谢三位贡献者。
+
+### 增强对文件数据和 MySQL Binlog 的采集
+1.2.0 版本支持采集完整的文件数据,同时也支持从 MySQL 的指定 Binlog 位置开始采集数据。该部分工作由 @Greedyu 完成。
+
+### 支持整库迁移
+Sort 支持对整个数据库中的数据进行迁移,此特性由 @EMsnap 贡献。
+
+### 支持写入 Canal 格式的数据
+支持向 Kafka 写入 Canal 格式的数据,此特性由 @thexiay 贡献。
+
+### 优化 Manager Client 中的 HTTP 请求方式
+优化了 Manager Client 中执行 HTTP 请求的方式,并为 Client 增加单元测试,在减少重复代码的同时,降低维护成本。
+此特性由新加入的贡献者 @leosanqing 贡献。
+
+### 支持运行 SQL 脚本
+Sort 支持运行 SQL 脚本,详见 [INLONG-4405](https://github.com/apache/inlong/issues/4405) ,感谢 @gong 贡献此特性。
+
+### 支持数据链路心跳的上报和管理
+此版本支持数据分组、数据流及底层组件的心跳上报和管理,是后续系统各环节的状态管理的前提。此特性主要由 @baomingyu、@healchow 和 @kipshi 设计和贡献。
+
+### Manager 支持创建多种流向的资源
+1.2.0 版本中 Manager 增加了对部分存储资源的创建:
+
+- 创建 Kafka 的 Topic(@woofyzhao 贡献)
+- 创建 Iceberg 的库和表(@woofyzhao 贡献)
+- 创建 HBase 的命名空间和表(@woofyzhao 贡献)
+- 创建 ClickHouse 的库和表(@lucaspeng12138 贡献)
+- 创建 Elasticsearch 的索引(@lucaspeng12138 贡献)
+- 创建 PostgreSQL 的库和表(@baomingyu 贡献)
+
+### Sort 支持轻量化架构
+1.2.0 版本的 Sort 做了大量重构和提升,通过引入 Flink-CDC,支持多种 Extract 和 Load 节点,同时也支持数据的转换(即 Transform)。
+
+此特性包含非常多的子特性,主要的开发者有:@baomingyu,@EMsnap,@GanfengTan,@gong,@lucaspeng12138,@LvJiancheng,@kipshi,@thexiay,@woofyzhao,@yunqingmoswu,感谢各位的贡献。
+
+更多特性信息,请参考:[InLong Sort ETL 方案解析](./2022-06-16-inlong-sort-etl.md)。
+
+### 其他特性及问题修复
+相关内容请参考 [版本说明](https://github.com/apache/inlong/blob/master/CHANGES.md) ,其中详细列出了此版本的特性、提升和 Bug 修复。
+
+## Apache InLong 后续规划
+后续版本,我们扩展更多的数据源端和存储端,覆盖更多的使用场景,并逐步提升系统的易用性和健壮性,主要包括:
+- 各组件的心跳上报
+- 数据链路的状态管理
+- 全链路审计支持写入 ClickHouse
+- 扩展更多类型的采集节点和存储节点
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/extract_node/overview.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/extract_node/overview.md
index c113f1902..4d473d9ad 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/extract_node/overview.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data_node/extract_node/overview.md
@@ -36,7 +36,7 @@ Extract 节点列表是一组基于 <a href="https://flink.apache.org/">Apache F
 - 将下载并解压后的 Sort Connectors jars 放到 `FLINK_HOME/lib/`。
 - 重启 Flink 集群。
 
-下面例子展示了如何在 [Flink SQL Client](https://ci.apache.org/projects/flink/flink-docs-release-1.13/dev/table/sqlClient.html) 创建 MySQL Extarct 节点,并从中查询数据:
+下面例子展示了如何在 [Flink SQL Client](https://ci.apache.org/projects/flink/flink-docs-release-1.13/dev/table/sqlClient.html) 创建 MySQL Extract 节点,并从中查询数据:
 
 ```sql
 -- 创建一个 MySQL Extract 节点
@@ -57,4 +57,4 @@ CREATE TABLE mysql_extract_node (
 );
 
 SELECT id, name, age, weight FROM mysql_extract_node;
-```
\ No newline at end of file
+```