Posted to commits@linkis.apache.org by ca...@apache.org on 2023/03/28 02:11:28 UTC

[linkis-website] branch dev updated: update token desc (#690)

This is an automated email from the ASF dual-hosted git repository.

casion pushed a commit to branch dev
in repository https://gitbox.apache.org/repos/asf/linkis-website.git


The following commit(s) were added to refs/heads/dev by this push:
     new ec420f5be4 update token desc (#690)
ec420f5be4 is described below

commit ec420f5be4da61ca0b07df194f19d593aac05b4c
Author: aiceflower <ki...@gmail.com>
AuthorDate: Tue Mar 28 10:11:22 2023 +0800

    update token desc (#690)
    
    * update token and remove v
    
    * update token and remove v
    
    * update release note
    
    * add token modify background
    
    ---------
    
    Co-authored-by: aiceflower <ki...@sina.com>
---
 docs/about/release.md                              |  23 -
 docs/deployment/deploy-quick.md                    |  66 ++-
 docs/engine-usage/elasticsearch.md                 |   2 +-
 docs/engine-usage/flink.md                         |   2 +-
 docs/engine-usage/jdbc.md                          |   2 +-
 docs/engine-usage/openlookeng.md                   |   2 +-
 docs/engine-usage/pipeline.md                      |   2 +-
 docs/engine-usage/presto.md                        |   2 +-
 docs/engine-usage/seatunnel.md                     |   2 +-
 docs/engine-usage/spark.md                         | 541 --------------------
 docs/engine-usage/sqoop.md                         |   2 +-
 docs/engine-usage/trino.md                         |   2 +-
 docs/feature/overview.md                           |   2 +-
 download/release-notes-1.3.2.md                    |   4 +-
 .../current/release-notes-1.3.2.md                 |   4 +-
 .../current/deployment/deploy-quick.md             |  58 +++
 .../current/engine-usage/elasticsearch.md          |   2 +-
 .../current/engine-usage/flink.md                  |   2 +-
 .../current/engine-usage/jdbc.md                   |   2 +-
 .../current/engine-usage/openlookeng.md            |   2 +-
 .../current/engine-usage/pipeline.md               |   2 +-
 .../current/engine-usage/presto.md                 |   2 +-
 .../current/engine-usage/seatunnel.md              |   2 +-
 .../current/engine-usage/spark.md                  | 543 +--------------------
 .../current/engine-usage/sqoop.md                  |   2 +-
 .../current/engine-usage/trino.md                  |   2 +-
 static/Images-zh/deployment/token-list.png         | Bin 0 -> 122086 bytes
 static/Images/deployment/token-list.png            | Bin 0 -> 121631 bytes
 28 files changed, 146 insertions(+), 1131 deletions(-)

diff --git a/docs/about/release.md b/docs/about/release.md
deleted file mode 100644
index 5e50fc0bd4..0000000000
--- a/docs/about/release.md
+++ /dev/null
@@ -1,23 +0,0 @@
----
-title: Version Overview
-sidebar_position: 2
----
-
-- [Supports spark task submission Jar package function](/engine-usage/spark.md)
-- [Supports the Spark task to parse json configurations and perform ETL operations](/engine-usage/spark.md)
-- [Supports loading specific UDFs by UDF ID](/user-guide/control-panel/udf-function.md)
-- [Version Release-Notes](/download/release-notes-1.3.2)
-
-
-## Parameter changes
-
-| module name (service name) | type | parameter name | default value | description |
-| ----------- | ----- | ------------------------- | ---------------- |-------------- |
-| mg-eureka | new | eureka.instance.metadata-map.linkis.app.version  | ${linkis.app.version} | Eureka metadata Report Linkis application version information|
-| mg-eureka | new | eureka.instance.metadata-map.linkis.conf.version | none | Eureka metadata Reports the Linkis service version |
-| mg-eureka | update | eureka.client.registry-fetch-interval-seconds | 8 | Interval for the Eureka Client to retrieve service registration information (seconds) |
-| mg-eureka | new | eureka.instance.lease-renewal-interval-in-seconds | 4 | Frequency at which the eureka client sends heartbeat messages to the server (in seconds) |
-| mg-eureka | new | eureka.instance.lease-expiration-duration-in-seconds | 12 | eureka timeout for waiting for the next heartbeat (seconds) |
-
-## Database table changes
-For details, see the upgrade schema `db/upgrade/1.3.2_schema` file in the corresponding branch of the code warehouse (https://github.com/apache/linkis)
diff --git a/docs/deployment/deploy-quick.md b/docs/deployment/deploy-quick.md
index e6ce809b4f..4386b0fa3c 100644
--- a/docs/deployment/deploy-quick.md
+++ b/docs/deployment/deploy-quick.md
@@ -210,6 +210,64 @@ HADOOP_KERBEROS_ENABLE=true
 HADOOP_KEYTAB_PATH=/appcom/keytab/
 ```
 
+### 2.4 Configure Token
+
+Linkis originally shipped with fixed default Tokens that were too short, which posed a security risk. Starting with Linkis 1.3.2, Tokens are randomly generated and their length is increased.
+
+New Token format: application abbreviation plus a 32-character random string, e.g. BML-928a721518014ba4a28735ec2a0da799.
+
+A Token may be used by the Linkis services themselves, for example when executing tasks through the Shell or uploading files to BML, or by other applications such as DSS and Qualitis when they access Linkis.
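+
+For example, other applications typically pass their Token in the `Token-Code` request header and the acting user in the `Token-User` header when calling a Linkis gateway API. A minimal sketch (gateway host, port, user and Token value are placeholders for your environment):
+
+```shell
+# Hypothetical example: submit a simple SQL task through the Linkis gateway using Token authentication
+curl -X POST "http://<gateway-host>:<gateway-port>/api/rest_j/v1/entrance/submit" \
+  -H "Content-Type: application/json" \
+  -H "Token-Code: BML-928a721518014ba4a28735ec2a0da799" \
+  -H "Token-User: hadoop" \
+  -d '{"executionContent":{"code":"show databases","runType":"sql"},"labels":{"engineType":"spark-2.4.3","userCreator":"hadoop-IDE"}}'
+```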
+
+#### View Token
+**View via SQL statement**
+```sql
+select * from linkis_mg_gateway_auth_token;
+```
+**View via Admin Console**
+
+Log in to the management console -> basic data management -> token management
+![](/Images/deployment/token-list.png)
+
+#### Check Token configuration
+
+When the Linkis services themselves use a Token, the Token in the configuration file must match the Token in the database. Tokens are matched by the application abbreviation prefix.
+
+Token configuration in the $LINKIS_HOME/conf/linkis.properties file
+
+```
+linkis.configuration.linkisclient.auth.token.value=BML-928a721518014ba4a28735ec2a0da799
+wds.linkis.client.common.tokenValue=BML-928a721518014ba4a28735ec2a0da799
+wds.linkis.bml.auth.token.value=BML-928a721518014ba4a28735ec2a0da799
+wds.linkis.context.client.auth.value=BML-928a721518014ba4a28735ec2a0da799
+wds.linkis.errorcode.auth.token=BML-928a721518014ba4a28735ec2a0da799
+
+wds.linkis.client.test.common.tokenValue=LINKIS_CLI-215af9e265ae437ca1f070b17d6a540d
+
+wds.linkis.filesystem.token.value=WS-52bce72ed51741c7a2a9544812b45725
+wds.linkis.gateway.access.token=WS-52bce72ed51741c7a2a9544812b45725
+
+wds.linkis.server.dsm.auth.token.value=DSM-65169e8e1b564c0d8a04ee861ca7df6e
+```
+
+Token configuration in the $LINKIS_HOME/conf/linkis-cli/linkis-cli.properties file
+```
+wds.linkis.client.common.tokenValue=BML-928a721518014ba4a28735ec2a0da799
+```
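+
+As a quick sanity check, the Tokens configured in these files can be compared with the Tokens stored in the database. A minimal sketch (database connection details are placeholders; the table is the one queried above):
+
+```shell
+# List the Token-related properties currently configured for the Linkis services
+grep -iE "token" $LINKIS_HOME/conf/linkis.properties $LINKIS_HOME/conf/linkis-cli/linkis-cli.properties
+
+# List the Tokens registered in the database for comparison
+mysql -h <db-host> -u <db-user> -p <linkis-db> -e "select * from linkis_mg_gateway_auth_token;"
+```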
+
+#### Notice
+
+**Full installation**
+
+For a full installation of the new Linkis version, the install.sh script automatically updates the configuration files so that they stay consistent with the Tokens in the database, so the Tokens of the Linkis services themselves do not need to be modified. Each application can query the new Tokens through the management console and use them.
+
+**Version upgrade**
+
+During a version upgrade the database Tokens are not changed, so neither the configuration files nor the application Tokens need to be modified.
+
+**Token expiration problem**
+
+If you encounter a "token is not valid or stale" error, check whether the Token is configured correctly; the Token can also be queried through the management console.
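+
+If a request is rejected with such an error, one way to narrow it down is to look up the Token that the failing application should be using directly in the database. A sketch, assuming the token column is named `token_name` (use `select *` as shown above if your schema differs):
+
+```shell
+# Look up the Token registered for a given application prefix, e.g. BML
+mysql -h <db-host> -u <db-user> -p <linkis-db> \
+  -e "select * from linkis_mg_gateway_auth_token where token_name like 'BML-%';"
+```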
+
 ## 3. Install and start
 
 ### 3.1 Execute the installation script:
@@ -440,22 +498,22 @@ $ tree linkis-package/lib/linkis-engineconn-plugins/ -L 3
 linkis-package/lib/linkis-engineconn-plugins/
 ├── hive
 │ ├── dist
-│ │ └── v2.3.3 #version is 2.3.3 engineType is hive-2.3.3
+│ │ └── 2.3.3 #version is 2.3.3 engineType is hive-2.3.3
 │ └── plugin
 │ └── 2.3.3
 ├── python
 │ ├── dist
-│ │ └── vpython2
+│ │ └── python2
 │ └── plugin
 │ └── python2 #version is python2 engineType is python-python2
 ├── shell
 │ ├── dist
-│ │ └── v1
+│ │ └── 1
 │ └── plugin
 │ └── 1
 └── spark
     ├── dist
-    │ └── v2.4.3
+    │ └── 2.4.3
     └── plugin
         └── 2.4.3
 ````
diff --git a/docs/engine-usage/elasticsearch.md b/docs/engine-usage/elasticsearch.md
index 8edd3f282e..4e950458ae 100644
--- a/docs/engine-usage/elasticsearch.md
+++ b/docs/engine-usage/elasticsearch.md
@@ -66,7 +66,7 @@ The directory structure after uploading is as follows
 linkis-engineconn-plugins/
 ├── elasticsearch
 │   ├── dist
-│ │ └── v7.6.2
+│ │ └── 7.6.2
 │   │       ├── conf
 │ │ └── lib
 │   └── plugin
diff --git a/docs/engine-usage/flink.md b/docs/engine-usage/flink.md
index 71020f2a00..cd876ef841 100644
--- a/docs/engine-usage/flink.md
+++ b/docs/engine-usage/flink.md
@@ -62,7 +62,7 @@ The directory structure after uploading is as follows
 linkis-engineconn-plugins/
 ├── flink
 │   ├── dist
-│ │ └── v1.12.2
+│ │ └── 1.12.2
 │   │       ├── conf
 │ │ └── lib
 │   └── plugin
diff --git a/docs/engine-usage/jdbc.md b/docs/engine-usage/jdbc.md
index 536a6c6796..da7e3665e6 100644
--- a/docs/engine-usage/jdbc.md
+++ b/docs/engine-usage/jdbc.md
@@ -63,7 +63,7 @@ The directory structure after uploading is as follows
 linkis-engineconn-plugins/
 ├── jdbc
 │   ├── dist
-│ │ └── v4
+│ │ └── 4
 │   │       ├── conf
 │ │ └── lib
 │   └── plugin
diff --git a/docs/engine-usage/openlookeng.md b/docs/engine-usage/openlookeng.md
index fd54e634d2..cfc29676cd 100644
--- a/docs/engine-usage/openlookeng.md
+++ b/docs/engine-usage/openlookeng.md
@@ -70,7 +70,7 @@ The directory structure after uploading is as follows
 linkis-engineconn-plugins/
 ├── openlookeng
 │   ├── dist
-│ │ └── v1.5.0
+│ │ └── 1.5.0
 │   │       ├── conf
 │ │ └── lib
 │   └── plugin
diff --git a/docs/engine-usage/pipeline.md b/docs/engine-usage/pipeline.md
index 2ff03828f1..b1fdbbcf55 100644
--- a/docs/engine-usage/pipeline.md
+++ b/docs/engine-usage/pipeline.md
@@ -34,7 +34,7 @@ The directory structure after uploading is as follows
 linkis-engineconn-plugins/
 ├── pipeline
 │   ├── dist
-│ │ └── v1
+│ │ └── 1
 │   │       ├── conf
 │ │ └── lib
 │   └── plugin
diff --git a/docs/engine-usage/presto.md b/docs/engine-usage/presto.md
index 37045635a2..3da7cd73d2 100644
--- a/docs/engine-usage/presto.md
+++ b/docs/engine-usage/presto.md
@@ -69,7 +69,7 @@ The directory structure after uploading is as follows
 linkis-engineconn-plugins/
 ├── presto
 │   ├── dist
-│ │ └── v0.234
+│ │ └── 0.234
 │   │       ├── conf
 │ │ └── lib
 │   └── plugin
diff --git a/docs/engine-usage/seatunnel.md b/docs/engine-usage/seatunnel.md
index 68917ac4d2..d87b9bb33f 100644
--- a/docs/engine-usage/seatunnel.md
+++ b/docs/engine-usage/seatunnel.md
@@ -68,7 +68,7 @@ The directory structure after uploading is as follows
 linkis-engineconn-plugins/
 ├── seatunnel
 │ ├── dist
-│ │ └── v2.1.2
+│ │ └── 2.1.2
 │ │ ├── conf
 │ │ └── lib
 │ └── plugin
diff --git a/docs/engine-usage/spark.md b/docs/engine-usage/spark.md
index 93dc6f9b5e..afea35669f 100644
--- a/docs/engine-usage/spark.md
+++ b/docs/engine-usage/spark.md
@@ -270,545 +270,4 @@ insert into `linkis_ps_configuration_config_value` (`config_key_id`, `config_val
 (select `relation`.`config_key_id` AS `config_key_id`, '' AS `config_value`, `relation`.`engine_type_label_id` AS `config_label_id` FROM linkis_ps_configuration_key_engine_relation relation
 INNER JOIN linkis_cg_manager_label label ON relation.engine_type_label_id = label.id AND label.label_value = @SPARK_ALL);
 
-```
-
-## 5. Introduction to data_calc (data calculation)
-
-Spark performs ETL operations by parsing a JSON configuration; the overall design is shown below:
-
-![data_calc](/Images/EngineUsage/data_calc.svg)
-
-The data_calc JSON configuration supports two modes: *array* (the default) and *group*.
-
-Plugin types: source, transformation, sink
-
-**data_calc example**
-
-```
-POST http://localhost:8087/api/rest_j/v1/entrance/submit
-Content-Type: application/json
-Token-Code: dss-AUTH
-Token-User: linkis
-
-{
-  "executionContent": {
-    // Code is the escaped json. See the following for detailed configuration instructions
-    "code": "{\"plugins\":[{\"type\":\"source\",\"name\":\"jdbc\",\"config\":{\"resultTable\":\"spark_source_01\",\"url\":\"jdbc:mysql://localhost:3306/\",\"driver\":\"com.mysql.jdbc.Driver\",\"user\":\"xi_root\",\"password\":\"123456\",\"query\":\"select * from linkis.linkis_cg_manager_label\",\"options\":{}}},{\"type\":\"transformation\",\"name\":\"sql\",\"config\":{\"resultTable\":\"spark_transform_01\",\"sql\":\"select * from spark_source_01 limit 100\"}},{\"type\":\"sink\",\"name\": [...]
-    "runType": "data_calc"
-  },
-  "params": {
-    "variable": {},
-    "configuration": {
-      // Startup parameter
-      "startup": {
-        "spark.executor.memory": "1g",
-        "spark.driver.memory": "1g",
-        "spark.executor.cores": "1",
-        "spark.executor.instances": 1
-      }
-    }
-  },
-  "labels": {
-    "engineType": "spark-2.4.3",
-    "userCreator": "linkis-IDE"
-  }
-}
-```
-
-### 5.1 Mode Array
-
-Each plugin entry has three fields: `name` is the plugin name, `type` is the plugin type, and `config` is the plugin configuration.
-
-```json
-{
-    "plugins": [
-        {
-            "type": "source",
-            "name": "jdbc",
-            "config": {
-                "resultTable": "spark_source_01",
-                "url": "jdbc:mysql://localhost:3306/",
-                "driver": "com.mysql.jdbc.Driver",
-                "user": "xi_root",
-                "password": "123456",
-                "query": "select * from linkis.linkis_cg_manager_label",
-                "options": {
-                }
-            }
-        },
-        {
-            "type": "transformation",
-            "name": "sql",
-            "config": {
-                "resultTable": "spark_transform_01",
-                "sql": "select * from spark_source_01 limit 100"
-            }
-        },
-        {
-            "type": "sink",
-            "name": "file",
-            "config": {
-                "sourceTable": "spark_transform_01",
-                "path": "hdfs:///tmp/data/testjson",
-                "serializer": "json",
-                "partitionBy": [
-                    "label_key"
-                ],
-                "saveMode": "overwrite"
-            }
-        }
-    ]
-}
-
-```
-
-### 5.2 Mode Group
-
-Each plugin entry has two fields: `name` is the plugin name and `config` is the plugin configuration.
-
-The configuration is divided into three parts:
-
-1. sources: configures the data sources
-2. transformations: configures the data transformations
-3. sinks: configures the data sinks
-
-```json
-{
-    "sources": [
-        {
-            "name": "jdbc",
-            "config": {
-                "resultTable": "spark_source_table_00001",
-                "url": "jdbc:mysql://localhost:3306/",
-                "driver": "com.mysql.jdbc.Driver",
-                "user": "test_db_rw",
-                "password": "123456",
-                "query": "select * from test_db.test_table",
-                "options": {
-                }
-            }
-        },
-        {
-           "name": "file",
-           "config": {
-              "resultTable": "spark_source_table_00002",
-              "path": "hdfs:///data/tmp/testfile.csv",
-              "serializer": "csv",
-              "columnNames": ["type", "name"],
-              "options": {
-              }
-           }
-        }
-    ],
-    "transformations": [
-        {
-            "name": "sql",
-            "config": {
-                "resultTable": "spark_transform_00001",
-                "sql": "select * from spark_source_table_00001 t1 join spark_source_table_00002 t2 on t1.type=t2.type where t1.id > 100 limit 100"
-            }
-        }
-    ],
-    "sinks": [
-        {
-            "name": "file",
-            "config": {
-                "sourceTable": "spark_transform_00001",
-                "path": "hdfs:///tmp/data/test_json",
-                "serializer": "json",
-                "partitionBy": [
-                    "label_key"
-                ],
-                "saveMode": "overwrite"
-            }
-        }
-    ]
-}
-```
-
-### 5.3 Plugin type description
-
-#### 5.3.1 Source configuration
-
-Corresponds to data reading operations; data can be read from files and from JDBC. **Only the table of the current source can be used; temporary tables registered before the current configuration cannot be used.**
-
-**Public configuration**
-
-| **field name**   | **introduction**                                                     | **field type**        | **required** | **default**      |
-| ------------ | ------------------------------------------------------------ | ------------------- | ------------ | --------------- |
-| resultTable  | register table name for `transform` / `sink`                         | String              | Yes           | -               |
-| persist      | spark persist                                                     | Boolean             | No           | false           |
-| storageLevel | spark storageLevel                                                     | String              | No           | MEMORY_AND_DISK |
-| options      | [spark official](https://spark.apache.org/docs/latest/sql-data-sources.html) | Map<String, String> | No           |                 |
-
-##### 5.3.1.1 file
-
-**Fields**
-
-| **field name**   | **introduction**    | **field type**        | **required** | **default**      |
-| ----------- | ------------------------ | ------------ | ------------ | ---------- |
-| path        | File path, default is `hdfs`                 | String       | Yes           | -          |
-| serializer  | file format, default is   `parquet`          | String       | Yes           | `parquet`    |
-| columnNames | Mapped field name             | String[]     | No           | -          |
-
-**Examples**
-
-```JSON
-{
-    "name": "file",
-    "config": {
-        "resultTable": "spark_source_table_00001",
-        "path": "hdfs:///data/tmp/test_csv_/xxx.csv",
-        "serializer": "csv",
-        "columnNames": ["id", "name"],
-        "options": {
-            "key": "value"
-        }
-    }
-}
-```
-
-##### 5.3.1.2 jdbc
-
-**Fields**
-
-| **field name**   | **introduction**                                                     | **field type**        | **required** | **default**      |
-| ---------- | ---------------------------------------------------- | ------------ | ------------ | ---------- |
-| url        | jdbc url                                 | String       | Yes           | -          |
-| driver     | driver class                                | String       | Yes           | -          |
-| user       | username                                               | String       | Yes           | -          |
-| password   | password                                                 | String       | Yes           | -          |
-| query      | The functions used in the query must be supported by the datasource | String       | Yes           | -          |
-
-**Examples**
-
-```JSON
-{
-    "name": "jdbc",
-    "config": {
-        "resultTable": "spark_source_table_00001",
-        "url": "jdbc:mysql://localhost:3306/",
-        "driver": "com.mysql.jdbc.Driver",
-        "user": "local_root",
-        "password": "123456",
-        "query": "select a.xxx, b.xxxx from test_table where id > 100",
-        "options": {
-            "key": "value"
-        }
-    }
-}
-```
-
-##### 5.3.1.3 managed_jdbc
-
-Data source configured in Linkis. The data source connection information will be obtained from linkis and converted to jdbc for execution.
-
-**Fields**
-
-| **field name**   | **introduction**     | **field type**        | **required** | **default**      |
-| ---------- |----------------------------| ------------ | ------------ | ---------- |
-| datasource | Data source name configured in Linkis                      | String       | Yes           | -          |
-| query      | The functions used in the query must be supported by the selected datasource | String       | Yes           | -          |
-
-**Examples**
-
-```JSON
-{
-    "name": "jdbc",
-    "config": {
-        "datasource": "mysql_test_db",
-        "query": "select a.xxx, b.xxxx from table where id > 100",
-        "options": {
-            "key": "value"
-        }
-    }
-}
-```
-
-#### 5.3.2 Transform Configuration
-
-The data transformation logic. **All `resultTable`s registered before the current configuration can be used.**
-
-**Public configuration**
-
-| **field name**   | **introduction**                    | **field type**        | **required** | **default**      |
-| ------------ | ---------------------------------------------- | ------------ | ------------ | --------------- |
-| sourceTable  | `resultTable` registered by the above `source` / `transform` | String       | No           | -               |
-| resultTable  | Register `resultTable` for the below `transform` / `sink`            | String       | Yes           | -               |
-| persist      | spark persist                                                     | Boolean             | No           | false           |
-| storageLevel | spark storageLevel                                                     | String              | No           | MEMORY_AND_DISK |
-
-##### 5.3.2.1 sql
-
-**Fields**
-
-| **field name**   | **introduction**                    | **field type**        | **required** | **default**      |
-| ---------- | ------------------------------------------------------------ | ------------ | ------------ | ---------- |
-| sql        | `resultTable` registered by the above `source` / `transform` can be used | String       | Yes           | -          |
-
-**Examples**
-
-```JSON
-{
-    "name": "sql",
-    "config": {
-        "resultTable": "spark_transform_table_00001",
-        "sql": "select * from ods.ods_test_table as a join spark_source_table_00001 as b on a.vin=b.vin",
-        "cache": true
-    }
-}
-```
-
-#### 5.3.3 Sink Configuration
-
-Writes a `resultTable` to a file or table. **All `resultTable`s registered before the current configuration can be used.**
-
-**Public configuration**
-
-| **field name**   | **introduction**                    | **field type**        | **required** | **default**      |
-| ------------------------- | ------------------------------------------------------------ | ------------------- | ------------ | ------------------------------------------------------------ |
-| sourceTable / sourceQuery | `resultTable` from `source` / `transform`, or a select SQL statement, whose result will be written | String              | No           | Either sourceTable or sourceQuery must be non-empty; sourceQuery takes precedence |
-| options                   | [spark official](https://spark.apache.org/docs/latest/sql-data-sources.html) | Map<String, String> | No           |                                                              |
-| variables                 | Variables to substitute when writing, for example `dt="${day}"`                                   | Map<String, String> | No           | {    "dt": "${day}",     "hour": "${hour}", }                |
-
-##### 5.3.3.1 hive
-
-**Fields**
-
-| **field name**   | **introduction**                    | **field type**        | **required** | **default**      |
-| -------------- | ------------------------------------------------------------ | ------------ | ------------ | ---------- |
-| targetDatabase | The database of the table to be written                        | String       | Yes           | -          |
-| targetTable    | The table to be written                                        | String       | Yes           | -          |
-| saveMode       | Write mode, refer to Spark, the default is `overwrite`          | String       | Yes           | `overwrite`    |
-| strongCheck    | Field names, field order, and field types must match exactly                  | Boolean      | No           | true       |
-| writeAsFile    | Write as a file to improve efficiency; all partition variables must be present in `variables` | Boolean      | No           | false      |
-| numPartitions  | Number of partitions, `Dataset.repartition`                                | Integer      | No           | 10         |
-
-**Examples**
-
-```JSON
-{
-    "name": "hive",
-    "config": {
-        "sourceTable": "spark_transform_table_00001",
-        "targetTable": "dw.result_table",
-        "saveMode": "append",
-        "options": {
-            "key": "value"
-        }
-    }
-}
-```
-
-##### 5.3.3.2 jdbc
-
-**Fields**
-
-| **field name**   | **introduction**                    | **field type**        | **required** | **default**      |
-| -------------- |--------------------------| ------------ | ------------ | ---------- |
-| url            | jdbc url               | String       | Yes           | -          |
-| driver         | driver class(fully-qualified name)               | String       | Yes           | -          |
-| user           | username                      | String       | Yes           | -          |
-| password       | password                       | String       | Yes           | -          |
-| targetDatabase | The database of the table to be written             | String       | No           | -          |
-| targetTable    | The table to be written                       | String       | Yes           | -          |
-| preQueries     | SQL executed before writing                 | String[]     | No           | -          |
-| numPartitions  | Number of partitions, `Dataset.repartition` | Integer      | No           | 10         |
-
-**Examples**
-
-```JSON
-{
-    "name": "jdbc",
-    "config": {
-        "sourceTable": "spark_transform_table_00001",
-        "database": "mysql_test_db",
-        "targetTable": "test_001",
-        "preQueries": ["delete from test_001 where dt='${exec_date}'"],
-        "options": {
-            "key": "value"
-        }
-    }
-}
-```
-
-##### 5.3.3.3 managed_jdbc
-
-Data source configured in Linkis. The data source connection information will be obtained from linkis and converted to jdbc for execution.
-
-**Fields**
-
-| **field name**   | **introduction**                    | **field type**        | **required** | **default**      |
-|------------------|--------------------------| ------------ | ------------ | ---------- |
-| targetDatasource | Data source name configured in Linkis           | String       | No           | -          |
-| targetDatabase   | The database of the table to be written            | String       | No           | -          |
-| targetTable      | The table to be written                  | String       | Yes           | -          |
-| preQueries       | SQL executed before writing              | String[]     | No           | -          |
-| numPartitions    | Number of partitions, `Dataset.repartition` | Integer      | No           | 10         |
-
-**Examples**
-
-```JSON
-{
-    "name": "jdbc",
-    "config": {
-        "targetDatasource": "spark_transform_table_00001",
-        "targetDatabase": "mysql_test_db",
-        "targetTable": "test_001",
-        "preQueries": ["delete from test_001 where dt='${exec_date}'"],
-        "options": {
-            "key": "value"
-        }
-    }
-}
-```
-
-##### 5.3.3.4 file
-
-**Fields**
-
-| **field name**   | **introduction**                        | **field type**        | **required** | **default**      |
-| ----------- | -------------------------------------------- | ------------ | ------------ | ---------- |
-| path        | File path, default is `hdfs`                 | String       | Yes           | -          |
-| serializer  | file format, default is   `parquet`          | String       | Yes           | `parquet`    |
-| partitionBy | Columns used to partition the output         | String[]     | No           |            |
-| saveMode    | Write mode, refer to Spark, the default is `overwrite` | String       | No           |   `overwrite`  |
-
-**Examples**
-
-```JSON
-{
-    "name": "file",
-    "config": {
-        "sourceTable": "spark_transform_table_00001",
-        "path": "hdfs:///data/tmp/test_json/",
-        "serializer": "json", 
-        "variables": {
-            "key": "value"
-        },
-        "options": {
-            "key": "value"
-        }
-    }
-}
-```
-
-### 5.4 Examples
-
-1. Read from jdbc, write to hive
-
-```json
-{
-    "sources": [
-        {
-            "name": "jdbc",
-            "config": {
-                "resultTable": "spark_source_0001",
-                "database": "mysql_test_db",
-                "query": "select * from test_table limit 100"
-            }
-        }
-    ],
-    "transformations": [
-    ],
-    "sinks": [
-        {
-            "name": "hive",
-            "config": {
-                "sourceTable": "spark_source_0001",
-                "targetTable": "ods.ods_test_table",
-                "saveMode": "overwrite"
-            }
-        }
-    ]
-}
-```
-
-2. Read from hive, write to hive
-
-```json
-{
-    "sources": [
-    ],
-    "transformations": [
-        {
-            "name": "sql",
-            "config": {
-                "resultTable": "spark_transform_00001",
-                "sql": "select * from ods.ods_test_table limit 100"
-            }
-        }
-    ],
-    "sinks": [
-        {
-            "name": "hive",
-            "config": {
-                "sourceTable": "spark_transform_00001",
-                "targetTable": "dw.dw_test_table",
-                "saveMode": "overwrite"
-            }
-        }
-    ]
-}
-```
-
-3. Read from file, write to hive
-
-```json
-{
-    "sources": [
-        {
-            "name": "file",
-            "config": {
-                "path": "hdfs:///data/tmp/test_csv/",
-                "resultTable": "spark_file_0001",
-                "serializer": "csv",
-                "columnNames": ["col1", "col2", "col3"],
-                "options": {
-                    "delimiter": ",",
-                    "header": "false"
-                }
-            }
-        }
-    ],
-    "transformations": [
-    ],
-    "sinks": [
-        {
-            "name": "hive",
-            "config": {
-                "sourceQuery": "select col1, col2 from spark_file_0001",
-                "targetTable": "ods.ods_test_table",
-                "saveMode": "overwrite",
-                "variables": {
-                }
-            }
-        }
-    ]
-}
-```
-
-4. Read from hive, write to jdbc
-
-```json
-{
-    "sources": [
-    ],
-    "transformations": [
-    ],
-    "sinks": [
-        {
-            "name": "jdbc",
-            "config": {
-                "sourceQuery": "select * from dm.dm_result_table where dt=${exec_date}",
-                "database": "mysql_test_db",
-                "targetTable": "mysql_test_table",
-                "preQueries": ["delete from mysql_test_table where dt='${exec_date}'"],
-                "options": {
-                    "key": "value"
-                }
-            }
-        }
-    ]
-}
 ```
\ No newline at end of file
diff --git a/docs/engine-usage/sqoop.md b/docs/engine-usage/sqoop.md
index 02bed8e020..32d243de6f 100644
--- a/docs/engine-usage/sqoop.md
+++ b/docs/engine-usage/sqoop.md
@@ -74,7 +74,7 @@ The directory structure after uploading is as follows
 linkis-engineconn-plugins/
 ├── sqoop
 │ ├── dist
-│ │ └── v1.4.6
+│ │ └── 1.4.6
 │ │ ├── conf
 │ │ └── lib
 │ └── plugin
diff --git a/docs/engine-usage/trino.md b/docs/engine-usage/trino.md
index 0c4c7dfff9..fef924e032 100644
--- a/docs/engine-usage/trino.md
+++ b/docs/engine-usage/trino.md
@@ -69,7 +69,7 @@ The directory structure after uploading is as follows
 linkis-engineconn-plugins/
 ├── trino
 │   ├── dist
-│ │ └── v371
+│ │ └── 371
 │   │       ├── conf
 │ │ └── lib
 │   └── plugin
diff --git a/docs/feature/overview.md b/docs/feature/overview.md
index 3b0a5fe820..535dfebc35 100644
--- a/docs/feature/overview.md
+++ b/docs/feature/overview.md
@@ -1,5 +1,5 @@
 --- 
-title: Version Overview 
+title: Version Feature 
 sidebar_position: 0.1 
 --- 
 
diff --git a/download/release-notes-1.3.2.md b/download/release-notes-1.3.2.md
index c9dfc3c8b6..e49f17b878 100644
--- a/download/release-notes-1.3.2.md
+++ b/download/release-notes-1.3.2.md
@@ -9,9 +9,11 @@ Linkis 1.3.2 mainly enhanced Spark engine and added the function of ETL through
 
 The main functions are as follows:
 
-- Added Spark's ETL function using json. Support for reading and writing data from JDBC data sources configured in Linkis (including MySQL, PostgreSQL, SqlServer, Oracle, DB2, TiDB, ClickHouse, Doris)
 - Added the function for Spark to submit Jar packages
 - Allows the UDF to be loaded using the specified UDF ID configured in the background
+- Support for multi-task fixed EC execution
+- Support Eureka for reporting version metadata
+- Linkis integrates the OceanBase database
 
 Abbreviations:
 - ORCHESTRATOR: Linkis Orchestrator
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs-download/current/release-notes-1.3.2.md b/i18n/zh-CN/docusaurus-plugin-content-docs-download/current/release-notes-1.3.2.md
index e5095bcef8..45c9a8498b 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs-download/current/release-notes-1.3.2.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs-download/current/release-notes-1.3.2.md
@@ -9,9 +9,11 @@ Linkis 1.3.2 版本,主要对 Spark 引擎进行了增强,添加了通过 js
 
 主要功能如下:
 
-- 新增 Spark 通过 json 进行 ETL 的功能,支持通过 Linkis 中配置的 JDBC 数据源读写数据(包括 MySQL、PostgreSQL、SqlServer、Oracle、DB2、TiDB、ClickHouse、Doris)
 - 新增 Spark 提交 Jar 包的功能
 - 允许通过指定后台配置的 UDF ID 加载对应的 UDF
+- 支持多任务固定 EC 执行
+- 支持 Eureka 上报版本元数据
+- Linkis 整合 OceanBase 数据库
 
 缩写:
 - ORCHESTRATOR: Linkis Orchestrator
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/deployment/deploy-quick.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/deployment/deploy-quick.md
index fae35c6a20..38fc13247c 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/deployment/deploy-quick.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/deployment/deploy-quick.md
@@ -209,6 +209,64 @@ HADOOP_KERBEROS_ENABLE=true
 HADOOP_KEYTAB_PATH=/appcom/keytab/
 ```
 
+### 2.4 配置 Token
+
+Linkis 原有默认 Token 固定且长度太短存在安全隐患。因此 Linkis 1.3.2 将原有固定 Token 改为随机生成,并增加 Token 长度。
+
+新 Token 格式:应用简称-32 位随机数,如BML-928a721518014ba4a28735ec2a0da799。
+
+Token 可能在 Linkis 服务自身使用,如通过 Shell 方式执行任务、BML 上传等,也可能在其它应用中使用,如 DSS、Qualitis 等应用访问 Linkis。
+
+#### 查看 Token
+**通过 SQL 语句查看**
+```sql
+select * from linkis_mg_gateway_auth_token;
+```
+**通过管理台查看**
+
+登录管理台 -> 基础数据管理 -> 令牌管理 
+![](/Images-zh/deployment/token-list.png)
+
+#### 检查 Token 配置
+
+Linkis 服务本身使用 Token 时,配置文件中 Token 需与数据库中 Token 一致。通过应用简称前缀匹配。
+
+$LINKIS_HOME/conf/linkis.properties文件 Token 配置
+
+```
+linkis.configuration.linkisclient.auth.token.value=BML-928a721518014ba4a28735ec2a0da799
+wds.linkis.client.common.tokenValue=BML-928a721518014ba4a28735ec2a0da799
+wds.linkis.bml.auth.token.value=BML-928a721518014ba4a28735ec2a0da799
+wds.linkis.context.client.auth.value=BML-928a721518014ba4a28735ec2a0da799
+wds.linkis.errorcode.auth.token=BML-928a721518014ba4a28735ec2a0da799
+
+wds.linkis.client.test.common.tokenValue=LINKIS_CLI-215af9e265ae437ca1f070b17d6a540d
+
+wds.linkis.filesystem.token.value=WS-52bce72ed51741c7a2a9544812b45725
+wds.linkis.gateway.access.token=WS-52bce72ed51741c7a2a9544812b45725
+
+wds.linkis.server.dsm.auth.token.value=DSM-65169e8e1b564c0d8a04ee861ca7df6e
+```
+
+$LINKIS_HOME/conf/linkis-cli/linkis-cli.properties文件 Token 配置
+```
+wds.linkis.client.common.tokenValue=BML-928a721518014ba4a28735ec2a0da799
+```
+
+#### 注意事项
+
+**全量安装**
+
+对于全量安装新版本 Linkis 时, install.sh 脚本中会自动处理配置文件和数据库 Token 保持一致。因此 Linkis 服务自身 Token 无需修改。各应用可通过管理台查询并使用新 Token。
+
+**版本升级**
+
+版本升级时,数据库 Token 并未修改,因此无需修改配置文件和应用 Token。
+
+**Token 过期问题**
+
+当遇到 Token 令牌无效或已过期问题时可以检查 Token 是否配置正确,可通过管理台查询 Token。
+
 ## 3. 安装和启动
 
 ### 3.1 执行安装脚本:
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/elasticsearch.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/elasticsearch.md
index 43e1fcead7..3a13163256 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/elasticsearch.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/elasticsearch.md
@@ -66,7 +66,7 @@ ${LINKIS_HOME}/lib/linkis-engineplugins
 linkis-engineconn-plugins/
 ├── elasticsearch
 │   ├── dist
-│   │   └── v7.6.2
+│   │   └── 7.6.2
 │   │       ├── conf
 │   │       └── lib
 │   └── plugin
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/flink.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/flink.md
index 2c1b85d3d4..30059bf68d 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/flink.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/flink.md
@@ -62,7 +62,7 @@ ${LINKIS_HOME}/lib/linkis-engineplugins
 linkis-engineconn-plugins/
 ├── flink
 │   ├── dist
-│   │   └── v1.12.2
+│   │   └── 1.12.2
 │   │       ├── conf
 │   │       └── lib
 │   └── plugin
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/jdbc.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/jdbc.md
index 1301ba1582..ea4fcb90bb 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/jdbc.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/jdbc.md
@@ -63,7 +63,7 @@ ${LINKIS_HOME}/lib/linkis-engineplugins
 linkis-engineconn-plugins/
 ├── jdbc
 │   ├── dist
-│   │   └── v4
+│   │   └── 4
 │   │       ├── conf
 │   │       └── lib
 │   └── plugin
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/openlookeng.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/openlookeng.md
index 214c8a73dc..8ab7a50fb4 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/openlookeng.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/openlookeng.md
@@ -69,7 +69,7 @@ ${LINKIS_HOME}/lib/linkis-engineplugins
 linkis-engineconn-plugins/
 ├── openlookeng
 │   ├── dist
-│   │   └── v1.5.0
+│   │   └── 1.5.0
 │   │       ├── conf
 │   │       └── lib
 │   └── plugin
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/pipeline.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/pipeline.md
index 9a1b8f853b..09237e0bc2 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/pipeline.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/pipeline.md
@@ -34,7 +34,7 @@ ${LINKIS_HOME}/lib/linkis-engineplugins
 linkis-engineconn-plugins/
 ├── pipeline
 │   ├── dist
-│   │   └── v1
+│   │   └── 1
 │   │       ├── conf
 │   │       └── lib
 │   └── plugin
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/presto.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/presto.md
index 16e9fddc6f..9e51f83951 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/presto.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/presto.md
@@ -69,7 +69,7 @@ ${LINKIS_HOME}/lib/linkis-engineplugins
 linkis-engineconn-plugins/
 ├── presto
 │   ├── dist
-│   │   └── v0.234
+│   │   └── 0.234
 │   │       ├── conf
 │   │       └── lib
 │   └── plugin
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/seatunnel.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/seatunnel.md
index a068751df7..15322ae414 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/seatunnel.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/seatunnel.md
@@ -68,7 +68,7 @@ ${LINKIS_HOME}/lib/linkis-engineplugins
 linkis-engineconn-plugins/
 ├── seatunnel
 │   ├── dist
-│   │   └── v2.1.2
+│   │   └── 2.1.2
 │   │       ├── conf
 │   │       └── lib
 │   └── plugin
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/spark.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/spark.md
index 3bda5e6752..c302459715 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/spark.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/spark.md
@@ -269,545 +269,4 @@ insert into `linkis_ps_configuration_config_value` (`config_key_id`, `config_val
 (select `relation`.`config_key_id` AS `config_key_id`, '' AS `config_value`, `relation`.`engine_type_label_id` AS `config_label_id` FROM linkis_ps_configuration_key_engine_relation relation
 INNER JOIN linkis_cg_manager_label label ON relation.engine_type_label_id = label.id AND label.label_value = @SPARK_ALL);
 
-```
-
-## 5. data_calc(数据计算,data calculate)功能说明
-
-spark 解析 json 配置进行 ETL 操作的功能,整体设计如下图
-
-![data_calc](/Images-zh/EngineUsage/data_calc.svg)
-
-data_calc json 配置文件内容,有两种配置模式:数组模式和分组模式,默认为数组模式
-
-插件类型包括 source,transformation,sink
-
-**data_calc 调用样例**
-
-```
-POST http://localhost:8087/api/rest_j/v1/entrance/submit
-Content-Type: application/json
-Token-Code: dss-AUTH
-Token-User: linkis
-
-{
-  "executionContent": {
-    // code 为转译之后的 json,具体配置说明详见下文
-    "code": "{\"plugins\":[{\"type\":\"source\",\"name\":\"jdbc\",\"config\":{\"resultTable\":\"spark_source_01\",\"url\":\"jdbc:mysql://localhost:3306/\",\"driver\":\"com.mysql.jdbc.Driver\",\"user\":\"xi_root\",\"password\":\"123456\",\"query\":\"select * from linkis.linkis_cg_manager_label\",\"options\":{}}},{\"type\":\"transformation\",\"name\":\"sql\",\"config\":{\"resultTable\":\"spark_transform_01\",\"sql\":\"select * from spark_source_01 limit 100\"}},{\"type\":\"sink\",\"name\": [...]
-    "runType": "data_calc"
-  },
-  "params": {
-    "variable": {},
-    "configuration": {
-      // 启动参数设置
-      "startup": {
-        "spark.executor.memory": "1g",
-        "spark.driver.memory": "1g",
-        "spark.executor.cores": "1",
-        "spark.executor.instances": 1
-      }
-    }
-  },
-  "labels": {
-    "engineType": "spark-2.4.3",
-    "userCreator": "linkis-IDE"
-  }
-}
-```
-
-### 5.1 数组模式
-
-插件有三个字段,name 为插件名,type 为插件类型,config 为具体配置
-
-```json
-{
-    "plugins": [
-        {
-            "type": "source",
-            "name": "jdbc",
-            "config": {
-                "resultTable": "spark_source_01",
-                "url": "jdbc:mysql://localhost:3306/",
-                "driver": "com.mysql.jdbc.Driver",
-                "user": "xi_root",
-                "password": "123456",
-                "query": "select * from linkis.linkis_cg_manager_label",
-                "options": {
-                }
-            }
-        },
-        {
-            "type": "transformation",
-            "name": "sql",
-            "config": {
-                "resultTable": "spark_transform_01",
-                "sql": "select * from spark_source_01 limit 100"
-            }
-        },
-        {
-            "type": "sink",
-            "name": "file",
-            "config": {
-                "sourceTable": "spark_transform_01",
-                "path": "hdfs:///tmp/data/testjson",
-                "serializer": "json",
-                "partitionBy": [
-                    "label_key"
-                ],
-                "saveMode": "overwrite"
-            }
-        }
-    ]
-}
-
-```
-
-### 5.2 分组模式
-
-插件有两个字段,name 为插件名,config 为具体配置
-
-配置分为**3部分**:
-
-1. sources:配置数据源,对应source类型插件
-2. transformations:配置具体操作,对应transform类型插件
-3. sinks:配置输出操作,对应sink类型插件
-
-```json
-{
-    "sources": [
-        {
-            "name": "jdbc",
-            "config": {
-                "resultTable": "spark_source_table_00001",
-                "url": "jdbc:mysql://localhost:3306/",
-                "driver": "com.mysql.jdbc.Driver",
-                "user": "test_db_rw",
-                "password": "123456",
-                "query": "select * from test_db.test_table",
-                "options": {
-                }
-            }
-        },
-        {
-           "name": "file",
-           "config": {
-              "resultTable": "spark_source_table_00002",
-              "path": "hdfs:///data/tmp/testfile.csv",
-              "serializer": "csv",
-              "columnNames": ["type", "name"],
-              "options": {
-              }
-           }
-        }
-    ],
-    "transformations": [
-        {
-            "name": "sql",
-            "config": {
-                "resultTable": "spark_transform_00001",
-                "sql": "select * from spark_source_table_00001 t1 join spark_source_table_00002 t2 on t1.type=t2.type where t1.id > 100 limit 100"
-            }
-        }
-    ],
-    "sinks": [
-        {
-            "name": "file",
-            "config": {
-                "sourceTable": "spark_transform_00001",
-                "path": "hdfs:///tmp/data/test_json",
-                "serializer": "json",
-                "partitionBy": [
-                    "label_key"
-                ],
-                "saveMode": "overwrite"
-            }
-        }
-    ]
-}
-```
-
-### 5.3 插件类型说明
-
-#### 5.3.1 Source 插件配置
-
-对应数据读取操作,可以从文件,jdbc读取文件,**仅能使用当前源的表,不能使用当前配置之前注册的临时表**
-
-**公共配置**
-
-| **字段名**   | **说明**                                                     | **字段类型**        | **是否必须** | **默认值**      |
-| ------------ | ------------------------------------------------------------ | ------------------- | ------------ | --------------- |
-| resultTable  | 注册表名,供 transform / sink 使用                           | String              | 是           | -               |
-| persist      | 是否缓存                                                     | Boolean             | 否           | false           |
-| storageLevel | 缓存级别                                                     | String              | 否           | MEMORY_AND_DISK |
-| options      | 参考 [spark 官方文档](https://spark.apache.org/docs/latest/sql-data-sources.html) | Map<String, String> | 否           |                 |
-
-##### 5.3.1.1 file
-
-**配置字段**
-
-| **字段名**  | **说明**                 | **字段类型** | **是否必须** | **默认值** |
-| ----------- | ------------------------ | ------------ | ------------ | ---------- |
-| path        | 文件路径,默认为 hdfs    | String       | 是           | -          |
-| serializer  | 文件格式,默认为 parquet | String       | 是           | parquet    |
-| columnNames | 映射的字段名             | String[]     | 否           | -          |
-
-**样例**
-
-```JSON
-{
-    "name": "file",
-    "config": {
-        "resultTable": "spark_source_table_00001",
-        "path": "hdfs:///data/tmp/test_csv_/xxx.csv",
-        "serializer": "csv",
-        "columnNames": ["id", "name"],
-        "options": {
-            "key": "value"
-        }
-    }
-}
-```
-
-##### 5.3.1.2 jdbc
-
-**配置字段**
-
-| **字段名** | **说明**                                             | **字段类型** | **是否必须** | **默认值** |
-| ---------- | ---------------------------------------------------- | ------------ | ------------ | ---------- |
-| url        | jdbc url                                 | String       | 是           | -          |
-| driver     | 驱动类(完全限定名)                                 | String       | 是           | -          |
-| user       | 用户名                                               | String       | 是           | -          |
-| password   | 密码                                                 | String       | 是           | -          |
-| query      | 查询语句,查询中使用的函数必须符合选中的数据库的规范 | String       | 是           | -          |
-
-**样例**
-
-```JSON
-{
-    "name": "jdbc",
-    "config": {
-        "resultTable": "spark_source_table_00001",
-        "url": "jdbc:mysql://localhost:3306/",
-        "driver": "com.mysql.jdbc.Driver",
-        "user": "local_root",
-        "password": "123456",
-        "query": "select a.xxx, b.xxxx from test_table where id > 100",
-        "options": {
-            "key": "value"
-        }
-    }
-}
-```
-
-##### 5.3.1.3 managed_jdbc
-
-linkis 中配置的 jdbc 数据源,会从 linkis 中获取数据源连接信息,并转为 jdbc 执行
-
-**配置字段**
-
-| **字段名** | **说明**                     | **字段类型** | **是否必须** | **默认值** |
-| ---------- |----------------------------| ------------ | ------------ | ---------- |
-| datasource | Linkis 中配置的数据源名称                       | String       | 是           | -          |
-| query      | 查询语句,查询中使用的函数必须符合选中的数据库的规范 | String       | 是           | -          |
-
-**样例**
-
-```JSON
-{
-    "name": "jdbc",
-    "config": {
-        "datasource": "mysql_test_db",
-        "query": "select a.xxx, b.xxxx from table where id > 100",
-        "options": {
-            "key": "value"
-        }
-    }
-}
-```
-
-#### 5.3.2 Transform 插件配置
-
-数据加工相关逻辑,**可以使用当前配置之前注册的所有临时表**
-
-**公共配置**
-
-| **字段名**   | **说明**                                       | **字段类型** | **是否必须** | **默认值**      |
-| ------------ | ---------------------------------------------- | ------------ | ------------ | --------------- |
-| sourceTable  | 来源表名,使用 source / transform 注册的结果表 | String       | 否           | -               |
-| resultTable  | 注册表名,供 transform / sink 使用             | String       | 是           | -               |
-| persist      | 是否缓存                                       | Boolean      | 否           | false           |
-| storageLevel | 缓存级别                                       | String       | 否           | MEMORY_AND_DISK |
-
-##### 5.3.2.1 sql
-
-**配置字段**
-
-| **字段名** | **说明**                                                     | **字段类型** | **是否必须** | **默认值** |
-| ---------- | ------------------------------------------------------------ | ------------ | ------------ | ---------- |
-| sql        | 查询的 sql,可以使用前面 sources 和 transformations 中的注册表名 | String       | 是           | -          |
-
-**样例**
-
-```JSON
-{
-    "name": "sql",
-    "config": {
-        "resultTable": "spark_transform_table_00001",
-        "sql": "select * from ods.ods_test_table as a join spark_source_table_00001 as b on a.vin=b.vin",
-        "cache": true
-    }
-}
-```
-
-#### 5.3.3 Sink 插件配置
-
-可以把结果写入到文件或者表,**可以使用当前配置之前注册的所有临时表**
-
-**公共变量**
-
-| **字段名**                | **说明**                                                     | **字段类型**        | **是否必须** | **默认值**                                                   |
-| ------------------------- | ------------------------------------------------------------ | ------------------- | ------------ | ------------------------------------------------------------ |
-| sourceTable / sourceQuery | source / transform 中的结果表名或者查询的sql语句 作为结果输出         | String              | 否           | sourceTable 和 sourceQuery 必须有一个不为空 sourceQuery 优先级更高 |
-| options                   | 参考 [spark 官方文档](https://spark.apache.org/docs/latest/sql-data-sources.html) | Map<String, String> | 否           |                                                              |
-| variables                 | 变量替换,类似 `dt="${day}"`                                   | Map<String, String> | 否           | {    "dt": "${day}",     "hour": "${hour}", }                |
-
-##### 5.3.3.1 hive
-
-**配置字段**
-
-| **字段名**     | **说明**                                                     | **字段类型** | **是否必须** | **默认值** |
-| -------------- | ------------------------------------------------------------ | ------------ | ------------ | ---------- |
-| targetDatabase | 待写入数据的表所在的数据库                                   | String       | 是           | -          |
-| targetTable    | 待写入数据的表                                               | String       | 是           | -          |
-| saveMode       | 写入模式,参考 spark,默认为 `overwrite`                       | String       | 是           | overwrite    |
-| strongCheck    | 强校验,字段名,字段顺序,字段类型必须一致                   | Boolean      | 否           | true       |
-| writeAsFile    | 按文件方式写入,可以提高效率,此时 `variables` 中必须包含所有的分区变量 | Boolean      | 否           | false      |
-| numPartitions  | 分区个数,`Dataset.repartition`                                | Integer      | 否           | 10         |
-
-**样例**
-
-```JSON
-{
-    "name": "hive",
-    "config": {
-        "sourceTable": "spark_transform_table_00001",
-        "targetTable": "dw.result_table",
-        "saveMode": "append",
-        "options": {
-            "key": "value"
-        }
-    }
-}
-```
-
-##### 5.3.3.2 jdbc
-
-**配置字段**
-
-| **字段名**     | **说明**                   | **字段类型** | **是否必须** | **默认值** |
-| -------------- |--------------------------| ------------ | ------------ | ---------- |
-| url            | jdbc url               | String       | 是           | -          |
-| driver         | 驱动类(完全限定名)               | String       | 是           | -          |
-| user           | 用户名                      | String       | 是           | -          |
-| password       | 密码                       | String       | 是           | -          |
-| targetDatabase | 待写入数据的表所在的数据库            | String       | 否           | -          |
-| targetTable    | 待写入数据的表                  | String       | 是           | -          |
-| preQueries     | 写入前执行的sql语句              | String[]     | 否           | -          |
-| numPartitions  | 分区个数,`Dataset.repartition` | Integer      | 否           | 10         |
-
-**样例**
-
-```JSON
-{
-    "name": "jdbc",
-    "config": {
-        "sourceTable": "spark_transform_table_00001",
-        "database": "mysql_test_db",
-        "targetTable": "test_001",
-        "preQueries": ["delete from test_001 where dt='${exec_date}'"],
-        "options": {
-            "key": "value"
-        }
-    }
-}
-```
-
-##### 5.3.3.3 managed_jdbc
-
-linkis 中配置的 jdbc 数据源,会从 linkis 中获取数据源连接信息,并转为 jdbc 执行
-
-**配置字段**
-
-| **字段名**          | **说明**                   | **字段类型** | **是否必须** | **默认值** |
-|------------------|--------------------------| ------------ | ------------ | ---------- |
-| targetDatasource | Linkis 中配置的数据源名称           | String       | 否           | -          |
-| targetDatabase   | 待写入数据的表所在的数据库            | String       | 否           | -          |
-| targetTable      | 待写入数据的表                  | String       | 是           | -          |
-| preQueries       | 写入前执行的sql语句              | String[]     | 否           | -          |
-| numPartitions    | 分区个数,`Dataset.repartition` | Integer      | 否           | 10         |
-
-**样例**
-
-```JSON
-{
-    "name": "jdbc",
-    "config": {
-        "targetDatasource": "spark_transform_table_00001",
-        "targetDatabase": "mysql_test_db",
-        "targetTable": "test_001",
-        "preQueries": ["delete from test_001 where dt='${exec_date}'"],
-        "options": {
-            "key": "value"
-        }
-    }
-}
-```
-
-##### 5.3.3.4 file
-
-**配置字段**
-
-| **字段名**  | **说明**                               | **字段类型** | **是否必须** | **默认值** |
-| ----------- | -------------------------------------- | ------------ | ------------ | ---------- |
-| path        | 文件路径,默认为 `hdfs`                  | String       | 是           | -          |
-| serializer  | 文件格式,默认为 `parquet`               | String       | 是           | `parquet`    |
-| partitionBy |                                        | String[]     | 否           |            |
-| saveMode    | 写入模式,参考 spark,默认为 `overwrite` | String       | 否           |   `overwrite`         |
-
-**样例**
-
-```JSON
-{
-    "name": "file",
-    "config": {
-        "sourceTable": "spark_transform_table_00001",
-        "path": "hdfs:///data/tmp/test_json/",
-        "serializer": "json", 
-        "variables": {
-            "key": "value"
-        },
-        "options": {
-            "key": "value"
-        }
-    }
-}
-```
-
-### 5.4 使用样例
-
-1. jdbc读取,写入hive表
-
-```json
-{
-    "sources": [
-        {
-            "name": "jdbc",
-            "config": {
-                "resultTable": "spark_source_0001",
-                "database": "mysql_test_db",
-                "query": "select * from test_table limit 100"
-            }
-        }
-    ],
-    "transformations": [
-    ],
-    "sinks": [
-        {
-            "name": "hive",
-            "config": {
-                "sourceTable": "spark_source_0001",
-                "targetTable": "ods.ods_test_table",
-                "saveMode": "overwrite"
-            }
-        }
-    ]
-}
-```
-
-2. hive读取,写入hive表
-
-```json
-{
-    "sources": [
-    ],
-    "transformations": [
-        {
-            "name": "sql",
-            "config": {
-                "resultTable": "spark_transform_00001",
-                "sql": "select * from ods.ods_test_table limit 100"
-            }
-        }
-    ],
-    "sinks": [
-        {
-            "name": "hive",
-            "config": {
-                "sourceTable": "spark_transform_00001",
-                "targetTable": "dw.dw_test_table",
-                "saveMode": "overwrite"
-            }
-        }
-    ]
-}
-```
-
-3. 文件读取,写入hive表
-
-```json
-{
-    "sources": [
-        {
-            "name": "file",
-            "config": {
-                "path": "hdfs:///data/tmp/test_csv/",
-                "resultTable": "spark_file_0001",
-                "serializer": "csv",
-                "columnNames": ["col1", "col2", "col3"],
-                "options": {
-                    "delimiter": ",",
-                    "header": "false"
-                }
-            }
-        }
-    ],
-    "transformations": [
-    ],
-    "sinks": [
-        {
-            "name": "hive",
-            "config": {
-                "sourceQuery": "select col1, col2 from spark_file_0001",
-                "targetTable": "ods.ods_test_table",
-                "saveMode": "overwrite",
-                "variables": {
-                }
-            }
-        }
-    ]
-}
-```
-
-4. Hive 读取,写入jdbc
-
-```json
-{
-    "sources": [
-    ],
-    "transformations": [
-    ],
-    "sinks": [
-        {
-            "name": "jdbc",
-            "config": {
-                "sourceQuery": "select * from dm.dm_result_table where dt=${exec_date}",
-                "database": "mysql_test_db",
-                "targetTable": "mysql_test_table",
-                "preQueries": ["delete from mysql_test_table where dt='${exec_date}'"],
-                "options": {
-                    "key": "value"
-                }
-            }
-        }
-    ]
-}
-```
+```
\ No newline at end of file
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/sqoop.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/sqoop.md
index 7fbe82e59d..9cbc69f531 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/sqoop.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/sqoop.md
@@ -74,7 +74,7 @@ ${LINKIS_HOME}/lib/linkis-engineplugins
 linkis-engineconn-plugins/
 ├── sqoop
 │   ├── dist
-│   │   └── v1.4.6
+│   │   └── 1.4.6
 │   │       ├── conf
 │   │       └── lib
 │   └── plugin
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/trino.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/trino.md
index 4d2f3e799d..37a9036b51 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/trino.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/trino.md
@@ -69,7 +69,7 @@ ${LINKIS_HOME}/lib/linkis-engineplugins
 linkis-engineconn-plugins/
 ├── trino
 │   ├── dist
-│   │   └── v371
+│   │   └── 371
 │   │       ├── conf
 │   │       └── lib
 │   └── plugin
diff --git a/static/Images-zh/deployment/token-list.png b/static/Images-zh/deployment/token-list.png
new file mode 100644
index 0000000000..ba283e97db
Binary files /dev/null and b/static/Images-zh/deployment/token-list.png differ
diff --git a/static/Images/deployment/token-list.png b/static/Images/deployment/token-list.png
new file mode 100644
index 0000000000..4efd64b411
Binary files /dev/null and b/static/Images/deployment/token-list.png differ


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@linkis.apache.org
For additional commands, e-mail: commits-help@linkis.apache.org