Posted to commits@linkis.apache.org by pe...@apache.org on 2023/02/21 11:49:29 UTC

[linkis-website] branch dev updated: [feat-4238] update features and support engines (#676)

This is an automated email from the ASF dual-hosted git repository.

peacewong pushed a commit to branch dev
in repository https://gitbox.apache.org/repos/asf/linkis-website.git


The following commit(s) were added to refs/heads/dev by this push:
     new f565bd8109 [feat-4238] update features and support engines (#676)
f565bd8109 is described below

commit f565bd81099dca5f68b94d804bef6ade8fa2c5ef
Author: aiceflower <ki...@gmail.com>
AuthorDate: Tue Feb 21 19:49:23 2023 +0800

    [feat-4238] update features and support engines (#676)
    
    * update features and support engines
---
 ...2022-04-15-how-to-download-engineconn-plugin.md |  4 +-
 docs/about/introduction.md                         | 56 ++++++++++++----------
 docs/deployment/deploy-quick.md                    |  4 +-
 docs/user-guide/sdk-manual.md                      | 13 +++++
 ...2022-04-15-how-to-download-engineconn-plugin.md |  4 +-
 .../current/about/introduction.md                  | 26 ++++------
 .../current/deployment/deploy-quick.md             |  2 +-
 .../current/user-guide/sdk-manual.md               | 13 +++++
 .../version-1.3.1/about/introduction.md            | 25 ++++------
 .../version-1.3.1/deployment/deploy-quick.md       |  2 +-
 .../version-1.3.1/user-guide/sdk-manual.md         | 12 +++++
 versioned_docs/version-1.3.1/about/introduction.md | 54 +++++++++++----------
 .../version-1.3.1/deployment/deploy-quick.md       |  4 +-
 .../version-1.3.1/user-guide/sdk-manual.md         | 12 +++++
 14 files changed, 138 insertions(+), 93 deletions(-)

diff --git a/blog/2022-04-15-how-to-download-engineconn-plugin.md b/blog/2022-04-15-how-to-download-engineconn-plugin.md
index bccbc16acc..9610d8d4d1 100644
--- a/blog/2022-04-15-how-to-download-engineconn-plugin.md
+++ b/blog/2022-04-15-how-to-download-engineconn-plugin.md
@@ -36,8 +36,8 @@ In order to facilitate everyone's use, based on the release branch code of each
 |Sqoop| Sqoop >= 1.4.6, <br/>(default Apache Sqoop 1.4.6)|\>=1.1.2|No|Sqoop EngineConn, supports the data migration tool Sqoop engine|
 |Presto|Presto >= 0.180|\>=1.2.0|No|Presto EngineConn, supports Presto SQL code|
 |ElasticSearch|ElasticSearch >=6.0|\>=1.2.0|No|ElasticSearch EngineConn, supports SQL and DSL code|
-|Trino | 371 | >=1.3.1 | 否 |   Trino EngineConn, 支持Trino SQL 代码 |
-|Seatunnel | 2.1.2 | >=1.3.1 | 否 | Seatunnel EngineConn, 支持Seatunnel SQL 代码 |
+|Trino | Trino >=371 | >=1.3.1 | No | Trino EngineConn, supports Trino SQL code |
+|Seatunnel | Seatunnel >=2.1.2 | >=1.3.1 | No | Seatunnel EngineConn, supports Seatunnel SQL code |
 
 ## Install engine guide
 
diff --git a/docs/about/introduction.md b/docs/about/introduction.md
index 5fab7cc9fb..3b0bc1343a 100644
--- a/docs/about/introduction.md
+++ b/docs/about/introduction.md
@@ -16,36 +16,40 @@ Since the first release of Linkis in 2019, it has accumulated more than **700**
 
 ## Features
 
-- **Support for diverse underlying computation storage engines**:  
-    Currently supported computation/storage engines: Spark, Hive, Python, Presto, ElasticSearch, MLSQL, TiSpark,Trino, SeaTunnel, JAVA , Shell, etc;      
-    Computation/storage engines to be supported: Flink(Supported in version >=1.0.2), Impala, etc;      
-    Supported scripting languages: SparkSQL, HiveQL, Python, Shell, Pyspark, R, Scala and JDBC, etc.  
-- **Powerful task/request governance capabilities**: With services such as Orchestrator, Label Manager and customized Spring Cloud Gateway, Linkis is able to provide multi-level labels based, cross-cluster/cross-IDC fine-grained routing, load balance, multi-tenancy, traffic control, resource control, and orchestration strategies like dual-active, active-standby, etc.  
-- **Support full stack computation/storage engine**: As a computation middleware, it will receive, execute and manage tasks and requests for various computation storage engines, including batch tasks, interactive query tasks, real-time streaming tasks and storage tasks;
-- **Resource management capabilities**:  ResourceManager is not only capable of managing resources for Yarn and Linkis EngineManger as in Linkis 0.X, but also able to provide label-based multi-level resource allocation and recycling, allowing itself to have powerful resource management capabilities across multiple Yarn clusters and multiple computation resource types.
-- **Unified Context Service**: Generate Context ID for each **task**/request,  associate and manage user and system resource files (JAR, ZIP, Properties, etc.), result set, parameter variable, function, etc., across user, system, and computing engine. Set in one place, automatic reference everywhere.
-- **Unified materials**: System and user-level unified material management, which can be shared and transferred across users and systems.
-- **Unified Data Source Manage**: Provides functions such as adding, deleting, checking, and modifying data sources of hive, es, mysql, and kafka types, version control, and connection testing.
-- **Unified MetaData Query**: Provides database, table, and partition queries for hive, es, mysql, and kafka metadata.
+- **Support for diverse underlying computation storage engines**: Spark, Hive, Python, Shell, Flink, JDBC, Pipeline, Sqoop, OpenLooKeng, Presto, ElasticSearch, Trino, SeaTunnel, etc.;
+
+- **Support for diverse languages**: SparkSQL, HiveSQL, Python, Shell, Pyspark, Scala, JSON and Java;
+
+- **Powerful computing governance capability**: provides task routing, load balancing, multi-tenancy, traffic control, resource control and other capabilities based on multi-level labels;
+
+- **Support for full-stack computation/storage engines**: receives, executes and manages tasks and requests for various compute and storage engines, including offline batch tasks, interactive query tasks, real-time streaming tasks and data lake tasks;
+
+- **Unified context service**: associates and manages user and system resource files (JAR, ZIP, Properties, etc.), result sets, parameter variables, functions, UDFs, etc. across users, systems and computing engines; set once, automatically referenced everywhere;
+
+- **Unified materials**: provides system- and user-level material management; materials can be shared and circulated across users and systems;
+
+- **Unified data source management**: provides create/read/update/delete operations, version control and connection testing for Hive, ElasticSearch, MySQL, Kafka, MongoDB and other data sources, plus metadata queries for the corresponding data sources;
+
+- **Error code capability**: provides error codes and solutions for common task errors, making it easy for users to locate problems on their own;
+
 
 ## Supported engine types
 
-| **Engine** | **Supported Version** | **Linkis Version Requirements**| **Included in Release Package By Default** | **Description** |
+| **Engine name** | **Supported underlying component versions<br/>(default dependency version)** | **Linkis Version Requirements** | **Included in Release Package By Default** | **Description** |
 |:---- |:---- |:---- |:---- |:---- |
-|Flink |1.12.2|\>=dev-0.12.0, PR #703 not merged yet.|>=1.0.2|	Flink EngineConn. Supports FlinkSQL code, and also supports Flink Jar to Linkis Manager to start a new Yarn application.|
-|Impala|\>=3.2.0, CDH >=6.3.0"|\>=dev-0.12.0, PR #703 not merged yet.|ongoing|Impala EngineConn. Supports Impala SQL.|
-|Presto|\>= 0.180|\>=0.11.0|ongoing|Presto EngineConn. Supports Presto SQL.|
-|ElasticSearch|\>=6.0|\>=0.11.0|ongoing|ElasticSearch EngineConn. Supports SQL and DSL code.|
-|Shell|Bash >=2.0|\>=0.9.3|\>=1.0.0_rc1|Shell EngineConn. Supports shell code.|
-|MLSQL|\>=1.1.0|\>=0.9.1|ongoing|MLSQL EngineConn. Supports MLSQL code.|
-|JDBC|MySQL >=5.0, Hive >=1.2.1|\>=0.9.0|\>=1.0.0_rc1|JDBC EngineConn. Supports MySQL and HiveQL code.|
-|Spark|Apache 2.0.0~2.4.7, CDH >=5.4.0|\>=0.5.0|\>=1.0.0_rc1|Spark EngineConn. Supports SQL, Scala, Pyspark and R code.|
-|Hive|Apache >=1.0.0, CDH >=5.4.0|\>=0.5.0|\>=1.0.0_rc1|Hive EngineConn. Supports HiveQL code.|
-|Hadoop|Apache >=2.6.0, CDH >=5.4.0|\>=0.5.0|ongoing|Hadoop EngineConn. Supports Hadoop MR/YARN application.|
-|Python|\>=2.6|\>=0.5.0|\>=1.0.0_rc1|Python EngineConn. Supports python code.|
-|TiSpark|1.1|\>=0.5.0|ongoing|TiSpark EngineConn. Support querying TiDB data by SparkSQL.|
-|Trino | 371 | >=1.3.1 | 否 |   Trino EngineConn, Support Trino SQL code |
-|Seatunnel | 2.1.2 | >=1.3.1 | 否 | Seatunnel EngineConn, Support Seatunnel SQL code |
+|Spark|Apache 2.0.0~2.4.7, <br/>CDH >= 5.4.0, <br/>(default Apache Spark 2.4.3)|\>=1.0.3|Yes|Spark EngineConn, supports SQL, Scala, Pyspark and R code|
+|Hive|Apache >= 1.0.0, <br/>CDH >= 5.4.0, <br/>(default Apache Hive 2.3.3)|\>=1.0.3|Yes|Hive EngineConn, supports HiveQL code|
+|Python|Python >= 2.6, <br/>(default Python2*)|\>=1.0.3|Yes|Python EngineConn, supports Python code|
+|Shell|Bash >= 2.0|\>=1.0.3|Yes|Shell EngineConn, supports Bash shell code|
+|JDBC|MySQL >= 5.0, Hive >= 1.2.1, <br/>(default Hive-jdbc 2.3.4)|\>=1.0.3|No|JDBC EngineConn, supports MySQL and HiveQL, and can be quickly extended to other engines that provide a JDBC driver package, such as Oracle|
+|Flink|Flink >= 1.12.2, <br/>(default Apache Flink 1.12.2)|\>=1.0.2|No|Flink EngineConn, supports FlinkSQL code, and also supports submitting a Flink Jar to start a new Yarn application|
+|Pipeline|-|\>=1.0.2|No|Pipeline EngineConn, supports file import and export|
+|openLooKeng|openLooKeng >= 1.5.0, <br/>(default openLooKeng 1.5.0)|\>=1.1.1|No|openLooKeng EngineConn, supports querying the openLooKeng data virtualization engine with SQL|
+|Sqoop|Sqoop >= 1.4.6, <br/>(default Apache Sqoop 1.4.6)|\>=1.1.2|No|Sqoop EngineConn, supports the data migration tool Sqoop engine|
+|Presto|Presto >= 0.180|\>=1.2.0|No|Presto EngineConn, supports Presto SQL code|
+|ElasticSearch|ElasticSearch >= 6.0|\>=1.2.0|No|ElasticSearch EngineConn, supports SQL and DSL code|
+|Trino|Trino >= 371|\>=1.3.1|No|Trino EngineConn, supports Trino SQL code|
+|Seatunnel|Seatunnel >= 2.1.2|\>=1.3.1|No|Seatunnel EngineConn, supports Seatunnel SQL code|
+
 ## Download
 
 Please go to the [Linkis releases page](https://github.com/apache/linkis/releases) to download a compiled distribution or a source code package of Linkis.
diff --git a/docs/deployment/deploy-quick.md b/docs/deployment/deploy-quick.md
index 5f4b8f337f..e6ce809b4f 100644
--- a/docs/deployment/deploy-quick.md
+++ b/docs/deployment/deploy-quick.md
@@ -286,16 +286,14 @@ The Linkis will start 6 microservices by default, and the linkis-cg-engineconn s
 
 ```shell script
 LINKIS-CG-ENGINECONNMANAGER Engine Management Services
-LINKIS-CG-ENGINEPLUGIN Engine Plugin Management Service
 LINKIS-CG-ENTRANCE Computing Governance Entry Service
 LINKIS-CG-LINKISMANAGER Computing Governance Management Service
 LINKIS-MG-EUREKA Microservice registry service
 LINKIS-MG-GATEWAY gateway service
-LINKIS-PS-CS context service
 LINKIS-PS-PUBLICSERVICE Public Service
 ````
 
-Note: Linkis-ps-cs, Linkis-ps-data-source-Manager and Linkis-Ps-Metadatamanager services have been merged into Linkis-Ps-PublicService in Linkis 1.3.1 and merge LINKIS-CG-ENGINECONNMANAGER services into LINKIS-CG-LINKISMANAGER.
+Note: the LINKIS-PS-CS, LINKIS-PS-DATA-SOURCE-MANAGER and LINKIS-PS-METADATAMANAGER services have been merged into LINKIS-PS-PUBLICSERVICE in Linkis 1.3.1, and the LINKIS-CG-ENGINEPLUGIN service has been merged into LINKIS-CG-LINKISMANAGER.
 
 If any services are not started, you can view detailed exception logs in the corresponding log/${service name}.log file.
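+
+As a quick sanity check, the sketch below lists the service names currently registered with Eureka and tails the log of a service that failed to come up. It is illustrative only: it assumes the default Eureka port (20303) and that it is run from the Linkis installation directory — adjust both to your deployment:
+
+```shell script
+# List the service names registered with Eureka (the standard Eureka REST
+# endpoint returns XML; registered services appear inside <name> tags)
+curl -s http://127.0.0.1:20303/eureka/apps | grep '<name>'
+
+# Tail the log of a service that did not start, e.g. LINKIS-CG-LINKISMANAGER
+tail -n 200 log/linkis-cg-linkismanager.log
+```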
 
diff --git a/docs/user-guide/sdk-manual.md b/docs/user-guide/sdk-manual.md
index 793799833e..765a27fe94 100644
--- a/docs/user-guide/sdk-manual.md
+++ b/docs/user-guide/sdk-manual.md
@@ -101,6 +101,19 @@ sidebar_position: 3
   </tr >
 </table>
 
+
+**Linkis common labels**
+
+|label key|label value|description|
+|:-|:-|:-|
+|engineType| spark-2.4.3 | the engine type and version |
+|userCreator| user + "-AppName" | the running user and your AppName |
+|codeType| sql | the script type |
+|jobRunningTimeout| 10 | if the job has not finished after running for 10s, it is automatically killed; the unit is seconds |
+|jobQueuingTimeout| 10 | if the job has been queued for more than 10s, it is automatically killed; the unit is seconds |
+|jobRetryTimeout| 10000 | the waiting time, in ms, between retries when a job fails for resource or other reasons; if the failure is due to insufficient queue resources, 10 retries are initiated by default |
+|tenant| hduser02 | tenant label |
+
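+As an illustration of how these labels travel with a task, here is a minimal sketch of a submission to the entrance REST interface through the gateway (host, port, user and code are placeholders; the `labels` map below is the same structure the SDK builds for you, and a valid login session or token is assumed):
+
+```shell script
+# Hypothetical example: submit a SparkSQL task carrying engineType and
+# userCreator labels. Adjust the gateway address (default port 9001) and
+# authenticate first.
+curl -s -X POST "http://127.0.0.1:9001/api/rest_j/v1/entrance/submit" \
+  -H "Content-Type: application/json" \
+  -d '{
+        "executionContent": {"code": "show tables", "runType": "sql"},
+        "labels": {
+          "engineType": "spark-2.4.3",
+          "userCreator": "hadoop-IDE"
+        }
+      }'
+```
+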
 ## 1. Import dependent modules
 ```
 <dependency>
diff --git a/i18n/zh-CN/docusaurus-plugin-content-blog/2022-04-15-how-to-download-engineconn-plugin.md b/i18n/zh-CN/docusaurus-plugin-content-blog/2022-04-15-how-to-download-engineconn-plugin.md
index 87e2013b84..f71f6d52f6 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-blog/2022-04-15-how-to-download-engineconn-plugin.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-blog/2022-04-15-how-to-download-engineconn-plugin.md
@@ -36,8 +36,8 @@ tags: [engine,guide]
 |Sqoop| Sqoop >= 1.4.6, <br/>(default Apache Sqoop 1.4.6)|\>=1.1.2|No|Sqoop EngineConn, supports the data migration tool Sqoop engine|
 |Presto|Presto >= 0.180|\>=1.2.0|No|Presto EngineConn, supports Presto SQL code|
 |ElasticSearch|ElasticSearch >=6.0|\>=1.2.0|No|ElasticSearch EngineConn, supports SQL and DSL code|
-|Trino | 371 | >=1.3.1 | No | Trino EngineConn, supports Trino SQL code |
-|Seatunnel | 2.1.2 | >=1.3.1 | No | Seatunnel EngineConn, supports Seatunnel SQL code |
+|Trino | Trino >=371 | >=1.3.1 | No | Trino EngineConn, supports Trino SQL code |
+|Seatunnel | Seatunnel >=2.1.2 | >=1.3.1 | No | Seatunnel EngineConn, supports Seatunnel SQL code |
 
 ## Install engine guide
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/about/introduction.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/about/introduction.md
index b76b7a19a5..50faf8c06e 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/about/introduction.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/about/introduction.md
@@ -13,16 +13,14 @@ Since its open-source release in 2019, Linkis has accumulated more than 700 trial companies and
 ![After adopting Linkis](/Images-zh/after_linkis_cn.png)
 
 ## Core features
-- **Rich underlying computation storage engine support**:
-    **Currently supported computation/storage engines**: Spark, Hive, Flink, Python, Pipeline, Sqoop, openLooKeng, Presto, ElasticSearch, Trino, SeaTunnel, JAVA, Shell, etc.
-    **Supported scripting languages**: SparkSQL, HiveQL, Python, Shell, Pyspark, R, Scala, JDBC, etc.
-- **Powerful computing governance capability**: Based on services such as Orchestrator, Label Manager and a customized Spring Cloud Gateway, Linkis provides multi-level label based, cross-cluster/cross-IDC fine-grained routing, load balancing, multi-tenancy, traffic control, resource control and orchestration strategies (such as dual-active and active-standby).
-- **Full-stack computation/storage engine support**: Receives, executes and manages tasks and requests for various computation storage engines, including offline batch tasks, interactive query tasks, real-time streaming tasks and storage tasks.
-- **Resource management capability**: The ResourceManager in Linkis not only manages resources for Yarn and the Linkis EngineManager, but also provides label-based multi-level resource allocation and recycling, giving it strong resource management capabilities across clusters and computation resource types.
-- **Unified context service**: Generates a context id for each computing task, and associates and manages user and system resource files (JAR, ZIP, Properties, etc.), result sets, parameter variables, functions, etc. across users, systems and computing engines; set once, automatically referenced everywhere.
-- **Unified materials**: System- and user-level material management; materials can be shared and circulated across users and systems.
-- **Unified data source management**: Provides create/read/update/delete, version control and connection testing for hive, es, mysql and kafka data sources.
-- **Metadata queries for data sources**: Provides database, table and partition queries against hive, es, mysql and kafka metadata.
+- **Rich underlying computation storage engine support**: Spark, Hive, Python, Shell, Flink, JDBC, Pipeline, Sqoop, OpenLooKeng, Presto, ElasticSearch, Trino, SeaTunnel, etc.;
+- **Rich language support**: SparkSQL, HiveSQL, Python, Shell, Pyspark, Scala, JSON, Java, etc.;
+- **Powerful computing governance capability**: provides task routing, load balancing, multi-tenancy, traffic control, resource control and other capabilities based on multi-level labels;
+- **Full-stack computation/storage engine support**: receives, executes and manages tasks and requests for various computation storage engines, including offline batch tasks, interactive query tasks, real-time streaming tasks and data lake tasks;
+- **Unified context service**: associates and manages user and system resource files (JAR, ZIP, Properties, etc.), result sets, parameter variables, functions, UDFs, etc. across users, systems and computing engines; set once, automatically referenced everywhere;
+- **Unified materials**: provides system- and user-level material management; materials can be shared and circulated across users and systems;
+- **Unified data source management**: provides create/read/update/delete, version control and connection testing for Hive, ElasticSearch, MySQL, Kafka, MongoDB and other data sources, plus metadata queries for the corresponding data sources;
+- **Error code capability**: provides error codes and solutions for common task errors, making it easy for users to locate problems on their own;
 
 ## Supported engine types
 | **Engine name** | **Supported underlying component versions<br/>(default dependency version)** | **Linkis 1.X version requirement** | **Included in release package by default** | **Description** |
@@ -38,12 +36,8 @@ Since its open-source release in 2019, Linkis has accumulated more than 700 trial companies and
 |Sqoop| Sqoop >= 1.4.6, <br/>(default Apache Sqoop 1.4.6)|\>=1.1.2|No|Sqoop EngineConn, supports the data migration tool Sqoop engine.|
 |Presto|Presto >= 0.180|\>=1.2.0|No|Presto EngineConn, supports Presto SQL code.|
 |ElasticSearch|ElasticSearch >=6.0|\>=1.2.0|No|ElasticSearch EngineConn, supports SQL and DSL code.|
-|Impala|Impala >= 3.2.0, CDH >=6.3.0|ongoing|-|Impala EngineConn, supports Impala SQL code.|
-|MLSQL| MLSQL >=1.1.0|ongoing|-|MLSQL EngineConn, supports MLSQL code.|
-|Hadoop|Apache >=2.6.0, <br/>CDH >=5.4.0|ongoing|-|Hadoop EngineConn, supports Hadoop MR/YARN applications.|
-|TiSpark|1.1|ongoing|-|TiSpark EngineConn, supports querying TiDB with SparkSQL.|
-|Trino | 371 | >=1.3.1 | No | Trino EngineConn, supports Trino SQL code |
-|Seatunnel | 2.1.2 | >=1.3.1 | No | Seatunnel EngineConn, supports Seatunnel SQL code |
+|Trino | Trino >=371 | >=1.3.1 | No | Trino EngineConn, supports Trino SQL code |
+|Seatunnel | Seatunnel >=2.1.2 | >=1.3.1 | No | Seatunnel EngineConn, supports Seatunnel SQL code |
 
 
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/deployment/deploy-quick.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/deployment/deploy-quick.md
index 3479aef4d8..fae35c6a20 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/deployment/deploy-quick.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/deployment/deploy-quick.md
@@ -286,7 +286,7 @@ LINKIS-MG-GATEWAY  gateway service
 LINKIS-PS-PUBLICSERVICE Public Service
 ```
 
-Note: In Linkis 1.3.1, the LINKIS-PS-CS, LINKIS-PS-DATA-SOURCE-MANAGER and LINKIS-PS-METADATAMANAGER services have been merged into LINKIS-PS-PUBLICSERVICE, and the LINKIS-CG-ENGINECONNMANAGER service has been merged into LINKIS-CG-LINKISMANAGER.
+Note: In Linkis 1.3.1, the LINKIS-PS-CS, LINKIS-PS-DATA-SOURCE-MANAGER and LINKIS-PS-METADATAMANAGER services have been merged into LINKIS-PS-PUBLICSERVICE, and the LINKIS-CG-ENGINEPLUGIN service has been merged into LINKIS-CG-LINKISMANAGER.
 
 If any service fails to start, you can view detailed exception logs in the corresponding log/${service name}.log file.
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/user-guide/sdk-manual.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/user-guide/sdk-manual.md
index 9bc06c4b24..f3136770ee 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/user-guide/sdk-manual.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/user-guide/sdk-manual.md
@@ -101,6 +101,19 @@ sidebar_position: 3
   </tr >
 </table>
 
+**Linkis common labels**
+
+|label key|label value|description|
+|:-|:-|:-|
+|engineType| spark-2.4.3 | specifies the engine type and version |
+|userCreator| user + "-AppName" | specifies the running user and your AppName |
+|codeType| sql | specifies the script type |
+|jobRunningTimeout| 10 | if the job has not finished after running for 10s, it is automatically killed; the unit is seconds |
+|jobQueuingTimeout| 10 | if the job has been queued for more than 10s, it is automatically killed; the unit is seconds |
+|jobRetryTimeout| 10000 | the waiting time, in ms, between retries when a job fails for resource or other reasons; if the failure is due to insufficient queue resources, 10 retries are initiated at this interval by default |
+|tenant| hduser02 | tenant label; before setting it, contact the BDP side to arrange dedicated machines for isolation, so that tasks are routed to those machines |
+
+
 ## 1. Import dependent modules
 ```
 <dependency>
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.1/about/introduction.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.1/about/introduction.md
index 8f3d468e70..e4b83f642f 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.1/about/introduction.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.1/about/introduction.md
@@ -13,17 +13,14 @@ Since its open-source release in 2019, Linkis has accumulated more than 700 trial companies and
 ![After adopting Linkis](/Images-zh/after_linkis_cn.png)
 
 ## Core features
-- **Rich underlying computation storage engine support**:
-    **Currently supported computation/storage engines**: Spark, Hive, Flink, Python, Pipeline, Sqoop, openLooKeng, Presto, ElasticSearch, JDBC, Shell, etc.
-    **Computation/storage engines being supported**: Trino (planned for 1.3.1), SeaTunnel (planned for 1.3.1), etc.
-    **Supported scripting languages**: SparkSQL, HiveQL, Python, Shell, Pyspark, R, Scala, JDBC, etc.
-- **Powerful computing governance capability**: Based on services such as Orchestrator, Label Manager and a customized Spring Cloud Gateway, Linkis provides multi-level label based, cross-cluster/cross-IDC fine-grained routing, load balancing, multi-tenancy, traffic control, resource control and orchestration strategies (such as dual-active and active-standby).
-- **Full-stack computation/storage engine support**: Receives, executes and manages tasks and requests for various computation storage engines, including offline batch tasks, interactive query tasks, real-time streaming tasks and storage tasks.
-- **Resource management capability**: The ResourceManager not only manages resources for Yarn and the Linkis EngineManager, but also provides label-based multi-level resource allocation and recycling, giving it strong resource management capabilities across clusters and computation resource types.
-- **Unified context service**: Generates a context id for each computing task, and associates and manages user and system resource files (JAR, ZIP, Properties, etc.), result sets, parameter variables, functions, etc. across users, systems and computing engines; set once, automatically referenced everywhere.
-- **Unified materials**: System- and user-level material management; materials can be shared and circulated across users and systems.
-- **Unified data source management**: Provides create/read/update/delete, version control and connection testing for hive, es, mysql and kafka data sources.
-- **Metadata queries for data sources**: Provides database, table and partition queries against hive, es, mysql and kafka metadata.
+- **Rich underlying computation storage engine support**: Spark, Hive, Python, Shell, Flink, JDBC, Pipeline, Sqoop, OpenLooKeng, Presto, ElasticSearch, Trino, SeaTunnel, etc.;
+- **Rich language support**: SparkSQL, HiveSQL, Python, Shell, Pyspark, Scala, JSON, Java, etc.;
+- **Powerful computing governance capability**: provides task routing, load balancing, multi-tenancy, traffic control, resource control and other capabilities based on multi-level labels;
+- **Full-stack computation/storage engine support**: receives, executes and manages tasks and requests for various computation storage engines, including offline batch tasks, interactive query tasks, real-time streaming tasks and data lake tasks;
+- **Unified context service**: associates and manages user and system resource files (JAR, ZIP, Properties, etc.), result sets, parameter variables, functions, UDFs, etc. across users, systems and computing engines; set once, automatically referenced everywhere;
+- **Unified materials**: provides system- and user-level material management; materials can be shared and circulated across users and systems;
+- **Unified data source management**: provides create/read/update/delete, version control and connection testing for Hive, ElasticSearch, MySQL, Kafka, MongoDB and other data sources, plus metadata queries for the corresponding data sources;
+- **Error code capability**: provides error codes and solutions for common task errors, making it easy for users to locate problems on their own;
 
 ## Supported engine types
 | **Engine name** | **Supported underlying component versions<br/>(default dependency version)** | **Linkis 1.X version requirement** | **Included in release package by default** | **Description** |
@@ -39,10 +36,8 @@ Since its open-source release in 2019, Linkis has accumulated more than 700 trial companies and
 |Sqoop| Sqoop >= 1.4.6, <br/>(default Apache Sqoop 1.4.6)|\>=1.1.2|No|Sqoop EngineConn, supports the data migration tool Sqoop engine.|
 |Presto|Presto >= 0.180|\>=1.2.0|No|Presto EngineConn, supports Presto SQL code.|
 |ElasticSearch|ElasticSearch >=6.0|\>=1.2.0|No|ElasticSearch EngineConn, supports SQL and DSL code.|
-|Impala|Impala >= 3.2.0, CDH >=6.3.0|ongoing|-|Impala EngineConn, supports Impala SQL code.|
-|MLSQL| MLSQL >=1.1.0|ongoing|-|MLSQL EngineConn, supports MLSQL code.|
-|Hadoop|Apache >=2.6.0, <br/>CDH >=5.4.0|ongoing|-|Hadoop EngineConn, supports Hadoop MR/YARN applications.|
-|TiSpark|1.1|ongoing|-|TiSpark EngineConn, supports querying TiDB with SparkSQL.|
+|Trino | Trino >=371 | >=1.3.1 | No | Trino EngineConn, supports Trino SQL code |
+|Seatunnel | Seatunnel >=2.1.2 | >=1.3.1 | No | Seatunnel EngineConn, supports Seatunnel SQL code |
 
 
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.1/deployment/deploy-quick.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.1/deployment/deploy-quick.md
index 3479aef4d8..fae35c6a20 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.1/deployment/deploy-quick.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.1/deployment/deploy-quick.md
@@ -286,7 +286,7 @@ LINKIS-MG-GATEWAY  gateway service
 LINKIS-PS-PUBLICSERVICE Public Service
 ```
 
-Note: In Linkis 1.3.1, the LINKIS-PS-CS, LINKIS-PS-DATA-SOURCE-MANAGER and LINKIS-PS-METADATAMANAGER services have been merged into LINKIS-PS-PUBLICSERVICE, and the LINKIS-CG-ENGINECONNMANAGER service has been merged into LINKIS-CG-LINKISMANAGER.
+Note: In Linkis 1.3.1, the LINKIS-PS-CS, LINKIS-PS-DATA-SOURCE-MANAGER and LINKIS-PS-METADATAMANAGER services have been merged into LINKIS-PS-PUBLICSERVICE, and the LINKIS-CG-ENGINEPLUGIN service has been merged into LINKIS-CG-LINKISMANAGER.
 
 If any service fails to start, you can view detailed exception logs in the corresponding log/${service name}.log file.
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.1/user-guide/sdk-manual.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.1/user-guide/sdk-manual.md
index 9bc06c4b24..391764e9fb 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.1/user-guide/sdk-manual.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.1/user-guide/sdk-manual.md
@@ -101,6 +101,18 @@ sidebar_position: 3
   </tr >
 </table>
 
+**Linkis common labels**
+
+|label key|label value|description|
+|:-|:-|:-|
+|engineType| spark-2.4.3 | specifies the engine type and version |
+|userCreator| user + "-AppName" | specifies the running user and your AppName |
+|codeType| sql | specifies the script type |
+|jobRunningTimeout| 10 | if the job has not finished after running for 10s, it is automatically killed; the unit is seconds |
+|jobQueuingTimeout| 10 | if the job has been queued for more than 10s, it is automatically killed; the unit is seconds |
+|jobRetryTimeout| 10000 | the waiting time, in ms, between retries when a job fails for resource or other reasons; if the failure is due to insufficient queue resources, 10 retries are initiated at this interval by default |
+|tenant| hduser02 | tenant label; before setting it, contact the BDP side to arrange dedicated machines for isolation, so that tasks are routed to those machines |
+
 ## 1. Import dependent modules
 ```
 <dependency>
diff --git a/versioned_docs/version-1.3.1/about/introduction.md b/versioned_docs/version-1.3.1/about/introduction.md
index 9346e86ffa..78b228e9dc 100644
--- a/versioned_docs/version-1.3.1/about/introduction.md
+++ b/versioned_docs/version-1.3.1/about/introduction.md
@@ -16,34 +16,40 @@ Since the first release of Linkis in 2019, it has accumulated more than **700**
 
 ## Features
 
-- **Support for diverse underlying computation storage engines**:  
-    Currently supported computation/storage engines: Spark, Hive, Python, Presto, ElasticSearch, MLSQL, TiSpark, JDBC, Shell, etc;      
-    Computation/storage engines to be supported: Flink(Supported in version >=1.0.2), Impala, etc;      
-    Supported scripting languages: SparkSQL, HiveQL, Python, Shell, Pyspark, R, Scala and JDBC, etc.  
-- **Powerful task/request governance capabilities**: With services such as Orchestrator, Label Manager and customized Spring Cloud Gateway, Linkis is able to provide multi-level labels based, cross-cluster/cross-IDC fine-grained routing, load balance, multi-tenancy, traffic control, resource control, and orchestration strategies like dual-active, active-standby, etc.  
-- **Support full stack computation/storage engine**: As a computation middleware, it will receive, execute and manage tasks and requests for various computation storage engines, including batch tasks, interactive query tasks, real-time streaming tasks and storage tasks;
-- **Resource management capabilities**:  ResourceManager is not only capable of managing resources for Yarn and Linkis EngineManger as in Linkis 0.X, but also able to provide label-based multi-level resource allocation and recycling, allowing itself to have powerful resource management capabilities across multiple Yarn clusters and multiple computation resource types.
-- **Unified Context Service**: Generate Context ID for each **task**/request,  associate and manage user and system resource files (JAR, ZIP, Properties, etc.), result set, parameter variable, function, etc., across user, system, and computing engine. Set in one place, automatic reference everywhere.
-- **Unified materials**: System and user-level unified material management, which can be shared and transferred across users and systems.
-- **Unified Data Source Manage**: Provides functions such as adding, deleting, checking, and modifying data sources of hive, es, mysql, and kafka types, version control, and connection testing.
-- **Unified MetaData Manage**: Provides database, table, and partition queries for hive, es, mysql, and kafka metadata.
+- **Support for diverse underlying computation storage engines**: Spark, Hive, Python, Shell, Flink, JDBC, Pipeline, Sqoop, OpenLooKeng, Presto, ElasticSearch, Trino, SeaTunnel, etc.;
+
+- **Support for diverse languages**: SparkSQL, HiveSQL, Python, Shell, Pyspark, Scala, JSON and Java;
+
+- **Powerful computing governance capability**: provides task routing, load balancing, multi-tenancy, traffic control, resource control and other capabilities based on multi-level labels;
+
+- **Support for full-stack computation/storage engines**: receives, executes and manages tasks and requests for various compute and storage engines, including offline batch tasks, interactive query tasks, real-time streaming tasks and data lake tasks;
+
+- **Unified context service**: associates and manages user and system resource files (JAR, ZIP, Properties, etc.), result sets, parameter variables, functions, UDFs, etc. across users, systems and computing engines; set once, automatically referenced everywhere;
+
+- **Unified materials**: provides system- and user-level material management; materials can be shared and circulated across users and systems;
+
+- **Unified data source management**: provides create/read/update/delete operations, version control and connection testing for Hive, ElasticSearch, MySQL, Kafka, MongoDB and other data sources, plus metadata queries for the corresponding data sources;
+
+- **Error code capability**: provides error codes and solutions for common task errors, making it easy for users to locate problems on their own;
+
 
 ## Supported engine types
 
-| **Engine** | **Supported Version** | **Linkis 0.X version requirement**| **Linkis 1.X version requirement** | **Description** |
+| **Engine name** | **Supported underlying component versions<br/>(default dependency version)** | **Linkis Version Requirements** | **Included in Release Package By Default** | **Description** |
 |:---- |:---- |:---- |:---- |:---- |
-|Flink |1.12.2|\>=dev-0.12.0, PR #703 not merged yet.|>=1.0.2|	Flink EngineConn. Supports FlinkSQL code, and also supports Flink Jar to Linkis Manager to start a new Yarn application.|
-|Impala|\>=3.2.0, CDH >=6.3.0"|\>=dev-0.12.0, PR #703 not merged yet.|ongoing|Impala EngineConn. Supports Impala SQL.|
-|Presto|\>= 0.180|\>=0.11.0|ongoing|Presto EngineConn. Supports Presto SQL.|
-|ElasticSearch|\>=6.0|\>=0.11.0|ongoing|ElasticSearch EngineConn. Supports SQL and DSL code.|
-|Shell|Bash >=2.0|\>=0.9.3|\>=1.0.0_rc1|Shell EngineConn. Supports shell code.|
-|MLSQL|\>=1.1.0|\>=0.9.1|ongoing|MLSQL EngineConn. Supports MLSQL code.|
-|JDBC|MySQL >=5.0, Hive >=1.2.1|\>=0.9.0|\>=1.0.0_rc1|JDBC EngineConn. Supports MySQL and HiveQL code.|
-|Spark|Apache 2.0.0~2.4.7, CDH >=5.4.0|\>=0.5.0|\>=1.0.0_rc1|Spark EngineConn. Supports SQL, Scala, Pyspark and R code.|
-|Hive|Apache >=1.0.0, CDH >=5.4.0|\>=0.5.0|\>=1.0.0_rc1|Hive EngineConn. Supports HiveQL code.|
-|Hadoop|Apache >=2.6.0, CDH >=5.4.0|\>=0.5.0|ongoing|Hadoop EngineConn. Supports Hadoop MR/YARN application.|
-|Python|\>=2.6|\>=0.5.0|\>=1.0.0_rc1|Python EngineConn. Supports python code.|
-|TiSpark|1.1|\>=0.5.0|ongoing|TiSpark EngineConn. Support querying TiDB data by SparkSQL.|
+|Spark|Apache 2.0.0~2.4.7, <br/>CDH >= 5.4.0, <br/>(default Apache Spark 2.4.3)|\>=1.0.3|Yes|Spark EngineConn, supports SQL, Scala, Pyspark and R code|
+|Hive|Apache >= 1.0.0, <br/>CDH >= 5.4.0, <br/>(default Apache Hive 2.3.3)|\>=1.0.3|Yes|Hive EngineConn, supports HiveQL code|
+|Python|Python >= 2.6, <br/>(default Python2*)|\>=1.0.3|Yes|Python EngineConn, supports Python code|
+|Shell|Bash >= 2.0|\>=1.0.3|Yes|Shell EngineConn, supports Bash shell code|
+|JDBC|MySQL >= 5.0, Hive >= 1.2.1, <br/>(default Hive-jdbc 2.3.4)|\>=1.0.3|No|JDBC EngineConn, supports MySQL and HiveQL, and can be quickly extended to other engines that provide a JDBC driver package, such as Oracle|
+|Flink|Flink >= 1.12.2, <br/>(default Apache Flink 1.12.2)|\>=1.0.2|No|Flink EngineConn, supports FlinkSQL code, and also supports submitting a Flink Jar to start a new Yarn application|
+|Pipeline|-|\>=1.0.2|No|Pipeline EngineConn, supports file import and export|
+|openLooKeng|openLooKeng >= 1.5.0, <br/>(default openLooKeng 1.5.0)|\>=1.1.1|No|openLooKeng EngineConn, supports querying the openLooKeng data virtualization engine with SQL|
+|Sqoop|Sqoop >= 1.4.6, <br/>(default Apache Sqoop 1.4.6)|\>=1.1.2|No|Sqoop EngineConn, supports the data migration tool Sqoop engine|
+|Presto|Presto >= 0.180|\>=1.2.0|No|Presto EngineConn, supports Presto SQL code|
+|ElasticSearch|ElasticSearch >= 6.0|\>=1.2.0|No|ElasticSearch EngineConn, supports SQL and DSL code|
+|Trino|Trino >= 371|\>=1.3.1|No|Trino EngineConn, supports Trino SQL code|
+|Seatunnel|Seatunnel >= 2.1.2|\>=1.3.1|No|Seatunnel EngineConn, supports Seatunnel SQL code|
 
 ## Download
 
diff --git a/versioned_docs/version-1.3.1/deployment/deploy-quick.md b/versioned_docs/version-1.3.1/deployment/deploy-quick.md
index 5f4b8f337f..e6ce809b4f 100644
--- a/versioned_docs/version-1.3.1/deployment/deploy-quick.md
+++ b/versioned_docs/version-1.3.1/deployment/deploy-quick.md
@@ -286,16 +286,14 @@ The Linkis will start 6 microservices by default, and the linkis-cg-engineconn s
 
 ```shell script
 LINKIS-CG-ENGINECONNMANAGER Engine Management Services
-LINKIS-CG-ENGINEPLUGIN Engine Plugin Management Service
 LINKIS-CG-ENTRANCE Computing Governance Entry Service
 LINKIS-CG-LINKISMANAGER Computing Governance Management Service
 LINKIS-MG-EUREKA Microservice registry service
 LINKIS-MG-GATEWAY gateway service
-LINKIS-PS-CS context service
 LINKIS-PS-PUBLICSERVICE Public Service
 ````
 
-Note: Linkis-ps-cs, Linkis-ps-data-source-Manager and Linkis-Ps-Metadatamanager services have been merged into Linkis-Ps-PublicService in Linkis 1.3.1 and merge LINKIS-CG-ENGINECONNMANAGER services into LINKIS-CG-LINKISMANAGER.
+Note: the LINKIS-PS-CS, LINKIS-PS-DATA-SOURCE-MANAGER and LINKIS-PS-METADATAMANAGER services have been merged into LINKIS-PS-PUBLICSERVICE in Linkis 1.3.1, and the LINKIS-CG-ENGINEPLUGIN service has been merged into LINKIS-CG-LINKISMANAGER.
 
 If any services are not started, you can view detailed exception logs in the corresponding log/${service name}.log file.
 
diff --git a/versioned_docs/version-1.3.1/user-guide/sdk-manual.md b/versioned_docs/version-1.3.1/user-guide/sdk-manual.md
index 793799833e..5794830b64 100644
--- a/versioned_docs/version-1.3.1/user-guide/sdk-manual.md
+++ b/versioned_docs/version-1.3.1/user-guide/sdk-manual.md
@@ -101,6 +101,18 @@ sidebar_position: 3
   </tr >
 </table>
 
+**Linkis common labels**
+
+|label key|label value|description|
+|:-|:-|:-|
+|engineType| spark-2.4.3 | the engine type and version |
+|userCreator| user + "-AppName" | the running user and your AppName |
+|codeType| sql | the script type |
+|jobRunningTimeout| 10 | if the job has not finished after running for 10s, it is automatically killed; the unit is seconds |
+|jobQueuingTimeout| 10 | if the job has been queued for more than 10s, it is automatically killed; the unit is seconds |
+|jobRetryTimeout| 10000 | the waiting time, in ms, between retries when a job fails for resource or other reasons; if the failure is due to insufficient queue resources, 10 retries are initiated by default |
+|tenant| hduser02 | tenant label |
+
 ## 1. Import dependent modules
 ```
 <dependency>

