Posted to commits@linkis.apache.org by le...@apache.org on 2022/06/18 02:24:14 UTC

[incubator-linkis-website] branch dev updated: 1.1.1 Keep the documentation for creating a new engine consistent with 1.1.0 (#351)

This is an automated email from the ASF dual-hosted git repository.

leojie pushed a commit to branch dev
in repository https://gitbox.apache.org/repos/asf/incubator-linkis-website.git


The following commit(s) were added to refs/heads/dev by this push:
     new 72161bf3c 1.1.1 Keep the documentation for creating a new engine consistent with 1.1.0 (#351)
72161bf3c is described below

commit 72161bf3c7e781ad4cc1c3cabdaa1e947c48bb6f
Author: weixiao <le...@gmail.com>
AuthorDate: Sat Jun 18 10:24:09 2022 +0800

    1.1.1 Keep the documentation for creating a new engine consistent with 1.1.0 (#351)
---
 .../version-1.1.1/development/new_engine_conn.md   | 529 ++++++++++++++++++---
 .../version-1.1.1/development/new_engine_conn.md   | 529 ++++++++++++++++++---
 2 files changed, 951 insertions(+), 107 deletions(-)

diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.1.1/development/new_engine_conn.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.1.1/development/new_engine_conn.md
index 6153df43d..260b57431 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.1.1/development/new_engine_conn.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.1.1/development/new_engine_conn.md
@@ -3,80 +3,503 @@ title: How to implement a new engine
 sidebar_position: 3
 ---
 
+## 1. Implementing the new engine code in Linkis
+
 Implementing a new engine is actually implementing a new EngineConnPlugin (ECP) engine plugin. The specific steps are as follows:
 
-1. Create a new maven module and introduce the maven dependency of ECP:
-```
+### 1.1 Create a new maven module and introduce the maven dependency of ECP
+
+![maven dependency](/Images-zh/EngineConnNew/engine_jdbc_dependency.png)
+
+```xml
 <dependency>
-<groupId>org.apache.linkis</groupId>
-<artifactId>linkis-engineconn-plugin-core</artifactId>
-<version>${linkis.version}</version>
+	<groupId>org.apache.linkis</groupId>
+	<artifactId>linkis-engineconn-plugin-core</artifactId>
+	<version>${linkis.version}</version>
 </dependency>
+<!-- and some other required maven configurations -->
 ```
-2. Implement the main interfaces of ECP:
 
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;a) EngineConnPlugin: when starting an EngineConn, the corresponding EngineConnPlugin class is found first and used as the entry point to obtain the implementations of the other core interfaces; it is the main interface that must be implemented.
-    
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;b) EngineConnFactory: implements the logic of how to start an engine connector and how to start an engine executor; it is an interface that must be implemented.
+### 1.2 Implement the main interfaces of ECP
+
+- **EngineConnPlugin:** When starting an EngineConn, the corresponding EngineConnPlugin class is found first and used as the entry point to obtain the implementations of the other core interfaces. It is the main interface that must be implemented.
 
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;b.a Implement the createEngineConn method: return an EngineConn object, where getEngine returns an object that encapsulates the connection information with the underlying engine, and also contains the Engine type information.
-    
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;b.b For engines that only support a single computing scenario, inherit SingleExecutorEngineConnFactory, implement createExecutor, and return the corresponding Executor.
-    
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;b.c For engines that support multiple computing scenarios, you need to inherit MultiExecutorEngineConnFactory and implement an ExecutorFactory for each computation type. EngineConnPlugin will obtain all ExecutorFactory instances through reflection and return the corresponding Executor according to the actual situation.
-    
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;c) EngineConnResourceFactory: used to limit the resources required to start an engine; before the engine starts, resources will be applied for from Linkis Manager based on this. Not required; GenericEngineResourceFactory can be used by default.
+- **EngineConnFactory:** Implements the logic of how to start an engine connector and how to start an engine executor; it is an interface that must be implemented.
+    - Implement the createEngineConn method: return an EngineConn object, where getEngine returns an object that encapsulates the connection information with the underlying engine, and also contains the Engine type information.
+    - For engines that only support a single computing scenario, inherit SingleExecutorEngineConnFactory, implement createExecutor, and return the corresponding Executor.
+    - For engines that support multiple computing scenarios, you need to inherit MultiExecutorEngineConnFactory and implement an ExecutorFactory for each computation type. EngineConnPlugin will obtain all ExecutorFactory instances through reflection and return the corresponding Executor according to the actual situation.
+- **EngineConnResourceFactory:** Used to limit the resources required to start an engine; before the engine starts, resources will be applied for from Linkis Manager based on this. Not required; GenericEngineResourceFactory can be used by default.
+- **EngineLaunchBuilder:** Used to encapsulate the necessary information that EngineConnManager can parse into startup commands. Not required; you can directly inherit JavaProcessEngineConnLaunchBuilder. A rough sketch of how these interfaces fit together is shown below.
 
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;d) EngineLaunchBuilder: used to encapsulate the necessary information that EngineConnManager can parse into startup commands. Not required; you can directly inherit JavaProcessEngineConnLaunchBuilder.
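+Putting these pieces together, a rough sketch of a single-executor plugin is shown below. This is an illustration only: MyEngineConnPlugin and MyEngineConnFactory are hypothetical names, the method bodies are left unimplemented, and the exact interface signatures should be verified against linkis-engineconn-plugin-core.
+
+```scala
+// Illustrative sketch only; signatures are simplified and must be checked
+// against the real linkis-engineconn-plugin-core interfaces.
+class MyEngineConnPlugin extends EngineConnPlugin {
+  // Entry point: hands out the implementations of the other core interfaces
+  override def getEngineConnFactory = new MyEngineConnFactory
+  // The defaults are usually fine for an ordinary Java-process engine
+  override def getEngineResourceFactory = new GenericEngineResourceFactory
+  override def getEngineConnLaunchBuilder = new JavaProcessEngineConnLaunchBuilder
+}
+
+// A single computing scenario, so SingleExecutorEngineConnFactory is enough
+class MyEngineConnFactory extends SingleExecutorEngineConnFactory {
+  // Wrap the connection information of the underlying engine into an EngineConn
+  override def createEngineConn(engineCreationContext: EngineCreationContext): EngineConn = ???
+  // Create the Executor that will actually run user code
+  override def createExecutor(engineCreationContext: EngineCreationContext, engineConn: EngineConn): Executor = ???
+}
+```
+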
+### 1.3 Implement the engine's Executor logic
 
-3. Implement the Executor. The Executor, as the real executor of the computing scenario, is the actual execution unit of computing logic and an abstraction of the engine's various concrete capabilities, providing services such as locking, state access and log retrieval. According to actual needs, Linkis provides the following derived Executor base classes by default. The class names and main functions are as follows:
+The Executor, as the real executor of the computing scenario, is the actual execution unit of computing logic and an abstraction of the engine's various concrete capabilities, providing services such as locking, state access and log retrieval. According to actual needs, Linkis provides the following derived Executor base classes by default. The class names and main functions are as follows:
 
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;a) SensibleExecutor:
-       
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; i. The Executor has multiple states, and the Executor is allowed to switch states
-         
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;ii. After the Executor switches states, operations such as notifications are allowed
-         
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;b) YarnExecutor: refers to Yarn-type engines, which can obtain the applicationId, the applicationURL and the queue.
-       
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;c) ResourceExecutor: means the engine has the ability to change resources dynamically. It provides the requestExpectedResource method, used to apply to the RM for new resources each time the engine wants to change its resources, and the resourceUpdate method, used to report the actual resource usage to the RM each time it changes.
-       
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;d) AccessibleExecutor: a very important Executor base class. If the user's Executor inherits this base class, it means the Engine can be accessed. Here, distinguish between SensibleExecutor's state() and AccessibleExecutor's getEngineStatus() method: state() is used to obtain the engine status, while getEngineStatus() obtains basic Metric data such as the engine's status, load and concurrency.
-       
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;e) At the same time, if AccessibleExecutor is inherited, the Engine process will be triggered to instantiate multiple EngineReceiver methods. EngineReceiver is used to process RPC requests from Entrance, the EM and LinkisMaster, making the engine an accessible engine. If users have special RPC requirements, they can communicate with AccessibleExecutor by implementing the RPCService interface.
+- **SensibleExecutor:**
+    - The Executor has multiple states, and the Executor is allowed to switch states
+    - After the Executor switches states, operations such as notifications are allowed
+- **YarnExecutor:** refers to Yarn-type engines, which can obtain the applicationId, the applicationURL and the queue.
+- **ResourceExecutor:** means the engine has the ability to change resources dynamically. It provides the requestExpectedResource method, used to apply to the RM for new resources each time the engine wants to change its resources, and the resourceUpdate method, used to report the actual resource usage to the RM each time it changes.
+- **AccessibleExecutor:** a very important Executor base class. If the user's Executor inherits this base class, it means the Engine can be accessed. Here, distinguish between SensibleExecutor's state() and AccessibleExecutor's getEngineStatus() method: state() is used to obtain the engine status, while getEngineStatus() obtains basic Metric data such as the engine's status, load and concurrency.
+- At the same time, if AccessibleExecutor is inherited, the Engine process will be triggered to instantiate multiple EngineReceiver methods. EngineReceiver is used to process RPC requests from Entrance, the EM and LinkisMaster, making the engine an accessible engine. If users have special RPC requirements, they can communicate with AccessibleExecutor by implementing the RPCService interface.
+- **ExecutableExecutor:** a resident Executor base class. Resident Executors include: Streaming applications in the production center, scripts specified to run in independent mode after being submitted to Schedulis, business applications of business users, etc.
+- **StreamingExecutor:** Streaming is for streaming applications. It inherits from ExecutableExecutor and needs the ability to diagnose, do checkpoints, collect job information, and monitor alarms.
+- **ComputationExecutor:** the commonly used interactive engine Executor, which handles interactive execution tasks and has interactive capabilities such as status query and task kill.
+- **ConcurrentComputationExecutor:** a concurrent engine Executor shared by users, commonly used in JDBC-type engines. When scripts are executed, the engine instance is started by the administrator account and shared by all users. A sketch of a minimal interactive Executor is shown after this list.
 
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;f) ExecutableExecutor: a resident Executor base class. Resident Executors include: Streaming applications in the production center, scripts specified to run in independent mode after being submitted to Schedulis, business applications of business users, etc.
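+For an interactive engine, most of the work usually lands in executeLine. The following is a minimal sketch under the ComputationExecutor contract described above; runOnUnderlyingEngine is an assumed helper, and the constructor parameters and other required overrides of ComputationExecutor are omitted for brevity.
+
+```scala
+import org.apache.linkis.engineconn.computation.executor.execute.{ComputationExecutor, EngineExecutionContext}
+import org.apache.linkis.scheduler.executer.{ErrorExecuteResponse, ExecuteResponse, SuccessExecuteResponse}
+
+class MyEngineConnExecutor(val id: Int) extends ComputationExecutor {
+  // One statement in, one ExecuteResponse out (contract as described above)
+  override def executeLine(engineExecutorContext: EngineExecutionContext, code: String): ExecuteResponse =
+    try {
+      val rows = runOnUnderlyingEngine(code)
+      // Result sets, logs and progress are pushed back through engineExecutorContext
+      engineExecutorContext.appendStdout(s"fetched ${rows.size} rows")
+      SuccessExecuteResponse()
+    } catch {
+      case t: Throwable => ErrorExecuteResponse("statement execution failed", t)
+    }
+
+  // Assumed helper: submit one statement to the underlying engine
+  private def runOnUnderlyingEngine(code: String): Seq[String] = ???
+}
+```
+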
+## 2. Take the JDBC engine as an example to explain the implementation steps of a new engine in detail
 
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;g) StreamingExecutor: Streaming is for streaming applications; it inherits from ExecutableExecutor and needs the ability to diagnose, do checkpoints, collect job information, and monitor alarms.
+This chapter takes the JDBC engine as an example to explain the implementation process of a new engine in detail, including engine code compilation, installation and database configuration, engine label adaptation in the management console, the new engine script type extension in Scripts, and the task node extension for the new engine in workflows.
 
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;h) ComputationExecutor: a commonly used interactive engine Executor, which handles interactive execution tasks and has interactive capabilities such as status query and task kill.
+### 2.1 Setting the default startup user for concurrent engines
 
-
-## Actual case
-The following takes the Hive engine as a case to illustrate the implementation of each interface. The figure below shows all the core
-classes that need to be implemented for a Hive engine.
+The core class `JDBCEngineConnExecutor` in the JDBC engine inherits the abstract class `ConcurrentComputationExecutor`, while the core class `XXXEngineConnExecutor` of a computing engine inherits the abstract class `ComputationExecutor`. This leads to the biggest difference between the two: a JDBC engine instance is started by the administrator user and shared by all users to improve the utilization of machine resources, while for computing-engine-type scripts, an engine instance is started for each user at submission time, and engine instances are isolated between users. This is not elaborated here, because the additional changes described below are the same whether the engine is a concurrent engine or a computing engine.
 
-Hive engine is an interactive engine, so when implementing Executor, it inherits ComputationExecutor and
-introduces the following maven dependency:
+Correspondingly, if your new engine is a concurrent engine, you need to pay attention to the class AMConfiguration.scala; if your new engine is a computing engine, you can ignore it.
 
+```scala
+object AMConfiguration {
+  // If your engine is a multi-user concurrent engine, this configuration item needs attention
+  val MULTI_USER_ENGINE_TYPES = CommonVars("wds.linkis.multi.user.engine.types", "jdbc,ck,es,io_file,appconn")
+
+  private def getDefaultMultiEngineUser(): String = {
+    // This sets the startup user used when a concurrent engine is pulled up; by default jvmUser is the startup user of the engine service's Java process
+    val jvmUser = Utils.getJvmUser
+    s"""{jdbc:"$jvmUser", presto: "$jvmUser", es: "$jvmUser", ck:"$jvmUser", appconn:"$jvmUser", io_file:"root"}"""
+  }
+}
 ```
-<dependency>
-<groupId>org.apache.linkis</groupId>
-<artifactId>linkis-computation-engineconn</artifactId>
-<version>${linkis.version}</version>
-</dependency>
+
+### 2.2 New engine type extension
+
+In the class `JDBCEngineConnFactory` that implements the `ComputationSingleExecutorEngineConnFactory` interface, the following two methods need to be implemented:
+
+```scala
+override protected def getEngineConnType: EngineType = EngineType.JDBC
+
+override protected def getRunType: RunType = RunType.JDBC
+```
+
+Therefore, variables corresponding to JDBC need to be added in EngineType and RunType.
+
+```scala
+// In EngineType.scala, following the variable definitions of the existing engines, add JDBC-related variables or code
+object EngineType extends Enumeration with Logging {
+  val JDBC = Value("jdbc")
+}
+
+def mapStringToEngineType(str: String): EngineType = str match {
+  case _ if JDBC.toString.equalsIgnoreCase(str) => JDBC
+}
+
+// In RunType.scala
+object RunType extends Enumeration {
+	val JDBC = Value("jdbc")
+}
+```
+
+### 2.3 Version number setting in the JDBC engine label
+
+```java
+// Add the version configuration of JDBC in LabelCommonConfig
+public class LabelCommonConfig {
+  public final static CommonVars<String> JDBC_ENGINE_VERSION = CommonVars.apply("wds.linkis.jdbc.engine.version", "4");
+}
+
+// Supplement the matching logic for jdbc in the init method of EngineTypeLabelCreator
+// If this step is not done, the version number will be missing from the engine label information when code is submitted to the engine
+public class EngineTypeLabelCreator {
+  private static void init() {
+    defaultVersion.put(EngineType.JDBC().toString(), LabelCommonConfig.JDBC_ENGINE_VERSION.getValue());
+  }
+}
+```
+
+### 2.4 Script file types allowed to be opened by the script editor
+
+Add the script type of the jdbc engine to the fileType array in FileSource.scala. If it is not added, the Scripts file list will not allow opening the JDBC engine's script type.
+
+```scala
+// In FileSource.scala
+object FileSource {
+    private val fileType = Array("......", "jdbc")
+}
+```
+
+### 2.5 Configure JDBC script variable storage and parsing
+
+If this is not done, variables in JDBC scripts cannot be stored and parsed properly, and code that uses ${variable} directly in a script will fail to execute!
+
+![variable resolution](/Images-zh/EngineConnNew/variable_resolution.png)
+
+```scala
+// QLScriptCompaction.scala
+class QLScriptCompaction private extends CommonScriptCompaction{
+    override def belongTo(suffix: String): Boolean = {
+    suffix match {
+      ...
+      case "jdbc" => true
+      case _ => false
+    }
+  }
+}
+
+// QLScriptParser.scala
+class QLScriptParser private extends CommonScriptParser {
+  override def belongTo(suffix: String): Boolean = {
+    suffix match {
+      case "jdbc" => true
+      case _ => false
+    }
+  }
+}
+
+// In CustomVariableUtils.scala
+object CustomVariableUtils extends Logging {
+   def replaceCustomVar(jobRequest: JobRequest, runType: String): (Boolean, String) = {
+    runType match {
+      ......
+      case "hql" | "sql" | "fql" | "jdbc" | "hive"| "psql" => codeType = SQL_TYPE
+      case _ => return (false, code)
+    }
+   }
+}
+```
+
+### 2.6 Add a JDBC engine text prompt or icon to the engine manager of the Linkis admin console
+
+web/src/dss/module/resourceSimple/engine.vue
+
+```js
+methods: {
+  calssifyName(params) {
+     switch (params) {
+        case 'jdbc':
+          return 'JDBC';
+        ......
+     }
+  }
+  // icon filtering
+  supportIcon(item) {
+     const supportTypes = [
+       	 ......
+        { rule: 'jdbc', logo: 'fi-jdbc' },
+      ];
+  }
+}
+```
+
+
+
+The final effect presented to the user:
+
+![JDBC engine type](/Images-zh/EngineConnNew/jdbc_engine_view.png)
+
+### 2.7 Compile, package, install and deploy the JDBC engine
+
+An example command for compiling the JDBC engine module is as follows:
+
+```shell
+cd /linkis-project/linkis-engineconn-plugins/engineconn-plugins/jdbc
+
+mvn clean install -DskipTests
+```
+
+When the complete project is compiled, the new engine is not added to the final tar.gz package by default. If needed, modify the following file:
+
+assembly-combined-package/assembly-combined/src/main/assembly/assembly.xml
+
+```xml
+<!--jdbc-->
+<fileSets>
+  ......
+  <fileSet>
+      <directory>
+          ../../linkis-engineconn-plugins/engineconn-plugins/jdbc/target/out/
+      </directory>
+      <outputDirectory>lib/linkis-engineconn-plugins/</outputDirectory>
+      <includes>
+          <include>**/*</include>
+      </includes>
+  </fileSet>
+</fileSets>
+```
+
+Then run the compile command in the project root directory:
+
+```shell
+mvn clean install -DskipTests
+```
+
+After successful compilation, the full package is generated at assembly-combined-package/target/apache-linkis-1.x.x-incubating-bin.tar.gz, and out.zip can be found under linkis-engineconn-plugins/engineconn-plugins/jdbc/target/.
+
+Upload the out.zip file to the Linkis deployment node and decompress it under the Linkis installation directory /lib/linkis-engineconn-plugins/:
+
+![engine installation](/Images-zh/EngineConnNew/engine_set_up.png)
+
+Don't forget to delete out.zip after decompression. At this point, engine compilation and installation are complete.
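+
+For example, assuming the Linkis installation directory is /appcom/Install/LinkisInstall and that you can ssh to the deployment node (both are assumptions; adjust paths to your environment), the upload and decompression steps might look like:
+
+```shell
+# Paths below are examples only
+scp out.zip user@linkis-node:/appcom/Install/LinkisInstall/lib/linkis-engineconn-plugins/
+ssh user@linkis-node
+cd /appcom/Install/LinkisInstall/lib/linkis-engineconn-plugins/
+unzip out.zip
+rm out.zip  # remember to remove the zip afterwards
+```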
+
+### 2.8 JDBC engine database configuration
+
+Select "Add Engine" in the management console
+
+![add engine](/Images-zh/EngineConnNew/add_engine_conf.png)
+
+
+
+If you want the management console to support engine parameter configuration, you can modify the database following the JDBC engine SQL example below.
+
+After the engine is installed, the engine must also be configured in the database before the newly added engine code can run. The JDBC engine is used as the example here; adapt the SQL as needed for the new engine you implemented.
+
+The SQL reference is as follows:
+
+```sql
+SET @JDBC_LABEL="jdbc-4";
+
+SET @JDBC_ALL=CONCAT('*-*,',@JDBC_LABEL);
+SET @JDBC_IDE=CONCAT('*-IDE,',@JDBC_LABEL);
+SET @JDBC_NODE=CONCAT('*-nodeexecution,',@JDBC_LABEL);
+
+-- JDBC
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('wds.linkis.rm.instance', '范围:1-20,单位:个', 'jdbc引擎最大并发数', '2', 'NumInterval', '[1,20]', '0', '0', '1', '队列资源', 'jdbc');
+
+insert into `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('wds.linkis.jdbc.driver', '取值范围:对应JDBC驱动名称', 'jdbc驱动名称','', 'None', '', '0', '0', '1', '数据源配置', 'jdbc');
+insert into `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('wds.linkis.jdbc.connect.url', '例如:jdbc:hive2://127.0.0.1:10000', 'jdbc连接地址', 'jdbc:hive2://127.0.0.1:10000', 'None', '', '0', '0', '1', '数据源配置', 'jdbc');
+insert into `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('wds.linkis.jdbc.version', '取值范围:jdbc3,jdbc4', 'jdbc版本','jdbc4', 'OFT', '[\"jdbc3\",\"jdbc4\"]', '0', '0', '1', '数据源配置', 'jdbc');
+insert into `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('wds.linkis.jdbc.connect.max', '范围:1-20,单位:个', 'jdbc引擎最大连接数', '10', 'NumInterval', '[1,20]', '0', '0', '1', '数据源配置', 'jdbc');
+
+insert into `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('wds.linkis.jdbc.auth.type', '取值范围:SIMPLE,USERNAME,KERBEROS', 'jdbc认证方式', 'USERNAME', 'OFT', '[\"SIMPLE\",\"USERNAME\",\"KERBEROS\"]', '0', '0', '1', '用户配置', 'jdbc');
+insert into `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('wds.linkis.jdbc.username', 'username', '数据库连接用户名', '', 'None', '', '0', '0', '1', '用户配置', 'jdbc');
+insert into `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('wds.linkis.jdbc.password', 'password', '数据库连接密码', '', 'None', '', '0', '0', '1', '用户配置', 'jdbc');
+insert into `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('wds.linkis.jdbc.principal', '例如:hadoop/host@KDC.COM', '用户principal', '', 'None', '', '0', '0', '1', '用户配置', 'jdbc');
+insert into `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('wds.linkis.jdbc.keytab.location', '例如:/data/keytab/hadoop.keytab', '用户keytab文件路径', '', 'None', '', '0', '0', '1', '用户配置', 'jdbc');
+insert into `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('wds.linkis.jdbc.proxy.user.property', '例如:hive.server2.proxy.user', '用户代理配置', '', 'None', '', '0', '0', '1', '用户配置', 'jdbc');
+
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('wds.linkis.engineconn.java.driver.cores', '取值范围:1-8,单位:个', 'jdbc引擎初始化核心个数', '1', 'NumInterval', '[1,8]', '0', '0', '1', 'jdbc引擎设置', 'jdbc');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('wds.linkis.engineconn.java.driver.memory', '取值范围:1-8,单位:G', 'jdbc引擎初始化内存大小', '1g', 'Regex', '^([1-8])(G|g)$', '0', '0', '1', 'jdbc引擎设置', 'jdbc');
+
+insert into `linkis_cg_manager_label` (`label_key`, `label_value`, `label_feature`, `label_value_size`, `update_time`, `create_time`) VALUES ('combined_userCreator_engineType',@JDBC_ALL, 'OPTIONAL', 2, now(), now());
+
+insert into `linkis_ps_configuration_key_engine_relation` (`config_key_id`, `engine_type_label_id`)
+    (select config.id as `config_key_id`, label.id AS `engine_type_label_id` FROM linkis_ps_configuration_config_key config INNER JOIN linkis_cg_manager_label label ON config.engine_conn_type = 'jdbc' and label_value = @JDBC_ALL);
+
+insert into `linkis_cg_manager_label` (`label_key`, `label_value`, `label_feature`, `label_value_size`, `update_time`, `create_time`) VALUES ('combined_userCreator_engineType',@JDBC_IDE, 'OPTIONAL', 2, now(), now());
+insert into `linkis_cg_manager_label` (`label_key`, `label_value`, `label_feature`, `label_value_size`, `update_time`, `create_time`) VALUES ('combined_userCreator_engineType',@JDBC_NODE, 'OPTIONAL', 2, now(), now());
+
+
+
+select @label_id := id from linkis_cg_manager_label where `label_value` = @JDBC_IDE;
+insert into linkis_ps_configuration_category (`label_id`, `level`) VALUES (@label_id, 2);
+
+select @label_id := id from linkis_cg_manager_label where `label_value` = @JDBC_NODE;
+insert into linkis_ps_configuration_category (`label_id`, `level`) VALUES (@label_id, 2);
+
+
+-- jdbc default configuration
+insert into `linkis_ps_configuration_config_value` (`config_key_id`, `config_value`, `config_label_id`)
+    (select `relation`.`config_key_id` AS `config_key_id`, '' AS `config_value`, `relation`.`engine_type_label_id` AS `config_label_id` FROM linkis_ps_configuration_key_engine_relation relation INNER JOIN linkis_cg_manager_label label ON relation.engine_type_label_id = label.id AND label.label_value = @JDBC_ALL);
+```
+
+If you want to reset the engine's database configuration data, the reference SQL is as follows; modify and use it as needed:
+
+```sql
+-- Clear the initialization data of the jdbc engine
+SET @JDBC_LABEL="jdbc-4";
+
+SET @JDBC_ALL=CONCAT('*-*,',@JDBC_LABEL);
+SET @JDBC_IDE=CONCAT('*-IDE,',@JDBC_LABEL);
+SET @JDBC_NODE=CONCAT('*-nodeexecution,',@JDBC_LABEL);
+
+delete from `linkis_ps_configuration_config_value` where `config_label_id` in
+                                                           (select `relation`.`engine_type_label_id` AS `config_label_id` FROM `linkis_ps_configuration_key_engine_relation` relation INNER JOIN `linkis_cg_manager_label` label ON relation.engine_type_label_id = label.id AND label.label_value = @JDBC_ALL);
+
+
+delete from `linkis_ps_configuration_key_engine_relation`
+where `engine_type_label_id` in
+      (select label.id FROM `linkis_ps_configuration_config_key` config
+          INNER JOIN `linkis_cg_manager_label` label
+              ON config.engine_conn_type = 'jdbc' and label_value = @JDBC_ALL);
+
+
+delete from `linkis_ps_configuration_category`
+where `label_id` in (select id from `linkis_cg_manager_label` where `label_value` in(@JDBC_IDE, @JDBC_NODE));
+
+
+delete from `linkis_ps_configuration_config_key` where `engine_conn_type` = 'jdbc';
+
+delete from `linkis_cg_manager_label` where `label_value` in (@JDBC_ALL, @JDBC_IDE, @JDBC_NODE);
+
+```
+
+The final effect:
+
+![JDBC engine](/Images-zh/EngineConnNew/jdbc_engine_conf_detail.png)
+
+Only after this configuration can linkis-cli and Scripts correctly match the engine's label information and the data source's connection information when submitting engine scripts, and only then can the newly added engine be pulled up.
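+
+Once the configuration is in place, a quick smoke test can be run with linkis-cli. The flags below follow common linkis-cli usage; verify them against your Linkis version and replace the user names with your own:
+
+```shell
+sh ./bin/linkis-cli -engineType jdbc-4 -codeType jdbc \
+    -code "select 1" \
+    -submitUser hadoop -proxyUser hadoop
+```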
+
+### 2.9 Add the JDBC script type and icon information in DSS Scripts
+
+If you use the Scripts function of DSS, you also need to make some small changes to the front-end files of the web in the dss project. The purpose is to support creating, opening and executing JDBC engine script types in Scripts, and to implement the engine's corresponding icons, fonts, etc.
+
+#### 2.9.1 scriptis.js
+
+web/src/common/config/scriptis.js
+
+```js
+{
+  rule: /\.jdbc$/i,
+  lang: 'hql',
+  executable: true,
+  application: 'jdbc',
+  runType: 'jdbc',
+  ext: '.jdbc',
+  scriptType: 'jdbc',
+  abbr: 'jdbc',
+  logo: 'fi-jdbc',
+  color: '#444444',
+  isCanBeNew: true,
+  label: 'JDBC',
+  isCanBeOpen: true
+},
+```
+
+#### 2.9.2 Script copy support
+
+web/src/apps/scriptis/module/workSidebar/workSidebar.vue
+
+```js
+copyName() {
+  let typeArr = ['......', 'jdbc']
+}
+```
+
+#### 2.9.3 Logo and font colors
+
+web/src/apps/scriptis/module/workbench/title.vue
+
+```js
+  data() {
+    return {
+      isHover: false,
+      iconColor: {
+        'fi-jdbc': '#444444',
+      },
+    };
+  },
+```
+
+web/src/apps/scriptis/module/workbench/modal.js
+
+```js
+let logoList = [
+  { rule: /\.jdbc$/i, logo: 'fi-jdbc' },
+];
+```
+
+web/src/components/tree/support.js
+
+```js
+export const supportTypes = [
+  // Probably not used here
+  { rule: /\.jdbc$/i, logo: 'fi-jdbc' },
+]
+```
+
+Engine icon display
+
+web/src/dss/module/resourceSimple/engine.vue
+
+```js
+methods: {
+  calssifyName(params) {
+     switch (params) {
+        case 'jdbc':
+          return 'JDBC';
+        ......
+     }
+  }
+  // icon filtering
+  supportIcon(item) {
+     const supportTypes = [
+				......
+        { rule: 'jdbc', logo: 'fi-jdbc' },
+      ];
+  }
+}
+```
+
+web/src/dss/assets/projectIconFont/iconfont.css
+
+```css
+.fi-jdbc:before {
+  content: "\e75e";
+}
+```
+
+This should control the following:
+
+![engine icon](/Images-zh/EngineConnNew/jdbc_engine_logo.png)
+
+
+
+Find an svg file for the engine icon
+
+web/src/components/svgIcon/svg/fi-jdbc.svg
+
+If the new engine is to be contributed to the community later, the svg icons, fonts, etc. of the new engine need to have their open source license confirmed, or copyright permission obtained.
+
+### 2.10 Workflow adaptation of DSS
+
+The final result:
+
+![workflow adaptation](/Images-zh/EngineConnNew/jdbc_job_flow.png)
+
+Save the definition data of the newly added JDBC engine in the dss_workflow_node table. Reference SQL:
+
+```sql
+# Engine task node basic information definition
+insert into `dss_workflow_node` (`id`, `name`, `appconn_name`, `node_type`, `jump_url`, `support_jump`, `submit_to_scheduler`, `enable_copy`, `should_creation_before_node`, `icon`) values('18','jdbc','-1','linkis.jdbc.jdbc',NULL,'1','1','1','0','svg文件');
+
+# The svg file corresponds to the new engine task node icon
+
+# Classification of engine task nodes
+insert  into `dss_workflow_node_to_group`(`node_id`,`group_id`) values (18, 2);
+
+# Basic information (parameter attribute) binding of the engine task node
+INSERT  INTO `dss_workflow_node_to_ui`(`workflow_node_id`,`ui_id`) VALUES (18,45);
+
+# The basic information of the engine task node is defined in the dss_workflow_node_ui table and is displayed as a form on the right side of the figure above. You can add other basic information for the new engine, and it will be rendered automatically by the form on the right.
 ```
-
-As a subclass of ComputationExecutor, HiveEngineConnExecutor implements the executeLine method. This method receives a line of execution statements and, after calling the Hive interface to execute it, returns different ExecuteResponses to indicate success or failure. In this method, the result set, log and progress transmission are realized through the interfaces provided by engineExecutorContext.

-The Hive engine only needs an Executor that executes HQL and is a single-executor engine. Therefore, when defining HiveEngineConnFactory, it inherits SingleExecutorEngineConnFactory and implements the following two interfaces:
-a) createEngineConn: creates an object containing UserGroupInformation, SessionState and HiveConf as the encapsulation of the connection information with the underlying engine, sets it into the EngineConn object and returns it.
-b) createExecutor: creates a HiveEngineConnExecutor executor object based on the current engine connection information.
+web/src/apps/workflows/service/nodeType.js
+
+```js
+import jdbc from '../module/process/images/newIcon/jdbc.svg';
+
+const NODETYPE = {
+  ......
+  JDBC: 'linkis.jdbc.jdbc',
+}
+
+const ext = {
+	......
+  [NODETYPE.JDBC]: 'jdbc',
+}
+
+const NODEICON = {
+  [NODETYPE.JDBC]: {
+    icon: jdbc,
+    class: {'jdbc': true}
+  },
+}
+```
 
-Hive engine is an ordinary Java process, so when implementing EngineConnLaunchBuilder, it directly inherits JavaProcessEngineConnLaunchBuilder. Memory size, Java parameters and the classPath can be adjusted through configuration; refer to the EnvConfiguration class for details.
+Add the icon of the new engine in the web/src/apps/workflows/module/process/images/newIcon/ directory
 
-Hive engine uses the LoadInstanceResource resource, so there is no need to implement EngineResourceFactory; the default GenericEngineResourceFactory is used directly, and the amount of resources is adjusted through configuration; refer to the EngineConnPluginConf class for details.
+web/src/apps/workflows/module/process/images/newIcon/jdbc
 
-Implement HiveEngineConnPlugin and provide methods for creating the above implementation classes.
+Also, when contributing to the community, please consider the license or copyright of the svg file.
 
+## 3. Chapter summary
 
+The above records the implementation process of a new engine, as well as some additional engine configuration that needs to be done. The process of extending a new engine is currently still rather cumbersome; we hope to optimize new engine extension and installation in subsequent versions.
diff --git a/versioned_docs/version-1.1.1/development/new_engine_conn.md b/versioned_docs/version-1.1.1/development/new_engine_conn.md
index 1535d3836..fdaa1faa6 100644
--- a/versioned_docs/version-1.1.1/development/new_engine_conn.md
+++ b/versioned_docs/version-1.1.1/development/new_engine_conn.md
@@ -3,80 +3,501 @@ title: How To Quickly Implement A New Engine
 sidebar_position: 3
 ---
 
-## How To Quickly Implement A New Engine
+## 1. Implementing the new engine code in Linkis
 
-To implement a new engine is to implement a new "EngineConnPlugin" (ECP), i.e. an engine plugin. The specific steps are as follows:
+Implementing a new engine is actually implementing a new EngineConnPlugin (ECP) engine plugin. Specific steps are as follows:
 
-1.Create a new maven module and introduce the maven dependency of "ECP":
-```
+### 1.1 Create a new maven module and introduce the maven dependency of ECP
+
+![maven dependency](/Images/EngineConnNew/engine_jdbc_dependency.png)
+
+```xml
 <dependency>
-<groupId>org.apache.linkis</groupId>
-<artifactId>linkis-engineconn-plugin-core</artifactId>
-<version>${linkis.version}</version>
+	<groupId>org.apache.linkis</groupId>
+	<artifactId>linkis-engineconn-plugin-core</artifactId>
+	<version>${linkis.version}</version>
 </dependency>
+<!-- and some other required maven configurations -->
 ```
-2.The main interfaces of implementing "ECP":
 
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;a)EngineConnPlugin, when starting "EngineConn", first find the corresponding "EngineConnPlugin" class, and use this as the entry point to obtain the implementation of other core interfaces, which is the main interface that must be implemented.
-    
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;b)EngineConnFactory, which implements the logic of how to start an engine connector and how to start an engine executor, is an interface that must be implemented.
+### 1.2 Implement the main interfaces of ECP
 
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;b.a Implement the "createEngineConn" method: return an "EngineConn" object, where "getEngine" returns an object that encapsulates the connection information with the underlying engine, and also contains Engine type information.
-    
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;b.b For engines that only support a single computing scenario, inherit "SingleExecutorEngineConnFactory" class and implement "createExecutor" method which returns the corresponding Executor.
-    
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;b.c For engines that support multiple computing scenarios, you need to inherit "MultiExecutorEngineConnFactory" and implement an ExecutorFactory for each computing type. "EngineConnPlugin" will obtain all ExecutorFactory through reflection and return the corresponding Executor according to the actual situation.
-    
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;c)EngineConnResourceFactory, it is used to limit the resources required to start an engine. Before the engine starts, it will use this as the basis to apply for resources from the "Linkis Manager". Not required, "GenericEngineResourceFactory" can be used by default.
+- **EngineConnPlugin:** When starting EngineConn, first find the corresponding EngineConnPlugin class, and use this as the entry point to obtain the implementation of other core interfaces, which is the main interface that must be implemented.
 
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;d)EngineLaunchBuilder, it is used to encapsulate the necessary information that "EngineConnManager" can parse into the startup command. Not necessary, you can directly inherit "JavaProcessEngineConnLaunchBuilder".
+- **EngineConnFactory:** Implements the logic of how to start an engine connector and how to start an engine executor; it is an interface that must be implemented.
+    - Implement the createEngineConn method: return an EngineConn object, where getEngine returns an object that encapsulates the connection information with the underlying engine, and also contains the Engine type information.
+    - For engines that only support a single computing scenario, inherit SingleExecutorEngineConnFactory, implement createExecutor, and return the corresponding Executor.
+    - For engines that support multiple computing scenarios, you need to inherit MultiExecutorEngineConnFactory and implement an ExecutorFactory for each computation type. EngineConnPlugin will obtain all ExecutorFactory instances through reflection and return the corresponding Executor according to the actual situation.
+- **EngineConnResourceFactory:** It is used to limit the resources required to start an engine. Before the engine starts, it will apply for resources from Linkis Manager based on this. Not required; GenericEngineResourceFactory can be used by default.
+- **EngineLaunchBuilder:** It is used to encapsulate the necessary information that EngineConnManager can parse into startup commands. Not required; you can directly inherit JavaProcessEngineConnLaunchBuilder. A rough sketch of how these interfaces fit together is shown below.
 
-3.Implement Executor. As a real computing scene executor, Executor is the actual computing logic execution unit. It also abstracts various specific capabilities of the engine and provides various services such as locking, accessing status and obtaining logs. According to actual needs, Linkis provides the following derived Executor base classes by default. The class names and main functions are as follows:
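+Putting these pieces together, a rough sketch of a single-executor plugin is shown below. This is an illustration only: MyEngineConnPlugin and MyEngineConnFactory are hypothetical names, the method bodies are left unimplemented, and the exact interface signatures should be verified against linkis-engineconn-plugin-core.
+
+```scala
+// Illustrative sketch only; signatures are simplified and must be checked
+// against the real linkis-engineconn-plugin-core interfaces.
+class MyEngineConnPlugin extends EngineConnPlugin {
+  // Entry point: hands out the implementations of the other core interfaces
+  override def getEngineConnFactory = new MyEngineConnFactory
+  // The defaults are usually fine for an ordinary Java-process engine
+  override def getEngineResourceFactory = new GenericEngineResourceFactory
+  override def getEngineConnLaunchBuilder = new JavaProcessEngineConnLaunchBuilder
+}
+
+// A single computing scenario, so SingleExecutorEngineConnFactory is enough
+class MyEngineConnFactory extends SingleExecutorEngineConnFactory {
+  // Wrap the connection information of the underlying engine into an EngineConn
+  override def createEngineConn(engineCreationContext: EngineCreationContext): EngineConn = ???
+  // Create the Executor that will actually run user code
+  override def createExecutor(engineCreationContext: EngineCreationContext, engineConn: EngineConn): Executor = ???
+}
+```
+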
+### 1.3 Implement the engine's Executor logic
 
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;a) SensibleExecutor: 
-       
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; i. Executor has multiple states, allowing Executor to switch states.
-         
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;ii. After the Executor switches the state, operations such as notifications are allowed. 
-         
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;b) YarnExecutor: refers to the Yarn type engine, which can obtain the "applicationId", "applicationURL" and queue.
-       
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;c) ResourceExecutor: refers to the engine's ability to change resources dynamically, providing the "requestExpectedResource" method to apply to RM for new resources each time it wants to change resources, and the "resourceUpdate" method to report the actual resource usage to RM each time it changes.
-       
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;d) AccessibleExecutor: is a very important Executor base class. If the user's Executor inherits the base class, it means that the Engine can be accessed. Here we need to distinguish between "SensibleExecutor"'s "state" method and "AccessibleExecutor"'s "getEngineStatus" method. "state" method is used to get the engine status, and "getEngineStatus" is used to get the basic indicator metric data such as engine status, load and concurrency.
-       
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;e) At the same time, if AccessibleExecutor is inherited, it will trigger the Engine process to instantiate multiple "EngineReceiver" methods. "EngineReceiver" is used to process RPC requests from Entrance, EM and "LinkisMaster", making the engine an accessible engine. If users have special RPC requirements, they can communicate with "AccessibleExecutor" by implementing the "RPCService" interface.
+The Executor, as the real executor of the computing scenario, is the actual execution unit of computing logic and an abstraction of the engine's various concrete capabilities, providing services such as locking, state access and log retrieval. According to actual needs, Linkis provides the following derived Executor base classes by default. The class names and main functions are as follows:
 
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;f) ExecutableExecutor: it is a resident Executor base class. The resident Executor includes: Streaming applications in the production center, steps specified to run in independent mode after submission to "Schedulis", business applications of business users, etc.
+- **SensibleExecutor:**
+    - Executor has multiple states, allowing Executor to switch states
+    - After the Executor switches states, operations such as notifications are allowed
+- **YarnExecutor:** Refers to the Yarn type engine, which can obtain applicationId, applicationURL and queue.
+- **ResourceExecutor:** means that the engine has the ability to change resources dynamically. It provides the requestExpectedResource method, used to apply to the RM for new resources each time the engine wants to change its resources, and the resourceUpdate method, used to report the actual resource usage to the RM each time it changes.
+- **AccessibleExecutor:** is a very important Executor base class. If the user's Executor inherits this base class, it means that the Engine can be accessed. Here, it is necessary to distinguish between the state() of SensibleExecutor and the getEngineStatus() method of AccessibleExecutor: state() is used to obtain the engine status, and getEngineStatus() will obtain the Metric data of basic indicators such as the status, load, and concurrency of the engine.
+- At the same time, if AccessibleExecutor is inherited, the Engine process will be triggered to instantiate multiple EngineReceiver methods. EngineReceiver is used to process RPC requests from Entrance, EM and LinkisMaster, making the engine an accessible engine. If users have special RPC requirements, they can communicate with AccessibleExecutor by implementing the RPCService interface.
+- **ExecutableExecutor:** is a resident Executor base class. Resident Executors include: Streaming applications in the production center, scripts specified to run in independent mode after being submitted to Schedulis, business applications of business users, etc.
+- **StreamingExecutor:** Streaming is a streaming application, inherited from ExecutableExecutor, and needs to have the ability to diagnose, do checkpoint, collect job information, and monitor alarms.
+- **ComputationExecutor:** is a commonly used interactive engine Executor, which handles interactive execution tasks and has interactive capabilities such as status query and task kill.
+- **ConcurrentComputationExecutor:** a concurrent engine Executor shared by users, commonly used in JDBC-type engines. When scripts are executed, the engine instance is started by the administrator account and shared by all users. A sketch of a minimal interactive Executor is shown after this list.
 
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;g) StreamingExecutor: inherited from "ExecutableExecutor", it needs the ability to diagnose, do checkpoint, collect job information and monitor alarms.
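+For an interactive engine, most of the work usually lands in executeLine. The following is a minimal sketch under the ComputationExecutor contract described above; runOnUnderlyingEngine is an assumed helper, and the constructor parameters and other required overrides of ComputationExecutor are omitted for brevity.
+
+```scala
+import org.apache.linkis.engineconn.computation.executor.execute.{ComputationExecutor, EngineExecutionContext}
+import org.apache.linkis.scheduler.executer.{ErrorExecuteResponse, ExecuteResponse, SuccessExecuteResponse}
+
+class MyEngineConnExecutor(val id: Int) extends ComputationExecutor {
+  // One statement in, one ExecuteResponse out (contract as described above)
+  override def executeLine(engineExecutorContext: EngineExecutionContext, code: String): ExecuteResponse =
+    try {
+      val rows = runOnUnderlyingEngine(code)
+      // Result sets, logs and progress are pushed back through engineExecutorContext
+      engineExecutorContext.appendStdout(s"fetched ${rows.size} rows")
+      SuccessExecuteResponse()
+    } catch {
+      case t: Throwable => ErrorExecuteResponse("statement execution failed", t)
+    }
+
+  // Assumed helper: submit one statement to the underlying engine
+  private def runOnUnderlyingEngine(code: String): Seq[String] = ???
+}
+```
+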
+## 2. Take the JDBC engine as an example to explain the implementation steps of the new engine in detail
 
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;h) ComputationExecutor: it is a commonly used interactive engine Executor which handles interactive execution tasks and has interactive capabilities such as status query and task killing.
+This chapter takes the JDBC engine as an example to explain the implementation process of a new engine in detail, including engine code compilation, installation and database configuration, engine label adaptation in the management console, the new engine script type extension in Scripts, and the task node extension for the new engine in workflows.
 
-             
-## Actual Case         
-The following takes the Hive engine as a case to illustrate the implementation of each interface. The figure below shows all the core classes that need to be implemented for a Hive engine.
+### 2.1 Setting the default startup user for concurrent engines
 
-Hive engine is an interactive engine, so when implementing Executor, it inherits "ComputationExecutor" and introduces the following maven dependencies: 
+The core class `JDBCEngineConnExecutor` in the JDBC engine inherits the abstract class `ConcurrentComputationExecutor`, while the core class `XXXEngineConnExecutor` of a computing engine inherits the abstract class `ComputationExecutor`. This leads to the biggest difference between the two: a JDBC engine instance is started by the administrator user and shared by all users to improve the utilization of machine resources, while for computing-engine-type scripts, an engine instance is started for each user at submission time, and engine instances are isolated between users. This is not elaborated here, because the additional changes described below are the same whether the engine is a concurrent engine or a computing engine.
+
+Correspondingly, if your new engine is a concurrent engine, you need to pay attention to the class AMConfiguration.scala; if your new engine is a computing engine, you can ignore it.
+
+```scala
+object AMConfiguration {
+  // If your engine is a multi-user concurrent engine, then this configuration item needs to be paid attention to
+  val MULTI_USER_ENGINE_TYPES = CommonVars("wds.linkis.multi.user.engine.types", "jdbc,ck,es,io_file,appconn")
+  
+  private def getDefaultMultiEngineUser(): String = {
+    // This sets the startup user used when a concurrent engine is pulled up; by default jvmUser is the startup user of the engine service's Java process
+    val jvmUser = Utils.getJvmUser
+    s"""{jdbc:"$jvmUser", presto: "$jvmUser", es: "$jvmUser", ck:"$jvmUser", appconn:"$jvmUser", io_file:"root"}"""
+  }
+}
+```
+
+### 2.2 New engine type extension
+
+In the class `JDBCEngineConnFactory` that implements the `ComputationSingleExecutorEngineConnFactory` interface, the following two methods need to be implemented:
+
+```scala
+override protected def getEngineConnType: EngineType = EngineType.JDBC
+
+override protected def getRunType: RunType = RunType.JDBC
+```
+
+Therefore, it is necessary to add variables corresponding to JDBC in EngineType and RunType.
+
+```scala
+// In EngineType.scala, following the variable definitions of the existing engines, add JDBC-related variables or code
+object EngineType extends Enumeration with Logging {
+  val JDBC = Value("jdbc")
+}
+
+def mapStringToEngineType(str: String): EngineType = str match {
+  case _ if JDBC.toString.equalsIgnoreCase(str) => JDBC
+}
+
+// In RunType.scala
+object RunType extends Enumeration {
+	val JDBC = Value("jdbc")
+}
+```
+
+### 2.3 Version number setting in the JDBC engine label
+
+```java
+// Add the version configuration of JDBC in LabelCommonConfig
+public class LabelCommonConfig {
+  public final static CommonVars<String> JDBC_ENGINE_VERSION = CommonVars.apply("wds.linkis.jdbc.engine.version", "4");
+}
+
+// Supplement the matching logic for jdbc in the init method of EngineTypeLabelCreator
+// If this step is not done, the version number will be missing from the engine label information when code is submitted to the engine
+public class EngineTypeLabelCreator {
+  private static void init() {
+    defaultVersion.put(EngineType.JDBC().toString(), LabelCommonConfig.JDBC_ENGINE_VERSION.getValue());
+  }
+}
+```
+
+### 2.4 Types of script files that are allowed to be opened by the script editor
+
+Add the script type of the jdbc engine to the fileType array in FileSource.scala. If it is not added, the script type of the JDBC engine is not allowed to be opened in the Scripts file list.
+
+```scala
+// FileSource.scala
+object FileSource {
+     private val fileType = Array("...", "jdbc")
+}
+```
+
+### 2.5 Configure JDBC script variable storage and parsing
+
+If this operation is not done, the variables in the JDBC script cannot be stored and parsed normally, and the code execution will fail when ${variable} is directly used in the script!
+
+![variable resolution](/Images/EngineConnNew/variable_resolution.png)
+
+```scala
+// QLScriptCompaction.scala
+class QLScriptCompaction private extends CommonScriptCompaction{
+    override def belongTo(suffix: String): Boolean = {
+    suffix match {
+      ...
+      case "jdbc" => true
+      case _ => false
+    }
+  }
+}
+
+// QLScriptParser.scala
+class QLScriptParser private extends CommonScriptParser {
+  override def belongTo(suffix: String): Boolean = {
+    suffix match {
+      case "jdbc" => true
+      case _ => false
+    }
+  }
+}
+
+// In CustomVariableUtils.scala
+object CustomVariableUtils extends Logging {
+   def replaceCustomVar(jobRequest: JobRequest, runType: String): (Boolean, String) = {
+    runType match {
+      ......
+      case "hql" | "sql" | "fql" | "jdbc" | "hive"| "psql" => codeType = SQL_TYPE
+      case _ => return (false, code)
+    }
+   }
+}
+```
+
+### 2.6 Add a JDBC engine text prompt or icon to the engine manager of the Linkis admin console
+
+web/src/dss/module/resourceSimple/engine.vue
+
+```js
+methods: {
+  calssifyName(params) {
+     switch (params) {
+        case 'jdbc':
+          return 'JDBC';
+        ......
+     }
+  }
+  // icon filtering
+  supportIcon(item) {
+     const supportTypes = [
+       	 ......
+        { rule: 'jdbc', logo: 'fi-jdbc' },
+      ];
+  }
+}
+```
+
+The final effect presented to the user:
+
+![JDBC engine type](/Images/EngineConnNew/jdbc_engine_view.png)
+
+### 2.7 Compile, package, install and deploy the JDBC engine
+
+An example command for JDBC engine module compilation is as follows:
+
+```shell
+cd /linkis-project/linkis-engineconn-plugins/engineconn-plugins/jdbc
+
+mvn clean install -DskipTests
+```
+
+When compiling a complete project, the new engine will not be added to the final tar.gz archive by default. If necessary, please modify the following files:
+
+assembly-combined-package/assembly-combined/src/main/assembly/assembly.xml
+
+```xml
+<!--jdbc-->
+<fileSets>
+  ......
+  <fileSet>
+      <directory>
+          ../../linkis-engineconn-plugins/engineconn-plugins/jdbc/target/out/
+      </directory>
+      <outputDirectory>lib/linkis-engineconn-plugins/</outputDirectory>
+      <includes>
+          <include>**/*</include>
+      </includes>
+  </fileSet>
+</fileSets>
+```
+
+Then run the compile command in the project root directory:
+
+```shell
+mvn clean install -DskipTests
+```
+
+After successful compilation, the full package is generated at assembly-combined-package/target/apache-linkis-1.x.x-incubating-bin.tar.gz, and out.zip can be found under linkis-engineconn-plugins/engineconn-plugins/jdbc/target/.
+
+Upload the out.zip file to the Linkis deployment node and decompress it under the Linkis installation directory /lib/linkis-engineconn-plugins/:
+
+![engine installation](/Images/EngineConnNew/engine_set_up.png)
+
+Don't forget to delete out.zip after decompression. At this point, engine compilation and installation are complete.
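+
+For example, assuming the Linkis installation directory is /appcom/Install/LinkisInstall and that you can ssh to the deployment node (both are assumptions; adjust paths to your environment), the upload and decompression steps might look like:
+
+```shell
+# Paths below are examples only
+scp out.zip user@linkis-node:/appcom/Install/LinkisInstall/lib/linkis-engineconn-plugins/
+ssh user@linkis-node
+cd /appcom/Install/LinkisInstall/lib/linkis-engineconn-plugins/
+unzip out.zip
+rm out.zip  # remember to remove the zip afterwards
+```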
+
+### 2.8 JDBC engine database configuration
+
+Select Add Engine in the console
+
+![add engine](/Images/EngineConnNew/add_engine_conf.png)
+
+
+If you want the management console to support engine parameter configuration, you can modify the database following the JDBC engine SQL example below.
+
+After the engine is installed, the engine must also be configured in the database before the newly added engine code can run. The JDBC engine is used as the example here; adapt the SQL as needed for the new engine you implemented.
+
+The SQL reference is as follows:
+
+```sql
+SET @JDBC_LABEL="jdbc-4";
+
+SET @JDBC_ALL=CONCAT('*-*,',@JDBC_LABEL);
+SET @JDBC_IDE=CONCAT('*-IDE,',@JDBC_LABEL);
+SET @JDBC_NODE=CONCAT('*-nodeexecution,',@JDBC_LABEL);
+
+-- JDBC
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('wds.linkis.rm.instance', '范围:1-20,单位:个', 'jdbc引擎最大并发数', '2', 'NumInterval', '[1,20]', '0', '0', '1', '队列资源', 'jdbc');
+
+insert into `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('wds.linkis.jdbc.driver', '取值范围:对应JDBC驱动名称', 'jdbc驱动名称','', 'None', '', '0', '0', '1', '数据源配置', 'jdbc');
+insert into `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('wds.linkis.jdbc.connect.url', '例如:jdbc:hive2://127.0.0.1:10000', 'jdbc连接地址', 'jdbc:hive2://127.0.0.1:10000', 'None', '', '0', '0', '1', '数据源配置', 'jdbc');
+insert into `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('wds.linkis.jdbc.version', '取值范围:jdbc3,jdbc4', 'jdbc版本','jdbc4', 'OFT', '[\"jdbc3\",\"jdbc4\"]', '0', '0', '1', '数据源配置', 'jdbc');
+insert into `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('wds.linkis.jdbc.connect.max', '范围:1-20,单位:个', 'jdbc引擎最大连接数', '10', 'NumInterval', '[1,20]', '0', '0', '1', '数据源配置', 'jdbc');
+
+insert into `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('wds.linkis.jdbc.auth.type', '取值范围:SIMPLE,USERNAME,KERBEROS', 'jdbc认证方式', 'USERNAME', 'OFT', '[\"SIMPLE\",\"USERNAME\",\"KERBEROS\"]', '0', '0', '1', '用户配置', 'jdbc');
+insert into `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('wds.linkis.jdbc.username', 'username', '数据库连接用户名', '', 'None', '', '0', '0', '1', '用户配置', 'jdbc');
+insert into `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('wds.linkis.jdbc.password', 'password', '数据库连接密码', '', 'None', '', '0', '0', '1', '用户配置', 'jdbc');
+insert into `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('wds.linkis.jdbc.principal', '例如:hadoop/host@KDC.COM', '用户principal', '', 'None', '', '0', '0', '1', '用户配置', 'jdbc');
+insert into `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('wds.linkis.jdbc.keytab.location', '例如:/data/keytab/hadoop.keytab', '用户keytab文件路径', '', 'None', '', '0', '0', '1', '用户配置', 'jdbc');
+insert into `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('wds.linkis.jdbc.proxy.user.property', '例如:hive.server2.proxy.user', '用户代理配置', '', 'None', '', '0', '0', '1', '用户配置', 'jdbc');
+
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('wds.linkis.engineconn.java.driver.cores', '取值范围:1-8,单位:个', 'jdbc引擎初始化核心个数', '1', 'NumInterval', '[1,8]', '0', '0', '1', 'jdbc引擎设置', 'jdbc');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `is_hidden`, `is_advanced`, `level`, `treeName`, `engine_conn_type`) VALUES ('wds.linkis.engineconn.java.driver.memory', '取值范围:1-8,单位:G', 'jdbc引擎初始化内存大小', '1g', 'Regex', '^([1-8])(G|g)$', '0', '0', '1', 'jdbc引擎设置', 'jdbc');
+
+insert into `linkis_cg_manager_label` (`label_key`, `label_value`, `label_feature`, `label_value_size`, `update_time`, `create_time`) VALUES ('combined_userCreator_engineType',@JDBC_ALL, 'OPTIONAL', 2, now(), now());
+
+insert into `linkis_ps_configuration_key_engine_relation` (`config_key_id`, `engine_type_label_id`)
+    (select config.id as `config_key_id`, label.id AS `engine_type_label_id` FROM linkis_ps_configuration_config_key config INNER JOIN linkis_cg_manager_label label ON config.engine_conn_type = 'jdbc' and label_value = @JDBC_ALL);
+
+insert into `linkis_cg_manager_label` (`label_key`, `label_value`, `label_feature`, `label_value_size`, `update_time`, `create_time`) VALUES ('combined_userCreator_engineType',@JDBC_IDE, 'OPTIONAL', 2, now(), now());
+insert into `linkis_cg_manager_label` (`label_key`, `label_value`, `label_feature`, `label_value_size`, `update_time`, `create_time`) VALUES ('combined_userCreator_engineType',@JDBC_NODE, 'OPTIONAL', 2, now(), now());
+
+
+
+select @label_id := id from linkis_cg_manager_label where `label_value` = @JDBC_IDE;
+insert into linkis_ps_configuration_category (`label_id`, `level`) VALUES (@label_id, 2);
+
+select @label_id := id from linkis_cg_manager_label where `label_value` = @JDBC_NODE;
+insert into linkis_ps_configuration_category (`label_id`, `level`) VALUES (@label_id, 2);
+
+
+-- jdbc default configuration
+insert into `linkis_ps_configuration_config_value` (`config_key_id`, `config_value`, `config_label_id`)
+    (select `relation`.`config_key_id` AS `config_key_id`, '' AS `config_value`, `relation`.`engine_type_label_id` AS `config_label_id` FROM linkis_ps_configuration_key_engine_relation relation INNER JOIN linkis_cg_manager_label label ON relation.engine_type_label_id = label.id AND label.label_value = @JDBC_ALL);
+```
+
+If you want to reset the engine's database configuration data, the reference SQL is as follows; modify and use it as needed:
+
+```sql
+-- Clear the initialization data of the jdbc engine
+SET @JDBC_LABEL="jdbc-4";
+
+SET @JDBC_ALL=CONCAT('*-*,',@JDBC_LABEL);
+SET @JDBC_IDE=CONCAT('*-IDE,',@JDBC_LABEL);
+SET @JDBC_NODE=CONCAT('*-nodeexecution,',@JDBC_LABEL);
+
+delete from `linkis_ps_configuration_config_value` where `config_label_id` in
+                                                           (select `relation`.`engine_type_label_id` AS `config_label_id` FROM `linkis_ps_configuration_key_engine_relation` relation INNER JOIN `linkis_cg_manager_label` label ON relation.engine_type_label_id = label.id AND label.label_value = @JDBC_ALL);
+
+
+delete from `linkis_ps_configuration_key_engine_relation`
+where `engine_type_label_id` in
+      (select label.id FROM `linkis_ps_configuration_config_key` config
+          INNER JOIN `linkis_cg_manager_label` label
+              ON config.engine_conn_type = 'jdbc' and label_value = @JDBC_ALL);
+
+
+delete from `linkis_ps_configuration_category`
+where `label_id` in (select id from `linkis_cg_manager_label` where `label_value` in(@JDBC_IDE, @JDBC_NODE));
+
+
+delete from `linkis_ps_configuration_config_key` where `engine_conn_type` = 'jdbc';
+
+delete from `linkis_cg_manager_label` where `label_value` in (@JDBC_ALL, @JDBC_IDE, @JDBC_NODE);
 
-``` 
-<dependency>
-<groupId>org.apache.linkis</groupId>
-<artifactId>linkis-computation-engineconn</artifactId>
-<version>${linkis.version}</version>
-</dependency>
 ```
-             
-As a subclass of "ComputationExecutor", "HiveEngineConnExecutor" implements the "executeLine" method. This method receives a line of execution statements and, after calling the Hive interface to execute it, returns different "ExecuteResponse"s to indicate success or failure. In this method, the result set, log and progress transmission are realized through the interfaces provided by "engineExecutorContext".
 
-The Hive engine only needs an Executor that executes HQL and is a single-executor engine. Therefore, when defining "HiveEngineConnFactory", it inherits "SingleExecutorEngineConnFactory" and implements the following two interfaces:
-a) createEngineConn: creates an object that contains "UserGroupInformation", "SessionState" and "HiveConf" as an encapsulation of the connection information with the underlying engine, sets it into the EngineConn object and returns it.
-b) createExecutor: creates a "HiveEngineConnExecutor" executor object based on the current engine connection information.
+Final effect:
+
+![JDBC engine](/Images/EngineConnNew/jdbc_engine_conf_detail.png)
+
+Only after this configuration can linkis-cli and Scripts correctly match the engine's label information and the data source's connection information when submitting engine scripts, and only then can the newly added engine be pulled up.
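+
+Once the configuration is in place, a quick smoke test can be run with linkis-cli. The flags below follow common linkis-cli usage; verify them against your Linkis version and replace the user names with your own:
+
+```shell
+sh ./bin/linkis-cli -engineType jdbc-4 -codeType jdbc \
+    -code "select 1" \
+    -submitUser hadoop -proxyUser hadoop
+```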
+
+### 2.9 Add the JDBC script type and icon information in DSS Scripts
+
+If you use the Scripts function of DSS, you also need to make some small changes to the front-end files of the web in the dss project. The purpose is to support creating, opening and executing JDBC engine script types in Scripts, and to implement the engine's corresponding icons, fonts, etc.
+
+#### 2.9.1 scriptis.js
+
+web/src/common/config/scriptis.js
+
+```js
+{
+  rule: /\.jdbc$/i,
+  lang: 'hql',
+  executable: true,
+  application: 'jdbc',
+  runType: 'jdbc',
+  ext: '.jdbc',
+  scriptType: 'jdbc',
+  abbr: 'jdbc',
+  logo: 'fi-jdbc',
+  color: '#444444',
+  isCanBeNew: true,
+  label: 'JDBC',
+  isCanBeOpen: true
+},
+```
+
+#### 2.9.2 Script copy support
+
+web/src/apps/scriptis/module/workSidebar/workSidebar.vue
+
+```js
+copyName() {
+  let typeArr = ['......', 'jdbc']
+}
+```
+
+#### 2.9.3 Logo and font color matching
+
+web/src/apps/scriptis/module/workbench/title.vue
+
+```js
+  data() {
+    return {
+      isHover: false,
+      iconColor: {
+        'fi-jdbc': '#444444',
+      },
+    };
+  },
+```
+
+web/src/apps/scriptis/module/workbench/modal.js
+
+```js
+let logoList = [
+  { rule: /\.jdbc$/i, logo: 'fi-jdbc' },
+];
+```
+
+web/src/components/tree/support.js
+
+```js
+export const supportTypes = [
+  // Probably not used here
+  { rule: /\.jdbc$/i, logo: 'fi-jdbc' },
+]
+```
+
+Engine icon display
+
+web/src/dss/module/resourceSimple/engine.vue
+
+```js
+methods: {
+  calssifyName(params) {
+     switch (params) {
+        case 'jdbc':
+          return 'JDBC';
+        ......
+     }
+  }
+  // icon filtering
+  supportIcon(item) {
+     const supportTypes = [
+				......
+        { rule: 'jdbc', logo: 'fi-jdbc' },
+      ];
+  }
+}
+```
+
+web/src/dss/assets/projectIconFont/iconfont.css
+
+```css
+.fi-jdbc:before {
+  content: "\e75e";
+}
+```
+
+This should control the following:
+
+![engine icon](/Images/EngineConnNew/jdbc_engine_logo.png)
+
+Find an svg file for the engine icon
+
+web/src/components/svgIcon/svg/fi-jdbc.svg
+
+If the new engine is to be contributed to the community later, the svg icons, fonts, etc. of the new engine need to have their open source license confirmed, or copyright permission obtained.
+
+### 2.10 Workflow adaptation of DSS
+
+The final result:
+
+![workflow adaptation](/Images/EngineConnNew/jdbc_job_flow.png)
+
+Save the definition data of the newly added JDBC engine in the dss_workflow_node table, refer to SQL:
+
+```sql
+-- Engine task node basic information definition
+insert into `dss_workflow_node` (`id`, `name`, `appconn_name`, `node_type`, `jump_url`, `support_jump`, `submit_to_scheduler`, `enable_copy`, `should_creation_before_node`, `icon`) values('18','jdbc','-1','linkis.jdbc.jdbc',NULL,'1','1','1','0','svg文件');
+
+-- The svg file corresponds to the new engine task node icon
+
+-- Classification and division of engine task nodes
+insert  into `dss_workflow_node_to_group`(`node_id`,`group_id`) values (18, 2);
+
+-- Basic information (parameter attribute) binding of the engine task node
+INSERT  INTO `dss_workflow_node_to_ui`(`workflow_node_id`,`ui_id`) VALUES (18,45);
+
+-- The basic information related to the engine task node is defined in the dss_workflow_node_ui table, and then displayed in the form of a form on the right side of the above figure. You can expand other basic information for the new engine, and then it will be automatically rendered by the form on the right.
+```
+
+web/src/apps/workflows/service/nodeType.js
+
+```js
+import jdbc from '../module/process/images/newIcon/jdbc.svg';
+
+const NODETYPE = {
+  ......
+  JDBC: 'linkis.jdbc.jdbc',
+}
+
+const ext = {
+	......
+  [NODETYPE.JDBC]: 'jdbc',
+}
+
+const NODEICON = {
+  [NODETYPE.JDBC]: {
+    icon: jdbc,
+    class: {'jdbc': true}
+  },
+}
+```
+
+Add the icon of the new engine in the web/src/apps/workflows/module/process/images/newIcon/ directory
+
+web/src/apps/workflows/module/process/images/newIcon/jdbc
+
+Also, when contributing to the community, please consider the license or copyright of the svg file.
 
-Hive engine is an ordinary Java process, so when implementing "EngineConnLaunchBuilder", it directly inherits "JavaProcessEngineConnLaunchBuilder". Memory size, Java parameters and the classPath can be adjusted through configuration; please refer to the "EnvConfiguration" class for details.
+## 3. Chapter Summary
 
-Hive engine uses the "LoadInstanceResource" resource, so there is no need to implement "EngineResourceFactory"; directly use the default "GenericEngineResourceFactory" and adjust the amount of resources through configuration; refer to the "EngineConnPluginConf" class for details.
+The above content records the implementation process of the new engine, as well as some additional engine configurations that need to be done. At present, the expansion process of a new engine is still relatively cumbersome, and it is hoped that the expansion and installation of the new engine can be optimized in subsequent versions.
 
-Implement "HiveEngineConnPlugin" and provide methods for creating the above implementation classes.
 
 

