Posted to commits@linkis.apache.org by pe...@apache.org on 2021/10/28 12:07:29 UTC

[incubator-linkis-website] branch asf-staging updated (76ffb1f -> 535fad6)

This is an automated email from the ASF dual-hosted git repository.

peacewong pushed a change to branch asf-staging
in repository https://gitbox.apache.org/repos/asf/incubator-linkis-website.git.


    from 76ffb1f  Merge pull request #8 from lucaszhu2zgf/asf-staging
     new bf2352f  bugfix for introduction
     new 4205780  Merge remote-tracking branch 'origin/asf-staging' into asf-staging
     new 535fad6  Merge pull request #9 from lucaszhu2zgf/asf-staging

The 50 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 assets/404.f24f37c0.js                                      |   1 -
 ...-manager-03.5aaff6ed.png => app-manager-01.5aaff6ed.png} | Bin
 assets/app_manager.bed25273.js                              |   2 +-
 assets/{download.4f121175.js => download.65cfe27b.js}       |   2 +-
 assets/{event.b677bf34.js => event.c4950b6a.js}             |   2 +-
 assets/index.83dab580.js                                    |   1 +
 assets/index.c319b82e.js                                    |   1 -
 assets/{index.ba4cbe23.js => index.dac2c111.js}             |   2 +-
 assets/{linkis.cdbb993f.js => linkis.513065ec.js}           |   2 +-
 assets/manager.6973d707.js                                  |   2 +-
 index.html                                                  |   2 +-
 11 files changed, 8 insertions(+), 9 deletions(-)
 delete mode 100644 assets/404.f24f37c0.js
 rename assets/{app-manager-03.5aaff6ed.png => app-manager-01.5aaff6ed.png} (100%)
 rename assets/{download.4f121175.js => download.65cfe27b.js} (95%)
 rename assets/{event.b677bf34.js => event.c4950b6a.js} (54%)
 create mode 100644 assets/index.83dab580.js
 delete mode 100644 assets/index.c319b82e.js
 rename assets/{index.ba4cbe23.js => index.dac2c111.js} (99%)
 rename assets/{linkis.cdbb993f.js => linkis.513065ec.js} (98%)

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@linkis.apache.org
For additional commands, e-mail: commits-help@linkis.apache.org


[incubator-linkis-website] 08/50: UPDATE DETAIL

Posted by pe...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

peacewong pushed a commit to branch asf-staging
in repository https://gitbox.apache.org/repos/asf/incubator-linkis-website.git

commit b712ce415f08edd6b7075a91d8024bd8f319f67a
Author: lucaszhu <lu...@webank.com>
AuthorDate: Thu Sep 30 14:55:38 2021 +0800

    UPDATE DETAIL
---
 README.md                                  | 2 ++
 src/pages/home.vue                         | 3 ++-
 src/style/base.less                        | 2 +-
 src/style/{virables.less => variable.less} | 0
 4 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 388afb6..67fe38d 100644
--- a/README.md
+++ b/README.md
@@ -5,11 +5,13 @@ The project is specially for Linkis, based on the newest `vite` & `vue3`
 ## Local Development
 
 ```
+npm i
 npm run dev
 ```
 
 ## Publish
 
 ```
+npm i
 npm run build
 ```
\ No newline at end of file
diff --git a/src/pages/home.vue b/src/pages/home.vue
index 07b7ab3..8fb9032 100644
--- a/src/pages/home.vue
+++ b/src/pages/home.vue
@@ -67,11 +67,12 @@
   </div>
 </template>
 <style lang="less" scoped>
-  @import url('/src/style/virables.less');
+  @import url('/src/style/variable.less');
   @import url('/src/style/base.less');
 
   .home-page {
     .home-block-title{
+      margin-bottom: 20px;
       font-size: 32px;
       line-height: 46px;
     }
diff --git a/src/style/base.less b/src/style/base.less
index 7f9d360..c4926ca 100644
--- a/src/style/base.less
+++ b/src/style/base.less
@@ -1,4 +1,4 @@
-@import './virables.less';
+@import './variable.less';
 
 * {
   box-sizing: border-box;
diff --git a/src/style/virables.less b/src/style/variable.less
similarity index 100%
rename from src/style/virables.less
rename to src/style/variable.less

---------------------------------------------------------------------


[incubator-linkis-website] 27/50: fix conflict

Posted by pe...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

peacewong pushed a commit to branch asf-staging
in repository https://gitbox.apache.org/repos/asf/incubator-linkis-website.git

commit 8bb62a6d90b24d43733382da6049c7add29953a9
Merge: e911adb 1a19cdf
Author: lucaszhu <lu...@webank.com>
AuthorDate: Wed Oct 13 16:41:27 2021 +0800

    fix conflict


---------------------------------------------------------------------


[incubator-linkis-website] 22/50: FIX: fix the homepage image size issue

Posted by pe...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

peacewong pushed a commit to branch asf-staging
in repository https://gitbox.apache.org/repos/asf/incubator-linkis-website.git

commit 7b1d497b6fda995bfe6f3b93bec53b19dd1f341a
Author: lucaszhu <lu...@webank.com>
AuthorDate: Wed Oct 13 10:37:02 2021 +0800

    FIX: fix the homepage image size issue
---
 src/pages/home.vue | 1 +
 1 file changed, 1 insertion(+)

diff --git a/src/pages/home.vue b/src/pages/home.vue
index 91f2f46..88bb72b 100644
--- a/src/pages/home.vue
+++ b/src/pages/home.vue
@@ -182,6 +182,7 @@
       }
       .description-image{
         margin-left: 40px;
+        width: 630px;
       }
     }
 

---------------------------------------------------------------------


[incubator-linkis-website] 46/50: Merge branch 'asf-staging' into asf-staging

Posted by pe...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

peacewong pushed a commit to branch asf-staging
in repository https://gitbox.apache.org/repos/asf/incubator-linkis-website.git

commit 9d11c8ee2c5139b973ebcc45420ae6ab3048ab46
Merge: 2305115 c59e3c1
Author: Casion <ca...@gmail.com>
AuthorDate: Thu Oct 28 19:43:34 2021 +0800

    Merge branch 'asf-staging' into asf-staging


---------------------------------------------------------------------


[incubator-linkis-website] 49/50: Merge remote-tracking branch 'origin/asf-staging' into asf-staging

Posted by pe...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

peacewong pushed a commit to branch asf-staging
in repository https://gitbox.apache.org/repos/asf/incubator-linkis-website.git

commit 420578061d651f8e4f1b7cd2be62e066aeb0765d
Merge: bf2352f 9d11c8e
Author: casionone <ca...@gmail.com>
AuthorDate: Thu Oct 28 20:05:17 2021 +0800

    Merge remote-tracking branch 'origin/asf-staging' into asf-staging


---------------------------------------------------------------------


[incubator-linkis-website] 18/50: Merge branch 'master' into add-docs

Posted by pe...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

peacewong pushed a commit to branch asf-staging
in repository https://gitbox.apache.org/repos/asf/incubator-linkis-website.git

commit a9a199e29f79b83e13f8bcbcd3c50ebef6755282
Merge: c5f5a20 7ab959c
Author: lucaszhu <lu...@webank.com>
AuthorDate: Tue Oct 12 10:17:09 2021 +0800

    Merge branch 'master' into add-docs

 src/assets/docs/deploy/Linkis1.0_combined_eureka.png | Bin 0 -> 134418 bytes
 src/docs/deploy/linkis_zh.md                         |   2 +-
 2 files changed, 1 insertion(+), 1 deletion(-)

---------------------------------------------------------------------


[incubator-linkis-website] 13/50: ADD: add README parsing

Posted by pe...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

peacewong pushed a commit to branch asf-staging
in repository https://gitbox.apache.org/repos/asf/incubator-linkis-website.git

commit c1aaacea568402382692aa74f0e2d8dde88d5d94
Author: lucaszhu <lu...@webank.com>
AuthorDate: Sat Oct 9 16:05:58 2021 +0800

    ADD: add README parsing
---
 package-lock.json  | 162 +++++++++++++++++++++++++++++++++
 package.json       |   4 +-
 src/docs/deploy.md | 256 +++++++++++++++++++++++++++++++++++++++++++++++++++++
 src/main.js        |   1 +
 src/pages/docs.vue |  56 +++++++++++-
 vite.config.js     |  11 ++-
 6 files changed, 487 insertions(+), 3 deletions(-)

diff --git a/package-lock.json b/package-lock.json
index a56c21f..ac4a059 100644
--- a/package-lock.json
+++ b/package-lock.json
@@ -51,6 +51,16 @@
         "@intlify/shared": "9.2.0-beta.11"
       }
     },
+    "@rollup/pluginutils": {
+      "version": "4.1.1",
+      "resolved": "http://10.107.103.115:8001/@rollup/pluginutils/download/@rollup/pluginutils-4.1.1.tgz",
+      "integrity": "sha1-HU2obdTt7RVlalfZM/2iuaCNR+w=",
+      "dev": true,
+      "requires": {
+        "estree-walker": "^2.0.1",
+        "picomatch": "^2.2.2"
+      }
+    },
     "@vitejs/plugin-vue": {
       "version": "1.9.2",
       "resolved": "http://10.107.103.115:8001/@vitejs/plugin-vue/download/@vitejs/plugin-vue-1.9.2.tgz",
@@ -161,6 +171,15 @@
       "resolved": "http://10.107.103.115:8001/@vue/shared/download/@vue/shared-3.2.19.tgz",
       "integrity": "sha1-ER7D2hgzfYYnREaYTEmSWxsrLdc="
     },
+    "argparse": {
+      "version": "1.0.10",
+      "resolved": "http://10.107.103.115:8001/argparse/download/argparse-1.0.10.tgz",
+      "integrity": "sha1-vNZ5HqWuCXJeF+WtmIE0zUCz2RE=",
+      "dev": true,
+      "requires": {
+        "sprintf-js": "~1.0.2"
+      }
+    },
     "copy-anything": {
       "version": "2.0.3",
       "resolved": "http://10.107.103.115:8001/copy-anything/download/copy-anything-2.0.3.tgz",
@@ -201,6 +220,12 @@
       "integrity": "sha1-vmAtt8TceJRKnb3g0eoZ02wfiC0=",
       "dev": true
     },
+    "esprima": {
+      "version": "4.0.1",
+      "resolved": "http://10.107.103.115:8001/esprima/download/esprima-4.0.1.tgz",
+      "integrity": "sha1-E7BM2z5sXRnfkatph6hpVhmwqnE=",
+      "dev": true
+    },
     "estree-walker": {
       "version": "2.0.2",
       "resolved": "http://10.107.103.115:8001/estree-walker/download/estree-walker-2.0.2.tgz",
@@ -219,6 +244,11 @@
       "integrity": "sha1-pWiZ0+o8m6uHS7l3O3xe3pL0iV0=",
       "dev": true
     },
+    "github-markdown-css": {
+      "version": "4.0.0",
+      "resolved": "http://10.107.103.115:8001/github-markdown-css/download/github-markdown-css-4.0.0.tgz",
+      "integrity": "sha1-vp9Mr3o4kijUw2gzYmD/yQkGHzU="
+    },
     "graceful-fs": {
       "version": "4.2.8",
       "resolved": "http://10.107.103.115:8001/graceful-fs/download/graceful-fs-4.2.8.tgz",
@@ -226,6 +256,18 @@
       "dev": true,
       "optional": true
     },
+    "gray-matter": {
+      "version": "4.0.3",
+      "resolved": "http://10.107.103.115:8001/gray-matter/download/gray-matter-4.0.3.tgz",
+      "integrity": "sha1-6JPAZIJd5z6h9ffYjHqfcnQoh5g=",
+      "dev": true,
+      "requires": {
+        "js-yaml": "^3.13.1",
+        "kind-of": "^6.0.2",
+        "section-matter": "^1.0.0",
+        "strip-bom-string": "^1.0.0"
+      }
+    },
     "has": {
       "version": "1.0.3",
       "resolved": "http://10.107.103.115:8001/has/download/has-1.0.3.tgz",
@@ -261,12 +303,34 @@
         "has": "^1.0.3"
       }
     },
+    "is-extendable": {
+      "version": "0.1.1",
+      "resolved": "http://10.107.103.115:8001/is-extendable/download/is-extendable-0.1.1.tgz",
+      "integrity": "sha1-YrEQ4omkcUGOPsNqYX1HLjAd/Ik=",
+      "dev": true
+    },
     "is-what": {
       "version": "3.14.1",
       "resolved": "http://10.107.103.115:8001/is-what/download/is-what-3.14.1.tgz",
       "integrity": "sha1-4SIvRt3ahd6tD9HJ3xMXYOd3VcE=",
       "dev": true
     },
+    "js-yaml": {
+      "version": "3.14.1",
+      "resolved": "http://10.107.103.115:8001/js-yaml/download/js-yaml-3.14.1.tgz",
+      "integrity": "sha1-2ugS/bOCX6MGYJqHFzg8UMNqBTc=",
+      "dev": true,
+      "requires": {
+        "argparse": "^1.0.7",
+        "esprima": "^4.0.0"
+      }
+    },
+    "kind-of": {
+      "version": "6.0.3",
+      "resolved": "http://10.107.103.115:8001/kind-of/download/kind-of-6.0.3.tgz",
+      "integrity": "sha1-B8BQNKbDSfoG4k+jWqdttFgM5N0=",
+      "dev": true
+    },
     "less": {
       "version": "4.1.1",
       "resolved": "http://10.107.103.115:8001/less/download/less-4.1.1.tgz",
@@ -304,6 +368,48 @@
         "semver": "^5.6.0"
       }
     },
+    "markdown-it": {
+      "version": "12.2.0",
+      "resolved": "http://10.107.103.115:8001/markdown-it/download/markdown-it-12.2.0.tgz",
+      "integrity": "sha1-CR9yD9XbIG+A3nqNHxpwNf0NONs=",
+      "dev": true,
+      "requires": {
+        "argparse": "^2.0.1",
+        "entities": "~2.1.0",
+        "linkify-it": "^3.0.1",
+        "mdurl": "^1.0.1",
+        "uc.micro": "^1.0.5"
+      },
+      "dependencies": {
+        "argparse": {
+          "version": "2.0.1",
+          "resolved": "http://10.107.103.115:8001/argparse/download/argparse-2.0.1.tgz",
+          "integrity": "sha1-JG9Q88p4oyQPbJl+ipvR6sSeSzg=",
+          "dev": true
+        },
+        "entities": {
+          "version": "2.1.0",
+          "resolved": "http://10.107.103.115:8001/entities/download/entities-2.1.0.tgz",
+          "integrity": "sha1-mS0xKc999ocLlsV4WMJJoSD4uLU=",
+          "dev": true
+        },
+        "linkify-it": {
+          "version": "3.0.3",
+          "resolved": "http://10.107.103.115:8001/linkify-it/download/linkify-it-3.0.3.tgz",
+          "integrity": "sha1-qYuvRM5FpVDvtNScdp0HUkzC+i4=",
+          "dev": true,
+          "requires": {
+            "uc.micro": "^1.0.1"
+          }
+        }
+      }
+    },
+    "mdurl": {
+      "version": "1.0.1",
+      "resolved": "http://10.107.103.115:8001/mdurl/download/mdurl-1.0.1.tgz",
+      "integrity": "sha1-/oWy7HWlkDfyrf7BAP1sYBdhFS4=",
+      "dev": true
+    },
     "mime": {
       "version": "1.6.0",
       "resolved": "http://10.107.103.115:8001/mime/download/mime-1.6.0.tgz",
@@ -352,6 +458,12 @@
       "integrity": "sha1-+8EUtgykKzDZ2vWFjkvWi77bZzU=",
       "dev": true
     },
+    "picomatch": {
+      "version": "2.3.0",
+      "resolved": "http://10.107.103.115:8001/picomatch/download/picomatch-2.3.0.tgz",
+      "integrity": "sha1-8fBh3o9qS/AiiS4tEoI0+5gwKXI=",
+      "dev": true
+    },
     "pify": {
       "version": "4.0.1",
       "resolved": "http://10.107.103.115:8001/pify/download/pify-4.0.1.tgz",
@@ -409,6 +521,27 @@
       "dev": true,
       "optional": true
     },
+    "section-matter": {
+      "version": "1.0.0",
+      "resolved": "http://10.107.103.115:8001/section-matter/download/section-matter-1.0.0.tgz",
+      "integrity": "sha1-6QQZU1BngOwB1Z8pKhnHuFC4QWc=",
+      "dev": true,
+      "requires": {
+        "extend-shallow": "^2.0.1",
+        "kind-of": "^6.0.0"
+      },
+      "dependencies": {
+        "extend-shallow": {
+          "version": "2.0.1",
+          "resolved": "http://10.107.103.115:8001/extend-shallow/download/extend-shallow-2.0.1.tgz",
+          "integrity": "sha1-Ua99YUrZqfYQ6huvu5idaxxWiQ8=",
+          "dev": true,
+          "requires": {
+            "is-extendable": "^0.1.0"
+          }
+        }
+      }
+    },
     "semver": {
       "version": "5.7.1",
       "resolved": "http://10.107.103.115:8001/semver/download/semver-5.7.1.tgz",
@@ -431,12 +564,30 @@
       "resolved": "http://10.107.103.115:8001/sourcemap-codec/download/sourcemap-codec-1.4.8.tgz",
       "integrity": "sha1-6oBL2UhXQC5pktBaOO8a41qatMQ="
     },
+    "sprintf-js": {
+      "version": "1.0.3",
+      "resolved": "http://10.107.103.115:8001/sprintf-js/download/sprintf-js-1.0.3.tgz",
+      "integrity": "sha1-BOaSb2YolTVPPdAVIDYzuFcpfiw=",
+      "dev": true
+    },
+    "strip-bom-string": {
+      "version": "1.0.0",
+      "resolved": "http://10.107.103.115:8001/strip-bom-string/download/strip-bom-string-1.0.0.tgz",
+      "integrity": "sha1-5SEekiQ2n7uB1jOi8ABE3IztrZI=",
+      "dev": true
+    },
     "tslib": {
       "version": "1.14.1",
       "resolved": "http://10.107.103.115:8001/tslib/download/tslib-1.14.1.tgz",
       "integrity": "sha1-zy04vcNKE0vK8QkcQfZhni9nLQA=",
       "dev": true
     },
+    "uc.micro": {
+      "version": "1.0.6",
+      "resolved": "http://10.107.103.115:8001/uc.micro/download/uc.micro-1.0.6.tgz",
+      "integrity": "sha1-nEEagCpAmpH8bPdAgbq6NLJEmaw=",
+      "dev": true
+    },
     "vite": {
       "version": "2.5.10",
       "resolved": "http://10.107.103.115:8001/vite/download/vite-2.5.10.tgz",
@@ -450,6 +601,17 @@
         "rollup": "^2.38.5"
       }
     },
+    "vite-plugin-md": {
+      "version": "0.11.1",
+      "resolved": "http://10.107.103.115:8001/vite-plugin-md/download/vite-plugin-md-0.11.1.tgz",
+      "integrity": "sha1-vEBFXrVnZzlenM9G70pQ6MDSt0g=",
+      "dev": true,
+      "requires": {
+        "@rollup/pluginutils": "^4.1.1",
+        "gray-matter": "^4.0.3",
+        "markdown-it": "^12.2.0"
+      }
+    },
     "vue": {
       "version": "3.2.19",
       "resolved": "http://10.107.103.115:8001/vue/download/vue-3.2.19.tgz",
diff --git a/package.json b/package.json
index 1e0962d..fbf0877 100644
--- a/package.json
+++ b/package.json
@@ -7,6 +7,7 @@
     "serve": "vite preview"
   },
   "dependencies": {
+    "github-markdown-css": "^4.0.0",
     "vue": "^3.2.13",
     "vue-i18n": "^9.2.0-beta.11",
     "vue-router": "^4.0.11"
@@ -14,6 +15,7 @@
   "devDependencies": {
     "@vitejs/plugin-vue": "^1.9.0",
     "less": "^4.1.1",
-    "vite": "^2.5.10"
+    "vite": "^2.5.10",
+    "vite-plugin-md": "^0.11.1"
   }
 }
diff --git a/src/docs/deploy.md b/src/docs/deploy.md
new file mode 100644
index 0000000..523ac90
--- /dev/null
+++ b/src/docs/deploy.md
@@ -0,0 +1,256 @@
+## Notes
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**If this is your first time using Linkis, you can skip this section; if you are already a Linkis user, we recommend reading [A Brief Description of the Differences Between Linkis1.0 and Linkis0.X](https://github.com/WeBankFinTech/Linkis-Doc/blob/master/zh_CN/Architecture_Documents/Linkis1.0%E4%B8%8ELinkis0.X%E7%9A%84%E5%8C%BA%E5%88%AB%E7%AE%80%E8%BF%B0.md) before installing or upgrading**.
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Please note: apart from the four EngineConnPlugins that the Linkis1.0 installation package includes by default (Python/Shell/Hive/Spark), you can manually install other engine types such as the JDBC engine if needed; for details see the [EngineConnPlugin Engine Plugin Installation Document](https://github.com/WeBankFinTech/Linkis-Doc/blob/master/zh_CN/Deployment_Documents/EngineConnPlugin%E5%BC%95%E6%93%8E%E6%8F%92%E4%BB%B6%E5%AE%89%E8%A3%85%E6%96%87%E6%A1%A3.md).
+
+**Linkis Docker image**  
+[Linkis 0.10.0 Docker](https://hub.docker.com/repository/docker/wedatasphere/linkis)
+
+The engines already adapted by default for Linkis1.0 are listed below:
+
+| Engine type | Adaptation status | Included in the official package |
+|---|---|---|
+| Python | Adapted in 1.0 | Included |
+| JDBC | Adapted in 1.0 | **Not included** |
+| Shell | Adapted in 1.0 | Included |
+| Hive | Adapted in 1.0 | Included |
+| Spark | Adapted in 1.0 | Included |
+| Pipeline | Adapted in 1.0 | **Not included** |
+| Presto | **Not adapted in 1.0** | **Not included** |
+| ElasticSearch | **Not adapted in 1.0** | **Not included** |
+| Impala | **Not adapted in 1.0** | **Not included** |
+| MLSQL | **Not adapted in 1.0** | **Not included** |
+| TiSpark | **Not adapted in 1.0** | **Not included** |
+
+## 1. Determine Your Installation Environment
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;The dependencies of each engine are listed here:
+
+| Engine type | Environment dependencies | Notes |
+|---|---|---|
+| Python | Python environment | If logs and result sets are configured as hdfs://, an HDFS environment is required |
+| JDBC | Can run without dependencies | If log and result-set paths are configured as hdfs://, an HDFS environment is required |
+| Shell | Can run without dependencies | If log and result-set paths are configured as hdfs://, an HDFS environment is required |
+| Hive | Requires Hadoop and Hive environments |  |
+| Spark | Requires Hadoop/Hive/Spark |  |
+
+**Requirement: installing Linkis needs at least 3 GB of memory.**
+
+By default, each microservice gets a 512 MB JVM heap. You can adjust the heap of all microservices at once by modifying `SERVER_HEAP_SIZE`; if your server has limited resources, we recommend setting it to 128 MB, as follows:
+
+```bash
+    vim ${LINKIS_HOME}/config/linkis-env.sh
+```
+
+```bash
+    # java application default jvm memory.
+    export SERVER_HEAP_SIZE="128M"
+```
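+
+You can confirm the change took effect with a quick check (a convenience step, not from the original doc):
+
+```bash
+    # print the configured heap size to verify the edit
+    grep SERVER_HEAP_SIZE ${LINKIS_HOME}/config/linkis-env.sh
+```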
+
+----
+
+## 2. Linkis Environment Preparation
+
+### a. Install basic software
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;The following software must be installed:
+
+- MySQL (5.5+), [how to install MySQL](https://www.runoob.com/mysql/mysql-install.html)
+- JDK (1.8.0_141 or above), [how to install JDK](https://www.runoob.com/java/java-environment-setup.html)
+
+ 
+### b. Create a user
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;For example: **the deploy user is the hadoop account**
+
+1. Create the deploy user on the deployment machine; it will be used for the installation
+
+```bash
+    sudo useradd hadoop  
+```
+2. Because Linkis services switch engines via `sudo -u ${linux-user}` to execute jobs, the deploy user needs sudo permission, and it must be passwordless.
+
+```bash
+    vi /etc/sudoers
+```
+
+```text
+    hadoop  ALL=(ALL)       NOPASSWD: ALL
+```
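+
+To verify, log in as the deploy user and run a non-interactive sudo command (a minimal check, not part of the original steps):
+
+```bash
+    # -n makes sudo fail instead of prompting, so this succeeds only if sudo is passwordless
+    sudo -n true && echo "passwordless sudo OK"
+```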
+
+3. **Set the following global environment variables on every installation node so that Linkis can use Hadoop, Hive and Spark properly**.
+  
+   Edit the install user's .bashrc as follows:
+
+```bash     
+    vim /home/hadoop/.bashrc ##taking the deploy user hadoop as an example
+```
+
+   An example of the environment variables is given below:
+
+```bash
+    #JDK
+    export JAVA_HOME=/nemo/jdk1.8.0_141
+
+    ##If you do not use engines such as Hive or Spark and do not depend on Hadoop, the following variables need not be set
+    #HADOOP  
+    export HADOOP_HOME=/appcom/Install/hadoop
+    export HADOOP_CONF_DIR=/appcom/config/hadoop-config
+    #Hive
+    export HIVE_HOME=/appcom/Install/hive
+    export HIVE_CONF_DIR=/appcom/config/hive-config
+    #Spark
+    export SPARK_HOME=/appcom/Install/spark
+    export SPARK_CONF_DIR=/appcom/config/spark-config/
+    export PYSPARK_ALLOW_INSECURE_GATEWAY=1  # parameter required by Pyspark
+```
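+
+After editing, reload the file in the current shell so the variables take effect (standard shell usage, shown here for completeness):
+
+```bash
+    # apply the new environment variables without re-logging in
+    source /home/hadoop/.bashrc
+```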
+
+4. **If you want plotting support in your Pyspark and Python, also install the plotting module on every installation node**. Command:
+
+```bash
+    python -m pip install matplotlib
+```
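+
+You can confirm the module is importable afterwards (a simple sanity check, not from the original doc):
+
+```bash
+    # prints the installed matplotlib version if the install succeeded
+    python -c "import matplotlib; print(matplotlib.__version__)"
+```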
+
+### c. Prepare the installation package
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Download the latest installation package from the published Linkis releases ([click here for the download page](https://github.com/WeBankFinTech/Linkis/releases)).
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;First extract the installation package into the installation directory, then modify the configuration of the extracted files.
+
+```bash   
+    tar -xvf  wedatasphere-linkis-x.x.x-combined-package-dist.tar.gz
+```
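+
+Then change into the extracted directory before editing its configuration files (the directory name below is illustrative; it follows the archive name):
+
+```bash
+    # enter the extracted package directory
+    cd wedatasphere-linkis-x.x.x-combined-package-dist
+```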
+      
+### d. Basic configuration changes when not depending on HDFS
+
+```bash
+    vi config/linkis-env.sh
+```
+        
+```properties
+
+    #SSH_PORT=22        #Specify the SSH port; may be left unset for a single-machine installation
+    deployUser=hadoop      #Specify the deploy user
+    LINKIS_INSTALL_HOME=/appcom/Install/Linkis    # Specify the installation directory
+    WORKSPACE_USER_ROOT_PATH=file:///tmp/hadoop    # Specify the user root path, usually used to store user scripts, log files, etc.; it is the user's workspace.
+    RESULT_SET_ROOT_PATH=file:///tmp/linkis   # Result-set path, used to store the result-set files of jobs
+    ENGINECONN_ROOT_PATH=/appcom/tmp #Installation path for ECP; must be a local directory writable by the deploy user
+    ENTRANCE_CONFIG_LOG_PATH=file:///tmp/linkis/  #Log path of ENTRANCE
+    ## LDAP configuration. By default Linkis only supports login by the deploy user; to support multi-user login, use LDAP and configure the following parameters:
+    #LDAP_URL=ldap://localhost:1389/ 
+    #LDAP_BASEDN=dc=webank,dc=com
+```
+### e. Basic configuration changes when depending on HDFS/Hive/Spark
+
+```bash
+     vi config/linkis-env.sh
+```
+        
+```properties
+    SSH_PORT=22        #Specify the SSH port; may be left unset for a single-machine installation
+    deployUser=hadoop      #Specify the deploy user
+    WORKSPACE_USER_ROOT_PATH=file:///tmp/hadoop    # Specify the user root path, usually used to store user scripts, log files, etc.; it is the user's workspace.
+    RESULT_SET_ROOT_PATH=hdfs:///tmp/linkis   # Result-set path, used to store the result-set files of jobs
+    ENGINECONN_ROOT_PATH=/appcom/tmp #Installation path for ECP; must be a local directory writable by the deploy user
+    ENTRANCE_CONFIG_LOG_PATH=hdfs:///tmp/linkis/  #Log path of ENTRANCE
+
+    #Because 1.0 supports multiple Yarn clusters, YARN_RESTFUL_URL must be configured whenever Yarn queue resources are used
+    YARN_RESTFUL_URL=http://127.0.0.1:8088  #Address of Yarn's ResourceManager
+
+    # If you use it together with Scriptis and a CDH build of Hive, also configure the following parameters (community Hive can ignore them)
+    HIVE_META_URL=jdbc://...   # URL of the HiveMeta metadata database
+    HIVE_META_USER=   # User of the HiveMeta metadata database
+    HIVE_META_PASSWORD=    # Password of the HiveMeta metadata database
+    
+    # Configure the hadoop/hive/spark configuration directories
+    HADOOP_CONF_DIR=/appcom/config/hadoop-config  #hadoop conf directory
+    HIVE_CONF_DIR=/appcom/config/hive-config   #hive conf directory
+    SPARK_CONF_DIR=/appcom/config/spark-config #spark conf directory
+
+    ## LDAP configuration. By default Linkis only supports login by the deploy user; to support multi-user login, use LDAP and configure the following parameters:
+    #LDAP_URL=ldap://localhost:1389/ 
+    #LDAP_BASEDN=dc=webank,dc=com
+    
+    ##If your Spark version is not 2.4.3, set:
+    #SPARK_VERSION=3.1.1
+
+    ##If your Hive version is not 1.2.1, set:
+    #HIVE_VERSION=2.3.3
+```
+
+### f. Modify the database configuration 
+
+```bash   
+    vi config/db.sh 
+```
+            
+```properties    
+
+    # Set the database connection information,
+    # including the IP address, database name, username and port.
+    # Mainly used to store users' custom variables, configuration parameters, UDFs and small functions, and to provide the underlying storage for JobHistory
+    MYSQL_HOST=
+    MYSQL_PORT=
+    MYSQL_DB=
+    MYSQL_USER=
+    MYSQL_PASSWORD=
+ ```
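+
+For a simple single-node setup, the filled-in values might look like this (the values below are purely illustrative, not defaults from the doc):
+
+```bash
+    # illustrative values only -- point these at your own MySQL instance
+    MYSQL_HOST=127.0.0.1
+    MYSQL_PORT=3306
+    MYSQL_DB=linkis
+    MYSQL_USER=linkis
+    MYSQL_PASSWORD=your_password
+```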
+ 
+## 3. Installation and Startup
+
+### 1. Run the installation script:
+
+```bash
+    sh bin/install.sh
+```
+
+### 2. Installation steps
+
+- The install.sh script will ask whether you need to initialize the database and import metadata.
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Because re-running the install.sh script could wipe the user data already in the database, install.sh asks the user whether to initialize the database and import metadata each time it runs.
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;For a **first-time installation** you must answer yes.
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Please note: if you are upgrading an existing Linkis0.X environment to Linkis1.0, do not simply answer yes; first consult the [Linkis1.0 Upgrade Guide](https://github.com/WeBankFinTech/Linkis-Doc/blob/master/zh_CN/Upgrade_Documents/Linkis%E4%BB%8E0.X%E5%8D%87%E7%BA%A7%E5%88%B01.0%E6%8C%87%E5%8D%97.md)**.
+
+### 3. Check whether the installation succeeded:
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Check the log messages printed on the console to see whether the installation succeeded.
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;If there are error messages, you can inspect them for the specific cause of the failure.
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;You can also consult our [FAQ](https://docs.qq.com/doc/DSGZhdnpMV3lTUUxq) for answers to common problems.
+
+### 4. Quick-start Linkis
+
+#### (1) Start the services:
+  
+  Run the following command in the installation directory to start all services:    
+
+```bash  
+  sh sbin/linkis-start-all.sh
+```
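+
+The same directory also provides a matching stop script, which is useful when you need to restart everything (assuming the standard Linkis layout):
+
+```bash
+  # stops all Linkis microservices started above
+  sh sbin/linkis-stop-all.sh
+```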
+        
+#### (2) Check whether startup succeeded
+    
+  You can check on the Eureka page whether the services started successfully. To do so:
+    
+  Open http://${EUREKA_INSTALL_IP}:${EUREKA_PORT} in a browser and check whether the services registered successfully.
+    
+  If you did not specify EUREKA_INSTALL_IP and EUREKA_PORT in config.sh, the HTTP address is http://127.0.0.1:20303. You can also query Eureka from the command line, as sketched below.
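+
+Eureka's standard REST endpoint lists the registered services (a sketch assuming the default local address; the endpoint is standard Eureka, not Linkis-specific):
+
+```bash
+    # lists all applications currently registered with Eureka, as JSON
+    curl -s -H "Accept: application/json" http://127.0.0.1:20303/eureka/apps
+```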
+    
+  As shown in the figure below, if the following microservices appear on your Eureka homepage, all services started successfully and can serve requests normally:
+
+  Eight Linkis microservices are started by default; the linkis-cg-engineconn service in the figure starts only when a job is run.
+   
+![Linkis1.0_Eureka](../Images/deployment/Linkis1.0_combined_eureka.png)
+
+#### (3) Check whether the services are healthy
+1. After the services start successfully, you can verify them by installing the front-end management console; [jump to the console installation document](https://github.com/WeBankFinTech/Linkis-Doc/blob/master/zh_CN/Deployment_Documents/%E5%89%8D%E7%AB%AF%E7%AE%A1%E7%90%86%E5%8F%B0%E5%AE%89%E8%A3%85%E6%96%87%E6%A1%A3.md)
+2. You can also use the Linkis user manual to test whether Linkis can run jobs normally; [jump to the user manual](https://github.com/WeBankFinTech/Linkis-Doc/blob/master/zh_CN/User_Manual/README.md)
diff --git a/src/main.js b/src/main.js
index 1772dd3..7b7b5c3 100644
--- a/src/main.js
+++ b/src/main.js
@@ -3,6 +3,7 @@ import { createRouter, createWebHashHistory } from 'vue-router'
 import routes from './router';
 import App from './App.vue';
 import i18n from './i18n';
+import 'github-markdown-css';
 
 const router = createRouter({
   history: createWebHashHistory(),
diff --git a/src/pages/docs.vue b/src/pages/docs.vue
index b33becc..c720d2d 100644
--- a/src/pages/docs.vue
+++ b/src/pages/docs.vue
@@ -1,3 +1,57 @@
 <template>
-  <div>docs</div>
+  <div class="ctn-block reading-area">
+    <main class="main-content">
+      <deploy></deploy>
+    </main>
+    <div class="side-bar">
+      <a :href="'#/blog' + doc.anchor" class="bar-item" v-for="(doc,index) in docs" :key="index">{{doc.title}}
+        <a :href="'#/blog' + children.anchor" class="bar-item" v-for="(children,cindex) in doc.children" :key="cindex">{{children.title}}
+        </a>
+      </a>
+    </div>
+  </div>
 </template>
+<style lang="less" scoped>
+  .reading-area {
+    display: flex;
+    padding: 60px 0;
+
+    .main-content {
+      width: 900px;
+      padding: 30px;
+      min-height: 600px;
+    }
+
+    .side-bar {
+      flex: 1;
+      padding: 18px 0;
+      border-left: 1px solid #eaecef;
+
+      .bar-item {
+        display: block;
+        padding: 5px 18px;
+        color: #4A4A4A;
+      }
+    }
+  }
+</style>
+<script setup>
+import deploy from '../docs/deploy.md';
+  const docs = [{
+    title: 'Deployment documents',
+    anchor: 'deploy',
+    children: [{
+      title: 'Quick deployment of Linkis1.0',
+      anchor: 'deploy-linkis'
+    }, {
+      title: 'Quick installation of the EngineConnPlugin engine plugin',
+      anchor: 'deploy-engine'
+    }, {
+      title: 'Linkis1.0 distributed deployment manual',
+      anchor: 'deploy-handbook'
+    }, {
+      title: 'Linkis1.0 installation package directory structure explained',
+      anchor: 'deploy-detail'
+    }]
+  }]
+</script>
\ No newline at end of file
diff --git a/vite.config.js b/vite.config.js
index 315212d..44b54d0 100644
--- a/vite.config.js
+++ b/vite.config.js
@@ -1,7 +1,16 @@
 import { defineConfig } from 'vite'
 import vue from '@vitejs/plugin-vue'
+import Markdown from 'vite-plugin-md'
 
 // https://vitejs.dev/config/
 export default defineConfig({
-  plugins: [vue()]
+  plugins: [
+    vue({include: [/\.vue$/, /\.md$/]}),
+    Markdown({
+      markdownItOptions: {
+        html: true,
+        linkify: true
+      }
+    })
+  ]
 })

---------------------------------------------------------------------


[incubator-linkis-website] 41/50: Merge pull request #1 from casionone/asf-staging

Posted by pe...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

peacewong pushed a commit to branch asf-staging
in repository https://gitbox.apache.org/repos/asf/incubator-linkis-website.git

commit 529ece431e80cdd6b71a03a25d111075215e672f
Merge: 039f325 189eb6c
Author: johnnywang <wp...@gmail.com>
AuthorDate: Thu Oct 21 14:40:35 2021 +0800

    Merge pull request #1 from casionone/asf-staging
    
    init for asf-staging

 .vscode/extensions.json                            |   3 -
 Linkis-Doc-master/LANGS.md                         |   2 -
 Linkis-Doc-master/README.md                        | 114 ----
 Linkis-Doc-master/README_CN.md                     | 105 ----
 .../en_US/API_Documentations/JDBC_API_Document.md  |  45 --
 ...sk_submission_and_execution_RestAPI_document.md | 170 ------
 .../en_US/API_Documentations/Login_API.md          | 125 ----
 .../en_US/API_Documentations/README.md             |   8 -
 .../EngineConn/README.md                           |  99 ----
 .../EngineConnManager/Images/ECM-01.png            | Bin 34340 -> 0 bytes
 .../EngineConnManager/Images/ECM-02.png            | Bin 25340 -> 0 bytes
 .../EngineConnManager/README.md                    |  45 --
 .../EngineConnPlugin/README.md                     |  68 ---
 .../LinkisManager/AppManager.md                    |  33 --
 .../LinkisManager/LabelManager.md                  |  38 --
 .../LinkisManager/README.md                        |  41 --
 .../LinkisManager/ResourceManager.md               | 132 -----
 .../Computation_Governance_Services/README.md      |  40 --
 .../DifferenceBetween1.0&0.x.md                    |  50 --
 .../How_to_add_an_EngineConn.md                    | 105 ----
 ...submission_preparation_and_execution_process.md | 138 -----
 .../Microservice_Governance_Services/Gateway.md    |  34 --
 .../Microservice_Governance_Services/README.md     |  32 -
 .../Public_Enhancement_Services/BML.md             |  93 ---
 .../ContextService/ContextService_Cache.md         |  95 ---
 .../ContextService/ContextService_Client.md        |  61 --
 .../ContextService/ContextService_HighAvailable.md |  86 ---
 .../ContextService/ContextService_Listener.md      |  33 --
 .../ContextService/ContextService_Persistence.md   |   8 -
 .../ContextService/ContextService_Search.md        | 127 ----
 .../ContextService/ContextService_Service.md       |  53 --
 .../ContextService/README.md                       | 123 ----
 .../Public_Enhancement_Services/PublicService.md   |  34 --
 .../Public_Enhancement_Services/README.md          |  91 ---
 .../en_US/Architecture_Documents/README.md         |  18 -
 .../Deployment_Documents/Cluster_Deployment.md     |  98 ----
 .../EngineConnPlugin_installation_document.md      |  82 ---
 ...75\262\345\276\256\346\234\215\345\212\241.png" | Bin 130148 -> 0 bytes
 .../Installation_Hierarchical_Structure.md         | 198 -------
 .../Deployment_Documents/Quick_Deploy_Linkis1.0.md | 246 --------
 .../en_US/Development_Documents/Contributing.md    | 195 -------
 .../Development_Specification/API.md               | 143 -----
 .../Development_Specification/Concurrent.md        |  17 -
 .../Development_Specification/Exception_Catch.md   |   9 -
 .../Development_Specification/Exception_Throws.md  |  52 --
 .../Development_Specification/Log.md               |  13 -
 .../Development_Specification/Path_Usage.md        |  15 -
 .../Development_Specification/README.md            |   9 -
 .../Linkis_Compilation_Document.md                 | 135 -----
 .../Linkis_Compile_and_Package.md                  | 155 -----
 .../en_US/Development_Documents/Linkis_DEBUG.md    | 141 -----
 .../New_EngineConn_Development.md                  |  77 ---
 .../Hive_User_Manual.md                            |  81 ---
 .../JDBC_User_Manual.md                            |  53 --
 .../Python_User_Manual.md                          |  61 --
 .../en_US/Engine_Usage_Documentations/README.md    |  25 -
 .../Shell_User_Manual.md                           |  55 --
 .../Spark_User_Manual.md                           |  91 ---
 .../add_an_EngineConn_flow_chart.png               | Bin 59893 -> 0 bytes
 .../Architecture/EngineConn/engineconn-01.png      | Bin 157753 -> 0 bytes
 .../Gateway/gateway_server_dispatcher.png          | Bin 83743 -> 0 bytes
 .../Architecture/Gateway/gateway_server_global.png | Bin 85272 -> 0 bytes
 .../Architecture/Gateway/gatway_websocket.png      | Bin 37769 -> 0 bytes
 .../execution.png                                  | Bin 31078 -> 0 bytes
 .../orchestrate.png                                | Bin 31095 -> 0 bytes
 .../overall.png                                    | Bin 231192 -> 0 bytes
 .../physical_tree.png                              | Bin 79471 -> 0 bytes
 .../result_acquisition.png                         | Bin 41007 -> 0 bytes
 .../submission.png                                 | Bin 12946 -> 0 bytes
 .../LabelManager/label_manager_builder.png         | Bin 62978 -> 0 bytes
 .../LabelManager/label_manager_global.png          | Bin 14988 -> 0 bytes
 .../LabelManager/label_manager_scorer.png          | Bin 72977 -> 0 bytes
 .../Linkis0.X-NewEngine-architecture.png           | Bin 244826 -> 0 bytes
 .../Architecture/Linkis0.X-services-list.png       | Bin 66821 -> 0 bytes
 .../Linkis1.0-EngineConn-architecture.png          | Bin 157753 -> 0 bytes
 .../Linkis1.0-NewEngine-architecture.png           | Bin 26523 -> 0 bytes
 .../Images/Architecture/Linkis1.0-architecture.png | Bin 212362 -> 0 bytes
 .../Linkis1.0-newEngine-initialization.png         | Bin 48313 -> 0 bytes
 .../Architecture/Linkis1.0-services-list.png       | Bin 85890 -> 0 bytes
 .../Architecture/PublicEnhencementArchitecture.png | Bin 47158 -> 0 bytes
 .../ContextService/linkis-contextservice-01.png    | Bin 22692 -> 0 bytes
 .../ContextService/linkis-contextservice-02.png    | Bin 10655 -> 0 bytes
 .../linkis-contextservice-cache-01.png             | Bin 11881 -> 0 bytes
 .../linkis-contextservice-cache-02.png             | Bin 23902 -> 0 bytes
 .../linkis-contextservice-cache-03.png             | Bin 109334 -> 0 bytes
 .../linkis-contextservice-cache-04.png             | Bin 36161 -> 0 bytes
 .../linkis-contextservice-cache-05.png             | Bin 2265 -> 0 bytes
 .../linkis-contextservice-client-01.png            | Bin 54438 -> 0 bytes
 .../linkis-contextservice-client-02.png            | Bin 93036 -> 0 bytes
 .../linkis-contextservice-client-03.png            | Bin 34839 -> 0 bytes
 .../ContextService/linkis-contextservice-ha-01.png | Bin 38439 -> 0 bytes
 .../ContextService/linkis-contextservice-ha-02.png | Bin 21982 -> 0 bytes
 .../ContextService/linkis-contextservice-ha-03.png | Bin 91788 -> 0 bytes
 .../ContextService/linkis-contextservice-ha-04.png | Bin 40733 -> 0 bytes
 .../linkis-contextservice-listener-01.png          | Bin 24414 -> 0 bytes
 .../linkis-contextservice-listener-02.png          | Bin 46152 -> 0 bytes
 .../linkis-contextservice-listener-03.png          | Bin 32597 -> 0 bytes
 .../linkis-contextservice-persistence-01.png       | Bin 198797 -> 0 bytes
 .../linkis-contextservice-search-01.png            | Bin 33731 -> 0 bytes
 .../linkis-contextservice-search-02.png            | Bin 26768 -> 0 bytes
 .../linkis-contextservice-search-03.png            | Bin 33312 -> 0 bytes
 .../linkis-contextservice-search-04.png            | Bin 25192 -> 0 bytes
 .../linkis-contextservice-search-05.png            | Bin 24757 -> 0 bytes
 .../linkis-contextservice-search-06.png            | Bin 29923 -> 0 bytes
 .../linkis-contextservice-search-07.png            | Bin 30013 -> 0 bytes
 .../linkis-contextservice-service-01.png           | Bin 56235 -> 0 bytes
 .../linkis-contextservice-service-02.png           | Bin 73463 -> 0 bytes
 .../linkis-contextservice-service-03.png           | Bin 23477 -> 0 bytes
 .../linkis-contextservice-service-04.png           | Bin 27387 -> 0 bytes
 .../en_US/Images/Architecture/bml-02.png           | Bin 55227 -> 0 bytes
 .../Architecture/linkis-engineConnPlugin-01.png    | Bin 21864 -> 0 bytes
 .../en_US/Images/Architecture/linkis-intro-01.png  | Bin 413878 -> 0 bytes
 .../en_US/Images/Architecture/linkis-intro-02.png  | Bin 355186 -> 0 bytes
 .../Architecture/linkis-microservice-gov-01.png    | Bin 109909 -> 0 bytes
 .../Architecture/linkis-microservice-gov-03.png    | Bin 83457 -> 0 bytes
 .../Architecture/linkis-publicService-01.png       | Bin 62443 -> 0 bytes
 .../en_US/Images/EngineUsage/hive-config.png       | Bin 86864 -> 0 bytes
 .../en_US/Images/EngineUsage/hive-run.png          | Bin 94294 -> 0 bytes
 .../en_US/Images/EngineUsage/jdbc-conf.png         | Bin 91609 -> 0 bytes
 .../en_US/Images/EngineUsage/jdbc-run.png          | Bin 56438 -> 0 bytes
 .../en_US/Images/EngineUsage/pyspakr-run.png       | Bin 124979 -> 0 bytes
 .../en_US/Images/EngineUsage/python-config.png     | Bin 92997 -> 0 bytes
 .../en_US/Images/EngineUsage/python-run.png        | Bin 89641 -> 0 bytes
 .../en_US/Images/EngineUsage/queue-set.png         | Bin 93935 -> 0 bytes
 .../en_US/Images/EngineUsage/scala-run.png         | Bin 125060 -> 0 bytes
 .../en_US/Images/EngineUsage/shell-run.png         | Bin 209553 -> 0 bytes
 .../en_US/Images/EngineUsage/spark-conf.png        | Bin 99930 -> 0 bytes
 .../en_US/Images/EngineUsage/sparksql-run.png      | Bin 121699 -> 0 bytes
 .../en_US/Images/EngineUsage/workflow.png          | Bin 151481 -> 0 bytes
 .../en_US/Images/Linkis_1.0_architecture.png       | Bin 316746 -> 0 bytes
 .../Images/Tuning_and_Troubleshooting/Q&A.png      | Bin 161638 -> 0 bytes
 .../Tuning_and_Troubleshooting/code-fix-01.png     | Bin 199523 -> 0 bytes
 .../Tuning_and_Troubleshooting/db-config-01.png    | Bin 391789 -> 0 bytes
 .../Tuning_and_Troubleshooting/db-config-02.png    | Bin 60334 -> 0 bytes
 .../Images/Tuning_and_Troubleshooting/debug-01.png | Bin 6168 -> 0 bytes
 .../Images/Tuning_and_Troubleshooting/debug-02.png | Bin 62496 -> 0 bytes
 .../Images/Tuning_and_Troubleshooting/debug-03.png | Bin 32875 -> 0 bytes
 .../Images/Tuning_and_Troubleshooting/debug-04.png | Bin 111758 -> 0 bytes
 .../Images/Tuning_and_Troubleshooting/debug-05.png | Bin 52040 -> 0 bytes
 .../Images/Tuning_and_Troubleshooting/debug-06.png | Bin 63668 -> 0 bytes
 .../Images/Tuning_and_Troubleshooting/debug-07.png | Bin 316176 -> 0 bytes
 .../Images/Tuning_and_Troubleshooting/debug-08.png | Bin 27722 -> 0 bytes
 .../Tuning_and_Troubleshooting/hive-config-01.png  | Bin 76327 -> 0 bytes
 .../linkis-exception-01.png                        | Bin 1199628 -> 0 bytes
 .../linkis-exception-02.png                        | Bin 1366293 -> 0 bytes
 .../linkis-exception-03.png                        | Bin 646836 -> 0 bytes
 .../linkis-exception-04.png                        | Bin 2965676 -> 0 bytes
 .../linkis-exception-05.png                        | Bin 454949 -> 0 bytes
 .../linkis-exception-06.png                        | Bin 869492 -> 0 bytes
 .../linkis-exception-07.png                        | Bin 2249882 -> 0 bytes
 .../linkis-exception-08.png                        | Bin 1191728 -> 0 bytes
 .../linkis-exception-09.png                        | Bin 1008341 -> 0 bytes
 .../linkis-exception-10.png                        | Bin 322110 -> 0 bytes
 .../Tuning_and_Troubleshooting/page-show-01.png    | Bin 115010 -> 0 bytes
 .../Tuning_and_Troubleshooting/page-show-02.png    | Bin 576911 -> 0 bytes
 .../Tuning_and_Troubleshooting/page-show-03.png    | Bin 654609 -> 0 bytes
 .../searching_keywords.png                         | Bin 102094 -> 0 bytes
 .../Tuning_and_Troubleshooting/shell-error-01.png  | Bin 74682 -> 0 bytes
 .../Tuning_and_Troubleshooting/shell-error-02.png  | Bin 330735 -> 0 bytes
 .../Tuning_and_Troubleshooting/shell-error-03.png  | Bin 1624375 -> 0 bytes
 .../Tuning_and_Troubleshooting/shell-error-04.png  | Bin 803920 -> 0 bytes
 .../Tuning_and_Troubleshooting/shell-error-05.png  | Bin 179543 -> 0 bytes
 .../Tunning_And_Troubleshooting/debug-01.png       | Bin 6168 -> 0 bytes
 .../Tunning_And_Troubleshooting/debug-02.png       | Bin 62496 -> 0 bytes
 .../Tunning_And_Troubleshooting/debug-03.png       | Bin 32875 -> 0 bytes
 .../Tunning_And_Troubleshooting/debug-04.png       | Bin 111758 -> 0 bytes
 .../Tunning_And_Troubleshooting/debug-05.png       | Bin 52040 -> 0 bytes
 .../Tunning_And_Troubleshooting/debug-06.png       | Bin 63668 -> 0 bytes
 .../Tunning_And_Troubleshooting/debug-07.png       | Bin 316176 -> 0 bytes
 .../Tunning_And_Troubleshooting/debug-08.png       | Bin 27722 -> 0 bytes
 .../deployment/Linkis1.0_combined_eureka.png       | Bin 134418 -> 0 bytes
 .../en_US/Images/wedatasphere_contact_01.png       | Bin 217762 -> 0 bytes
 .../en_US/Images/wedatasphere_stack_Linkis.png     | Bin 203466 -> 0 bytes
 .../Tuning_and_Troubleshooting/Configuration.md    | 217 -------
 .../en_US/Tuning_and_Troubleshooting/Q&A.md        | 255 --------
 .../en_US/Tuning_and_Troubleshooting/README.md     |  98 ----
 .../en_US/Tuning_and_Troubleshooting/Tuning.md     |  61 --
 .../Linkis_Upgrade_from_0.x_to_1.0_guide.md        |  73 ---
 .../en_US/Upgrade_Documents/README.md              |   5 -
 .../en_US/User_Manual/How_To_Use_Linkis.md         |  29 -
 .../en_US/User_Manual/Linkis1.0_User_Manual.md     | 400 -------------
 .../en_US/User_Manual/LinkisCli_Usage_document.md  | 191 ------
 .../User_Manual/Linkis_Console_User_Manual.md      | 120 ----
 Linkis-Doc-master/en_US/User_Manual/README.md      |   8 -
 ...\350\241\214RestAPI\346\226\207\346\241\243.md" | 171 ------
 .../zh_CN/API_Documentations/Login_API.md          | 131 -----
 .../zh_CN/API_Documentations/README.md             |   8 -
 ...350\241\214JDBC_API\346\226\207\346\241\243.md" |  46 --
 .../Commons/messagescheduler.md                    |  15 -
 .../zh_CN/Architecture_Documents/Commons/rpc.md    |  17 -
 .../EngineConn/README.md                           |  98 ----
 .../ECM\346\236\266\346\236\204\345\233\276.png"   | Bin 34340 -> 0 bytes
 ...57\267\346\261\202\346\265\201\347\250\213.png" | Bin 25340 -> 0 bytes
 .../EngineConnManager/README.md                    |  49 --
 .../EngineConnPlugin/README.md                     |  71 ---
 .../Entrance/Entrance.md                           |  26 -
 .../LinkisClient/README.md                         |  35 --
 .../LinkisManager/AppManager.md                    |  45 --
 .../LinkisManager/LabelManager.md                  |  40 --
 .../LinkisManager/README.md                        |  74 ---
 .../LinkisManager/ResourceManager.md               | 145 -----
 .../Computation_Governance_Services/README.md      |  66 ---
 ...226\260\345\242\236\346\265\201\347\250\213.md" | 111 ----
 ...211\247\350\241\214\346\265\201\347\250\213.md" | 165 ------
 ...214\272\345\210\253\347\256\200\350\277\260.md" |  98 ----
 .../Microservice_Governance_Services/Gateway.md    |  30 -
 .../Microservice_Governance_Services/README.md     |  23 -
 .../Computation_Orchestrator_architecture.md       |  18 -
 ...16\245\345\217\243\345\222\214\347\261\273.png" | Bin 27266 -> 0 bytes
 ...72\244\344\272\222\346\265\201\347\250\213.png" | Bin 30134 -> 0 bytes
 ...16\245\345\217\243\345\222\214\347\261\273.png" | Bin 162100 -> 0 bytes
 .../Orchestrator/Orchestrator_CheckRuler.md        |  27 -
 .../Orchestrator/Orchestrator_ECMP_architecture.md |  32 -
 .../Orchestrator_Execution_architecture_doc.md     |  19 -
 .../Orchestrator_Operation_architecture_doc.md     |  26 -
 .../Orchestrator_Reheater_architecture.md          |  12 -
 .../Orchestrator_Transform_architecture.md         |  12 -
 .../Orchestrator/Orchestrator_architecture_doc.md  | 113 ----
 .../Architecture_Documents/Orchestrator/README.md  |  55 --
 .../Public_Enhancement_Services/BML.md             |  94 ---
 .../ContextService/ContextService_Cache.md         |  95 ---
 .../ContextService/ContextService_Client.md        |  61 --
 .../ContextService/ContextService_HighAvailable.md |  86 ---
 .../ContextService/ContextService_Listener.md      |  33 --
 .../ContextService/ContextService_Persistence.md   |   8 -
 .../ContextService/ContextService_Search.md        | 127 ----
 .../ContextService/ContextService_Service.md       |  55 --
 .../ContextService/README.md                       | 124 ----
 .../Public_Enhancement_Services/DataSource.md      |   1 -
 .../Public_Enhancement_Services/PublicService.md   |  31 -
 .../Public_Enhancement_Services/README.md          |  91 ---
 .../zh_CN/Architecture_Documents/README.md         |  24 -
 .../Deployment_Documents/Cluster_Deployment.md     | 100 ----
 ...256\211\350\243\205\346\226\207\346\241\243.md" | 106 ----
 ...75\262\345\276\256\346\234\215\345\212\241.png" | Bin 130148 -> 0 bytes
 .../Installation_Hierarchical_Structure.md         | 186 ------
 .../zh_CN/Deployment_Documents/README.md           |   1 -
 ...256\211\350\243\205\346\226\207\346\241\243.md" | 110 ----
 ...51\200\237\351\203\250\347\275\262Linkis1.0.md" | 256 --------
 .../zh_CN/Development_Documents/Contributing.md    | 206 -------
 .../zh_CN/Development_Documents/DEBUG_LINKIS.md    | 113 ----
 .../Development_Specification/API.md               |  72 ---
 .../Development_Specification/Concurrent.md        |   9 -
 .../Development_Specification/Exception_Catch.md   |   9 -
 .../Development_Specification/Exception_Throws.md  |  30 -
 .../Development_Specification/Log.md               |  13 -
 .../Development_Specification/Path_Usage.md        |   8 -
 .../Development_Specification/README.md            |  12 -
 ...274\226\350\257\221\346\226\207\346\241\243.md" | 160 -----
 .../New_EngineConn_Development.md                  |  79 ---
 .../zh_CN/Development_Documents/README.md          |   1 -
 .../zh_CN/Development_Documents/Web/Build.md       |  84 ---
 .../zh_CN/Development_MEETUP/Phase_One/README.md   |  56 --
 .../zh_CN/Development_MEETUP/Phase_One/chapter1.md |   1 -
 .../zh_CN/Development_MEETUP/Phase_One/chapter2.md |   1 -
 .../Development_MEETUP/Phase_Two/Images/Q&A.png    | Bin 161638 -> 0 bytes
 .../Development_MEETUP/Phase_Two/Images/issue.png  | Bin 102094 -> 0 bytes
 .../Phase_Two/Images/\345\217\214\346\264\273.png" | Bin 130148 -> 0 bytes
 .../Images2/0ca28635de253f245743fbf0a7cfe165.png   | Bin 98316 -> 0 bytes
 .../Images2/146a58addcacbc560a33604b00636dee.png   | Bin 44890 -> 0 bytes
 .../Images2/1730acb1c4ff58a055fa71324e5c7f2c.png   | Bin 95491 -> 0 bytes
 .../Images2/1d31b398318acbd862f20ac05decbce9.png   | Bin 7741 -> 0 bytes
 .../Images2/1d8f043dae5afdf07371ad31b06bad6e.png   | Bin 74243 -> 0 bytes
 .../Images2/232983a712a949196159f0aeab7de7f5.png   | Bin 150575 -> 0 bytes
 .../Images2/2767bac623d10bf45033cf9fdd8d197f.png   | Bin 120905 -> 0 bytes
 .../Images2/335dabbf46b5af11e494cdd1be2c32a1.png   | Bin 118394 -> 0 bytes
 .../Images2/491e9a0fbd5b0121f228e0f7938cf168.png   | Bin 120419 -> 0 bytes
 .../Images2/781914abed8ec4955cac520eb0a1be7e.png   | Bin 770399 -> 0 bytes
 .../Images2/7b8685204636771776605bab99b08e8f.png   | Bin 82550 -> 0 bytes
 .../Images2/7cbe7cd81ce2212883741dd9b62dad18.png   | Bin 36588 -> 0 bytes
 .../Images2/8576fe8054c072a7fee53d98eeefa004.png   | Bin 39623 -> 0 bytes
 .../Images2/87ef54ccaa6b96abc30e612636bb2e90.png   | Bin 103943 -> 0 bytes
 .../Images2/9693ded0c6a9c32cb1ff33713e5d3864.png   | Bin 54885 -> 0 bytes
 .../Images2/9c254ec33125eb0ab50a6bcc0e95a18a.png   | Bin 145675 -> 0 bytes
 .../Images2/a0fb7e3474dff5c22fb3c230f73fa6f6.png   | Bin 55052 -> 0 bytes
 .../Images2/b68f441d7ac6b4814c048d35cebbb25d.png   | Bin 117177 -> 0 bytes
 .../Images2/b7feb36a0322b002f9f85f0a8003dcc1.png   | Bin 169905 -> 0 bytes
 .../Images2/ba90e28a78375103c4890cd448818ab3.png   | Bin 132653 -> 0 bytes
 .../Images2/c3f5ac1723ba9823084f529f5384440d.png   | Bin 21078 -> 0 bytes
 .../Images2/cd3ea323b238158c8a3de8acc8ec0a3f.png   | Bin 20051 -> 0 bytes
 .../Images2/d0fe37b4aa34b0cea9e87247b7b17943.png   | Bin 115496 -> 0 bytes
 .../Images2/d1b4759745056add53a32a76d3699109.png   | Bin 23378 -> 0 bytes
 .../Images2/d9bab9306cc28ecdf8d3679ecfc224d4.png   | Bin 97351 -> 0 bytes
 .../Images2/da0cf9cb7b27dac266435b5f6ad1cd82.png   | Bin 45877 -> 0 bytes
 .../Images2/de301f8f21c1735c5e018188d685ad74.png   | Bin 53369 -> 0 bytes
 .../Images2/e7e2a98ce1f03d228c7c2d782b076d53.png   | Bin 81483 -> 0 bytes
 .../Images2/f395c9cc338d85e258485658290bf365.png   | Bin 43688 -> 0 bytes
 .../Images2/f6fa083cab060a5adc9d483b37d040f5.png   | Bin 60331 -> 0 bytes
 .../Images2/fb952c266ce9a8db9b9036a602e222a7.png   | Bin 131953 -> 0 bytes
 .../zh_CN/Development_MEETUP/Phase_Two/README.md   |  58 --
 .../zh_CN/Development_MEETUP/Phase_Two/chapter1.md | 371 ------------
 .../zh_CN/Development_MEETUP/Phase_Two/chapter2.md | 251 --------
 .../zh_CN/Development_MEETUP/README.md             |   1 -
 .../ElasticSearch_User_Manual.md                   |   1 -
 .../Hive_User_Manual.md                            |  81 ---
 .../JDBC_User_Manual.md                            |  53 --
 .../MLSQL_User_Manual.md                           |   1 -
 .../Presto_User_Manual.md                          |   1 -
 .../Python_User_Manual.md                          |  61 --
 .../zh_CN/Engine_Usage_Documentations/README.md    |  25 -
 .../Shell_User_Manual.md                           |  57 --
 .../Spark_User_Manual.md                           |  91 ---
 .../zh_CN/Images/Architecture/AppManager-02.png    | Bin 701283 -> 0 bytes
 .../zh_CN/Images/Architecture/AppManager-03.png    | Bin 69489 -> 0 bytes
 .../Commons/linkis-message-scheduler.png           | Bin 26987 -> 0 bytes
 .../Images/Architecture/Commons/linkis-rpc.png     | Bin 23403 -> 0 bytes
 .../Architecture/EngineConn/engineconn-01.png      | Bin 157753 -> 0 bytes
 .../EngineConnPlugin/engine_conn_plugin_cycle.png  | Bin 49326 -> 0 bytes
 .../EngineConnPlugin/engine_conn_plugin_global.png | Bin 32292 -> 0 bytes
 .../EngineConnPlugin/engine_conn_plugin_load.png   | Bin 74821 -> 0 bytes
 ...26\260\345\242\236\346\265\201\347\250\213.png" | Bin 59893 -> 0 bytes
 .../Gateway/gateway_server_dispatcher.png          | Bin 83743 -> 0 bytes
 .../Architecture/Gateway/gateway_server_global.png | Bin 85272 -> 0 bytes
 .../Architecture/Gateway/gatway_websocket.png      | Bin 37769 -> 0 bytes
 .../Physical\346\240\221.png"                      | Bin 79471 -> 0 bytes
 ...56\265\346\265\201\347\250\213\345\233\276.png" | Bin 31078 -> 0 bytes
 ...56\265\346\265\201\347\250\213\345\233\276.png" | Bin 12946 -> 0 bytes
 ...16\267\345\217\226\346\265\201\347\250\213.png" | Bin 41007 -> 0 bytes
 ...16\222\346\265\201\347\250\213\345\233\276.png" | Bin 31095 -> 0 bytes
 ...75\223\346\265\201\347\250\213\345\233\276.png" | Bin 231192 -> 0 bytes
 .../LabelManager/label_manager_builder.png         | Bin 62978 -> 0 bytes
 .../LabelManager/label_manager_global.png          | Bin 14988 -> 0 bytes
 .../LabelManager/label_manager_scorer.png          | Bin 72977 -> 0 bytes
 .../Images/Architecture/Linkis1.0-architecture.png | Bin 221751 -> 0 bytes
 .../Architecture/LinkisManager/AppManager-01.png   | Bin 69489 -> 0 bytes
 .../Architecture/LinkisManager/LabelManager-01.png | Bin 39221 -> 0 bytes
 .../LinkisManager/LinkisManager-01.png             | Bin 183082 -> 0 bytes
 .../LinkisManager/ResourceManager-01.png           | Bin 71086 -> 0 bytes
 ...cement\346\236\266\346\236\204\345\233\276.png" | Bin 47158 -> 0 bytes
 .../ContextService/linkis-contextservice-01.png    | Bin 22692 -> 0 bytes
 .../ContextService/linkis-contextservice-02.png    | Bin 10655 -> 0 bytes
 .../linkis-contextservice-cache-01.png             | Bin 11881 -> 0 bytes
 .../linkis-contextservice-cache-02.png             | Bin 23902 -> 0 bytes
 .../linkis-contextservice-cache-03.png             | Bin 109334 -> 0 bytes
 .../linkis-contextservice-cache-04.png             | Bin 36161 -> 0 bytes
 .../linkis-contextservice-cache-05.png             | Bin 2265 -> 0 bytes
 .../linkis-contextservice-client-01.png            | Bin 54438 -> 0 bytes
 .../linkis-contextservice-client-02.png            | Bin 93036 -> 0 bytes
 .../linkis-contextservice-client-03.png            | Bin 34839 -> 0 bytes
 .../ContextService/linkis-contextservice-ha-01.png | Bin 38439 -> 0 bytes
 .../ContextService/linkis-contextservice-ha-02.png | Bin 21982 -> 0 bytes
 .../ContextService/linkis-contextservice-ha-03.png | Bin 91788 -> 0 bytes
 .../ContextService/linkis-contextservice-ha-04.png | Bin 40733 -> 0 bytes
 .../linkis-contextservice-listener-01.png          | Bin 24414 -> 0 bytes
 .../linkis-contextservice-listener-02.png          | Bin 46152 -> 0 bytes
 .../linkis-contextservice-listener-03.png          | Bin 32597 -> 0 bytes
 .../linkis-contextservice-persistence-01.png       | Bin 198797 -> 0 bytes
 .../linkis-contextservice-search-01.png            | Bin 33731 -> 0 bytes
 .../linkis-contextservice-search-02.png            | Bin 26768 -> 0 bytes
 .../linkis-contextservice-search-03.png            | Bin 33312 -> 0 bytes
 .../linkis-contextservice-search-04.png            | Bin 25192 -> 0 bytes
 .../linkis-contextservice-search-05.png            | Bin 24757 -> 0 bytes
 .../linkis-contextservice-search-06.png            | Bin 29923 -> 0 bytes
 .../linkis-contextservice-search-07.png            | Bin 30013 -> 0 bytes
 .../linkis-contextservice-service-01.png           | Bin 56235 -> 0 bytes
 .../linkis-contextservice-service-02.png           | Bin 73463 -> 0 bytes
 .../linkis-contextservice-service-03.png           | Bin 23477 -> 0 bytes
 .../linkis-contextservice-service-04.png           | Bin 27387 -> 0 bytes
 .../zh_CN/Images/Architecture/bml-01.png           | Bin 78801 -> 0 bytes
 .../zh_CN/Images/Architecture/bml-02.png           | Bin 55227 -> 0 bytes
 .../zh_CN/Images/Architecture/linkis-client-01.png | Bin 88633 -> 0 bytes
 .../Architecture/linkis-computation-gov-01.png     | Bin 89527 -> 0 bytes
 .../Architecture/linkis-computation-gov-02.png     | Bin 179368 -> 0 bytes
 .../Architecture/linkis-engineConnPlugin-01.png    | Bin 21864 -> 0 bytes
 .../Images/Architecture/linkis-entrance-01.png     | Bin 33102 -> 0 bytes
 .../zh_CN/Images/Architecture/linkis-intro-01.jpg  | Bin 341150 -> 0 bytes
 .../zh_CN/Images/Architecture/linkis-intro-02.jpg  | Bin 289769 -> 0 bytes
 .../Architecture/linkis-microservice-gov-01.png    | Bin 89404 -> 0 bytes
 .../Architecture/linkis-microservice-gov-03.png    | Bin 60074 -> 0 bytes
 .../linkis-computation-orchestrator-01.png         | Bin 53527 -> 0 bytes
 .../linkis-computation-orchestrator-02.png         | Bin 77543 -> 0 bytes
 .../orchestrator/execution/execution.png           | Bin 29487 -> 0 bytes
 .../orchestrator/execution/execution01.png         | Bin 55090 -> 0 bytes
 .../linkis_orchestrator_architecture.png           | Bin 51935 -> 0 bytes
 .../orchestrator/operation/operation_class.png     | Bin 36916 -> 0 bytes
 .../orchestrator/overall/Orchestrator01.png        | Bin 38900 -> 0 bytes
 .../orchestrator/overall/Orchestrator_Logical.png  | Bin 46510 -> 0 bytes
 .../orchestrator/overall/Orchestrator_Physical.png | Bin 52228 -> 0 bytes
 .../orchestrator/overall/Orchestrator_arc.png      | Bin 32345 -> 0 bytes
 .../orchestrator/overall/Orchestrator_ast.png      | Bin 24733 -> 0 bytes
 .../orchestrator/overall/Orchestrator_cache.png    | Bin 96643 -> 0 bytes
 .../orchestrator/overall/Orchestrator_command.png  | Bin 29349 -> 0 bytes
 .../overall/Orchestrator_computation.png           | Bin 64070 -> 0 bytes
 .../orchestrator/overall/Orchestrator_progress.png | Bin 92726 -> 0 bytes
 .../orchestrator/overall/Orchestrator_reheat.png   | Bin 82286 -> 0 bytes
 .../overall/Orchestrator_transication.png          | Bin 63174 -> 0 bytes
 .../orchestrator/overall/orchestrator_entity.png   | Bin 29307 -> 0 bytes
 .../reheater/linkis-orchestrator-reheater-01.png   | Bin 22631 -> 0 bytes
 .../transform/linkis-orchestrator-transform-01.png | Bin 21241 -> 0 bytes
 .../zh_CN/Images/Architecture/rm-01.png            | Bin 183082 -> 0 bytes
 .../zh_CN/Images/Architecture/rm-02.png            | Bin 71086 -> 0 bytes
 .../zh_CN/Images/Architecture/rm-03.png            | Bin 52466 -> 0 bytes
 .../zh_CN/Images/Architecture/rm-04.png            | Bin 36324 -> 0 bytes
 .../zh_CN/Images/Architecture/rm-05.png            | Bin 34066 -> 0 bytes
 .../zh_CN/Images/Architecture/rm-06.png            | Bin 44105 -> 0 bytes
 .../zh_CN/Images/EngineUsage/hive-config.png       | Bin 127024 -> 0 bytes
 .../zh_CN/Images/EngineUsage/hive-run.png          | Bin 94294 -> 0 bytes
 .../zh_CN/Images/EngineUsage/jdbc-conf.png         | Bin 128381 -> 0 bytes
 .../zh_CN/Images/EngineUsage/jdbc-run.png          | Bin 56438 -> 0 bytes
 .../zh_CN/Images/EngineUsage/pyspakr-run.png       | Bin 124979 -> 0 bytes
 .../zh_CN/Images/EngineUsage/python-config.png     | Bin 129842 -> 0 bytes
 .../zh_CN/Images/EngineUsage/python-run.png        | Bin 89641 -> 0 bytes
 .../zh_CN/Images/EngineUsage/queue-set.png         | Bin 115340 -> 0 bytes
 .../zh_CN/Images/EngineUsage/scala-run.png         | Bin 125060 -> 0 bytes
 .../zh_CN/Images/EngineUsage/shell-run.png         | Bin 209553 -> 0 bytes
 .../zh_CN/Images/EngineUsage/spark-conf.png        | Bin 178501 -> 0 bytes
 .../zh_CN/Images/EngineUsage/sparksql-run.png      | Bin 121699 -> 0 bytes
 .../zh_CN/Images/EngineUsage/workflow.png          | Bin 151481 -> 0 bytes
 .../zh_CN/Images/Introduction/introduction.png     | Bin 90686 -> 0 bytes
 .../Images/Tuning_and_Troubleshooting/Q&A.png      | Bin 161638 -> 0 bytes
 .../Tuning_and_Troubleshooting/code-fix-01.png     | Bin 199523 -> 0 bytes
 .../Tuning_and_Troubleshooting/db-config-01.png    | Bin 391789 -> 0 bytes
 .../Tuning_and_Troubleshooting/db-config-02.png    | Bin 60334 -> 0 bytes
 .../Images/Tuning_and_Troubleshooting/debug-01.png | Bin 6168 -> 0 bytes
 .../Images/Tuning_and_Troubleshooting/debug-02.png | Bin 62496 -> 0 bytes
 .../Images/Tuning_and_Troubleshooting/debug-03.png | Bin 32875 -> 0 bytes
 .../Images/Tuning_and_Troubleshooting/debug-04.png | Bin 111758 -> 0 bytes
 .../Images/Tuning_and_Troubleshooting/debug-05.png | Bin 52040 -> 0 bytes
 .../Images/Tuning_and_Troubleshooting/debug-06.png | Bin 63668 -> 0 bytes
 .../Images/Tuning_and_Troubleshooting/debug-07.png | Bin 316176 -> 0 bytes
 .../Images/Tuning_and_Troubleshooting/debug-08.png | Bin 27722 -> 0 bytes
 .../Tuning_and_Troubleshooting/hive-config-01.png  | Bin 76327 -> 0 bytes
 .../linkis-exception-01.png                        | Bin 1199628 -> 0 bytes
 .../linkis-exception-02.png                        | Bin 1366293 -> 0 bytes
 .../linkis-exception-03.png                        | Bin 646836 -> 0 bytes
 .../linkis-exception-04.png                        | Bin 2965676 -> 0 bytes
 .../linkis-exception-05.png                        | Bin 454949 -> 0 bytes
 .../linkis-exception-06.png                        | Bin 869492 -> 0 bytes
 .../linkis-exception-07.png                        | Bin 2249882 -> 0 bytes
 .../linkis-exception-08.png                        | Bin 1191728 -> 0 bytes
 .../linkis-exception-09.png                        | Bin 1008341 -> 0 bytes
 .../linkis-exception-10.png                        | Bin 322110 -> 0 bytes
 .../Tuning_and_Troubleshooting/page-show-01.png    | Bin 115010 -> 0 bytes
 .../Tuning_and_Troubleshooting/page-show-02.png    | Bin 576911 -> 0 bytes
 .../Tuning_and_Troubleshooting/page-show-03.png    | Bin 654609 -> 0 bytes
 .../searching_keywords.png                         | Bin 102094 -> 0 bytes
 .../Tuning_and_Troubleshooting/shell-error-01.png  | Bin 74682 -> 0 bytes
 .../Tuning_and_Troubleshooting/shell-error-02.png  | Bin 330735 -> 0 bytes
 .../Tuning_and_Troubleshooting/shell-error-03.png  | Bin 1624375 -> 0 bytes
 .../Tuning_and_Troubleshooting/shell-error-04.png  | Bin 803920 -> 0 bytes
 .../Tuning_and_Troubleshooting/shell-error-05.png  | Bin 179543 -> 0 bytes
 Linkis-Doc-master/zh_CN/Images/after_linkis_cn.png | Bin 645519 -> 0 bytes
 .../zh_CN/Images/before_linkis_cn.png              | Bin 332201 -> 0 bytes
 .../deployment/Linkis1.0_combined_eureka.png       | Bin 134418 -> 0 bytes
 Linkis-Doc-master/zh_CN/README.md                  |  87 ---
 Linkis-Doc-master/zh_CN/SUMMARY.md                 |  69 ---
 .../Tuning_and_Troubleshooting/Configuration.md    | 220 -------
 .../zh_CN/Tuning_and_Troubleshooting/Q&A.md        | 257 --------
 .../zh_CN/Tuning_and_Troubleshooting/README.md     | 112 ----
 .../zh_CN/Tuning_and_Troubleshooting/Tuning.md     |  50 --
 ...\247\345\210\2601.0\346\214\207\345\215\227.md" |  73 ---
 .../zh_CN/Upgrade_Documents/README.md              |   6 -
 .../zh_CN/User_Manual/How_To_Use_Linkis.md         |  20 -
 ...74\225\346\223\216\344\277\241\346\201\257.png" | Bin 89529 -> 0 bytes
 ...56\241\347\220\206\347\225\214\351\235\242.png" | Bin 43765 -> 0 bytes
 ...74\226\350\276\221\347\225\214\351\235\242.png" | Bin 64470 -> 0 bytes
 ...63\250\345\206\214\344\270\255\345\277\203.png" | Bin 327966 -> 0 bytes
 ...37\245\350\257\242\346\214\211\351\222\256.png" | Bin 81788 -> 0 bytes
 ...16\206\345\217\262\347\225\214\351\235\242.png" | Bin 82340 -> 0 bytes
 ...17\230\351\207\217\347\225\214\351\235\242.png" | Bin 40073 -> 0 bytes
 ...11\247\350\241\214\346\227\245\345\277\227.png" | Bin 114314 -> 0 bytes
 ...05\215\347\275\256\347\225\214\351\235\242.png" | Bin 79698 -> 0 bytes
 ...56\241\347\220\206\347\225\214\351\235\242.png" | Bin 39198 -> 0 bytes
 ...72\224\347\224\250\347\261\273\345\236\213.png" | Bin 108864 -> 0 bytes
 ...74\225\346\223\216\344\277\241\346\201\257.png" | Bin 41814 -> 0 bytes
 ...20\206\345\221\230\350\247\206\345\233\276.png" | Bin 80087 -> 0 bytes
 ...74\226\350\276\221\347\233\256\345\275\225.png" | Bin 89919 -> 0 bytes
 ...56\241\347\220\206\347\225\214\351\235\242.png" | Bin 49277 -> 0 bytes
 ...275\277\347\224\250\346\226\207\346\241\243.md" | 193 ------
 ...275\277\347\224\250\346\226\207\346\241\243.md" | 389 -------------
 .../User_Manual/Linkis_Console_User_Manual.md      | 120 ----
 Linkis-Doc-master/zh_CN/User_Manual/README.md      |   8 -
 README.md                                          |  17 -
 src/assets/user/360.png => assets/360.bc39c47a.png | Bin
 .../97\347\211\251\350\201\224.2447251c.png"       | Bin
 assets/AddEngineConn.467c2210.js                   |   1 +
 assets/CliManual.8440dc3f.js                       |   1 +
 assets/ConsoleUserManual.d2af8060.js               |   1 +
 assets/DifferenceBetween1.0&0.x.7e9c261e.js        |   1 +
 .../ECM_all_engine_information.4b4099f5.png        | Bin
 .../ECM_editing_interface.a82c51cd.png             | Bin
 .../ECM_management_interface.764982ae.png          | Bin
 assets/HowToUse.212b1469.js                        |   1 +
 assets/JobSubmission.cf4b12e7.js                   |   1 +
 .../Linkis0.X_newengine_architecture.76e9d9b8.png  | Bin
 .../Linkis0.X_services_list.984b5164.png           | Bin
 .../Linkis1.0_combined_eureka.dad2589e.png         | Bin
 .../Linkis1.0_engineconn_architecture.7d420481.png | Bin
 .../Linkis1.0_newengine_architecture.e98645d5.png  | Bin
 ...Linkis1.0_newengine_initialization.6acbb6c3.png | Bin
 .../Linkis1.0_services_list.72702c4a.png           | Bin
 "assets/T3\345\207\272\350\241\214.1738b528.png"   | Bin 0 -> 6413 bytes
 assets/UserManual.905b8e9a.js                      |   1 +
 .../add_an_engineConn_flow_chart.d10a8d14.png      | Bin
 .../administrator_view.7c4869c3.png                | Bin
 .../after_linkis_en.c3ed71bf.png                   | Bin
 .../before_linkis_en.076cf10c.png                  | Bin
 .../boss\347\233\264\350\201\230.5353720c.png"     | Bin
 ...ce_name_to_view_engine_information.9b608268.png | Bin
 .../code-fix-01.620f0486.png                       | Bin
 .../db-config-01.5aa0a782.png                      | Bin
 .../db-config-02.f05b1586.png                      | Bin
 .../description.95f7a296.png                       | Bin
 assets/distributed.6a61f64e.js                     |   1 +
 .../distributed_deployment.d533f7c3.png            | Bin
 assets/download.8c6e40f3.css                       |   1 +
 assets/download.c3e47cb5.js                        |   1 +
 .../edit_directory.410557fd.png                    | Bin
 assets/engins.2a41b1a0.js                          |   1 +
 .../eureka_registration_center.261760f0.png        | Bin
 assets/event.29571be3.js                           |   1 +
 .../execution.png => assets/execution.2d8c96b7.png | Bin
 .../global_history_interface.68d7d00e.png          | Bin
 .../global_history_query_button.c9058b17.png       | Bin
 .../global_variable_interface.734e4b18.png         | Bin
 .../hive-config-01.e5d22d71.png                    | Bin
 .../incubator-logo.c3572a91.png                    | Bin
 assets/index.07e7576a.css                          |   1 +
 assets/index.2da1dc18.js                           |   1 +
 assets/index.5a6d4e60.js                           |   1 +
 assets/index.77f4f836.css                          |   1 +
 assets/index.82f016e4.css                          |   1 +
 assets/index.8d1f9740.js                           |   1 +
 assets/index.c51fb506.js                           |   1 +
 assets/index.c93f08c9.js                           |   1 +
 .../linkis-exception-01.a30b0cae.png               | Bin
 .../linkis-exception-02.c5d295a9.png               | Bin
 .../linkis-exception-03.8fc2f10f.png               | Bin
 .../linkis-exception-04.bb6736c1.png               | Bin
 .../linkis-exception-05.9b7af564.png               | Bin
 .../linkis-exception-06.ecfa4a11.png               | Bin
 .../linkis-exception-07.a1f28559.png               | Bin
 .../linkis-exception-08.dcdf1ce1.png               | Bin
 .../linkis-exception-09.f06ff470.png               | Bin
 .../linkis-exception-10.49a3d1ba.png               | Bin
 assets/linkis.d0790396.js                          |   1 +
 src/assets/logo.png => assets/logo.fb11029b.png    | Bin
 assets/main.3104c8a7.js                            |   1 +
 .../microservice_management_interface.9a76ac41.png | Bin
 assets/mobtech.b333dc91.png                        | Bin 0 -> 11676 bytes
 .../new_application_type.90ca0c6b.png              | Bin
 .../orchestrate.b395b673.png                       | Bin
 .../overall.png => assets/overall.d0b560e6.png     | Bin
 .../page-show-01.f6ac5799.png                      | Bin
 .../page-show-02.9d59cdcb.png                      | Bin
 .../page-show-03.63498698.png                      | Bin
 .../parameter_configuration_interface.6160c166.png | Bin
 .../physical_tree.6d05f37c.png                     | Bin
 assets/plugin-vue_export-helper.5a098b48.js        |   1 +
 .../queue_set.png => assets/queue_set.349ccfa6.png | Bin
 .../resource_management_interface.1334783f.png     | Bin
 .../result_acquisition.ccd9e593.png                | Bin
 .../shell-error-01.2e9d62b8.png                    | Bin
 .../shell-error-02.fba39b7b.png                    | Bin
 .../shell-error-03.666f92e3.png                    | Bin
 .../shell-error-04.910b89a7.png                    | Bin
 .../shell-error-05.f4057bcc.png                    | Bin
 .../sparksql_run.115bb5a7.png                      | Bin
 assets/structure.1bc4dbfc.js                       |   1 +
 .../submission.22e30fbd.png                        | Bin
 ...ask_execution_log_of_a_single_task.cf40fba8.png | Bin
 assets/team.13ce5e55.css                           |   1 +
 assets/team.c0178c87.js                            |   1 +
 assets/utils.7ca2fb6d.js                           |   1 +
 assets/vendor.12a5b039.js                          |  21 +
 .../workflow.png => assets/workflow.4526f490.png   | Bin
 ...4\270\234\346\226\271\351\200\232.4814e53c.png" | Bin
 ...5\275\251\347\247\221\346\212\200.d1ffcc7d.png" | Bin
 ...5\233\275\347\224\265\347\247\221.864feafc.jpg" | Bin
 ...4\277\241\346\234\215\345\212\241.6242b949.png" | Bin 0 -> 13177 bytes
 ...1\200\232\344\272\221\344\273\223.a785e23f.png" | Bin
 ...5\256\236\351\252\214\345\256\244.46d52eec.png" | Bin 0 -> 11054 bytes
 ...5\276\222\347\247\221\346\212\200.d6b063f3.png" | Bin
 .../\344\276\235\345\233\276.e1935876.png"         | Bin
 ...6\212\200\345\244\247\345\255\246.79502b9d.jpg" | Bin
 ...5\223\227\345\225\246\345\225\246.045c3b9e.jpg" | Bin
 ...5\244\226\345\220\214\345\255\246.9c81d026.png" | Bin
 ...5\244\251\347\277\274\344\272\221.ee336756.png" | Bin
 .../\345\271\263\345\256\211.d0212a59.png"         | Bin
 ...5\244\247\346\225\260\346\215\256.d21c18fc.png" | Bin 0 -> 7862 bytes
 ...1\231\220\345\205\254\345\217\270.66cf4318.png" | Bin
 ...1\255\202\347\275\221\347\273\234.3ec071b8.png" | Bin
 ...5\255\220\345\210\206\346\234\237.55aa406b.png" | Bin
 ...5\272\267\345\250\201\350\247\206.70f8122b.png" | Bin
 ...6\203\263\346\261\275\350\275\246.0123a918.png" | Bin
 ...7\231\276\346\234\233\344\272\221.c2c1293f.png" | Bin
 ...5\210\233\345\225\206\345\237\216.294fde8b.png" | Bin
 ...0\261\241\344\272\221\350\205\276.7417b5e6.png" | Bin
 ...5\210\233\346\231\272\350\236\215.188edcec.png" | Bin
 ...5\244\251\344\277\241\346\201\257.23b0d23c.png" | Bin
 ...4\275\263\347\224\237\346\264\273.b508c1dc.jpg" | Bin
 "assets/\350\215\243\350\200\200.ceda8b1e.png"     | Bin 0 -> 7780 bytes
 ...6\221\251\350\200\266\344\272\221.63ed5828.png" | Bin 0 -> 19705 bytes
 ...6\235\245\346\261\275\350\275\246.be672a01.jpg" | Bin
 ...6\212\200\345\244\247\345\255\246.3762b76e.jpg" | Bin
 ...7\202\271\350\275\257\344\273\266.389df8d5.png" | Bin
 favicon.ico                                        | Bin 0 -> 1595 bytes
 index.html                                         |   5 +-
 info.txt                                           |   5 -
 package-lock.json                                  | 647 ---------------------
 package.json                                       |  21 -
 public/favicon.ico                                 | Bin 4286 -> 0 bytes
 src/App.vue                                        | 249 --------
 src/assets/docs/EngineUsage/hive-config.png        | Bin 44717 -> 0 bytes
 src/assets/docs/EngineUsage/hive-run.png           | Bin 31403 -> 0 bytes
 src/assets/docs/EngineUsage/jdbc-conf.png          | Bin 46113 -> 0 bytes
 src/assets/docs/EngineUsage/jdbc-run.png           | Bin 21937 -> 0 bytes
 src/assets/docs/EngineUsage/pyspakr-run.png        | Bin 43552 -> 0 bytes
 src/assets/docs/EngineUsage/python-config.png      | Bin 47021 -> 0 bytes
 src/assets/docs/EngineUsage/python-run.png         | Bin 61451 -> 0 bytes
 src/assets/docs/EngineUsage/queue-set.png          | Bin 41298 -> 0 bytes
 src/assets/docs/EngineUsage/scala-run.png          | Bin 43959 -> 0 bytes
 src/assets/docs/EngineUsage/shell-run.png          | Bin 100312 -> 0 bytes
 src/assets/docs/EngineUsage/spark-conf.png         | Bin 53397 -> 0 bytes
 src/assets/docs/EngineUsage/sparksql-run.png       | Bin 46611 -> 0 bytes
 src/assets/docs/EngineUsage/workflow.png           | Bin 51259 -> 0 bytes
 src/assets/docs/Linkis_1.0_architecture.png        | Bin 316746 -> 0 bytes
 src/assets/docs/Tuning_and_Troubleshooting/Q&A.png | Bin 72259 -> 0 bytes
 .../Tuning_and_Troubleshooting/code-fix-01.png     | Bin 61855 -> 0 bytes
 .../Tuning_and_Troubleshooting/db-config-01.png    | Bin 157843 -> 0 bytes
 .../Tuning_and_Troubleshooting/db-config-02.png    | Bin 22153 -> 0 bytes
 .../docs/Tuning_and_Troubleshooting/debug-01.png   | Bin 3258 -> 0 bytes
 .../docs/Tuning_and_Troubleshooting/debug-02.png   | Bin 25521 -> 0 bytes
 .../docs/Tuning_and_Troubleshooting/debug-03.png   | Bin 14953 -> 0 bytes
 .../docs/Tuning_and_Troubleshooting/debug-04.png   | Bin 34622 -> 0 bytes
 .../docs/Tuning_and_Troubleshooting/debug-05.png   | Bin 20848 -> 0 bytes
 .../docs/Tuning_and_Troubleshooting/debug-06.png   | Bin 25477 -> 0 bytes
 .../docs/Tuning_and_Troubleshooting/debug-07.png   | Bin 113342 -> 0 bytes
 .../docs/Tuning_and_Troubleshooting/debug-08.png   | Bin 12338 -> 0 bytes
 .../Tuning_and_Troubleshooting/hive-config-01.png  | Bin 27332 -> 0 bytes
 .../linkis-exception-01.png                        | Bin 457236 -> 0 bytes
 .../linkis-exception-02.png                        | Bin 524390 -> 0 bytes
 .../linkis-exception-03.png                        | Bin 264782 -> 0 bytes
 .../linkis-exception-04.png                        | Bin 1014902 -> 0 bytes
 .../linkis-exception-05.png                        | Bin 207746 -> 0 bytes
 .../linkis-exception-06.png                        | Bin 348016 -> 0 bytes
 .../linkis-exception-07.png                        | Bin 842448 -> 0 bytes
 .../linkis-exception-08.png                        | Bin 499442 -> 0 bytes
 .../linkis-exception-09.png                        | Bin 442648 -> 0 bytes
 .../linkis-exception-10.png                        | Bin 149801 -> 0 bytes
 .../Tuning_and_Troubleshooting/page-show-01.png    | Bin 39986 -> 0 bytes
 .../Tuning_and_Troubleshooting/page-show-02.png    | Bin 220102 -> 0 bytes
 .../Tuning_and_Troubleshooting/page-show-03.png    | Bin 230234 -> 0 bytes
 .../searching_keywords.png                         | Bin 53652 -> 0 bytes
 .../Tuning_and_Troubleshooting/shell-error-01.png  | Bin 30629 -> 0 bytes
 .../Tuning_and_Troubleshooting/shell-error-02.png  | Bin 117077 -> 0 bytes
 .../Tuning_and_Troubleshooting/shell-error-03.png  | Bin 516777 -> 0 bytes
 .../Tuning_and_Troubleshooting/shell-error-04.png  | Bin 318990 -> 0 bytes
 .../Tuning_and_Troubleshooting/shell-error-05.png  | Bin 60031 -> 0 bytes
 .../docs/Tunning_And_Troubleshooting/debug-01.png  | Bin 3258 -> 0 bytes
 .../docs/Tunning_And_Troubleshooting/debug-02.png  | Bin 25521 -> 0 bytes
 .../docs/Tunning_And_Troubleshooting/debug-03.png  | Bin 14953 -> 0 bytes
 .../docs/Tunning_And_Troubleshooting/debug-04.png  | Bin 34622 -> 0 bytes
 .../docs/Tunning_And_Troubleshooting/debug-05.png  | Bin 20848 -> 0 bytes
 .../docs/Tunning_And_Troubleshooting/debug-06.png  | Bin 25477 -> 0 bytes
 .../docs/Tunning_And_Troubleshooting/debug-07.png  | Bin 113342 -> 0 bytes
 .../docs/Tunning_And_Troubleshooting/debug-08.png  | Bin 12338 -> 0 bytes
 .../add_an_EngineConn_flow_chart.png               | Bin 59893 -> 0 bytes
 .../docs/architecture/EngineConn/engineconn-01.png | Bin 157753 -> 0 bytes
 .../Gateway/gateway_server_dispatcher.png          | Bin 47910 -> 0 bytes
 .../architecture/Gateway/gateway_server_global.png | Bin 36652 -> 0 bytes
 .../docs/architecture/Gateway/gatway_websocket.png | Bin 16292 -> 0 bytes
 .../LabelManager/label_manager_builder.png         | Bin 62978 -> 0 bytes
 .../LabelManager/label_manager_global.png          | Bin 14988 -> 0 bytes
 .../LabelManager/label_manager_scorer.png          | Bin 72977 -> 0 bytes
 .../docs/architecture/Linkis1.0_architecture.png   | Bin 72168 -> 0 bytes
 .../ContextService/linkis-contextservice-01.png    | Bin 9188 -> 0 bytes
 .../ContextService/linkis-contextservice-02.png    | Bin 4953 -> 0 bytes
 .../linkis-contextservice-cache-01.png             | Bin 5500 -> 0 bytes
 .../linkis-contextservice-cache-02.png             | Bin 11546 -> 0 bytes
 .../linkis-contextservice-cache-03.png             | Bin 53416 -> 0 bytes
 .../linkis-contextservice-cache-04.png             | Bin 15785 -> 0 bytes
 .../linkis-contextservice-cache-05.png             | Bin 1488 -> 0 bytes
 .../linkis-contextservice-client-01.png            | Bin 18839 -> 0 bytes
 .../linkis-contextservice-client-02.png            | Bin 30023 -> 0 bytes
 .../linkis-contextservice-client-03.png            | Bin 11690 -> 0 bytes
 .../ContextService/linkis-contextservice-ha-01.png | Bin 17605 -> 0 bytes
 .../ContextService/linkis-contextservice-ha-02.png | Bin 10781 -> 0 bytes
 .../ContextService/linkis-contextservice-ha-03.png | Bin 41714 -> 0 bytes
 .../ContextService/linkis-contextservice-ha-04.png | Bin 17550 -> 0 bytes
 .../linkis-contextservice-listener-01.png          | Bin 14209 -> 0 bytes
 .../linkis-contextservice-listener-02.png          | Bin 21055 -> 0 bytes
 .../linkis-contextservice-listener-03.png          | Bin 17902 -> 0 bytes
 .../linkis-contextservice-persistence-01.png       | Bin 107735 -> 0 bytes
 .../linkis-contextservice-search-01.png            | Bin 11874 -> 0 bytes
 .../linkis-contextservice-search-02.png            | Bin 8266 -> 0 bytes
 .../linkis-contextservice-search-03.png            | Bin 11321 -> 0 bytes
 .../linkis-contextservice-search-04.png            | Bin 9101 -> 0 bytes
 .../linkis-contextservice-search-05.png            | Bin 9133 -> 0 bytes
 .../linkis-contextservice-search-06.png            | Bin 11334 -> 0 bytes
 .../linkis-contextservice-search-07.png            | Bin 11391 -> 0 bytes
 .../linkis-contextservice-service-01.png           | Bin 27470 -> 0 bytes
 .../linkis-contextservice-service-02.png           | Bin 37730 -> 0 bytes
 .../linkis-contextservice-service-03.png           | Bin 12269 -> 0 bytes
 .../linkis-contextservice-service-04.png           | Bin 13462 -> 0 bytes
 src/assets/docs/architecture/bml_02.png            | Bin 55227 -> 0 bytes
 .../architecture/linkis_engineconnplugin_01.png    | Bin 8146 -> 0 bytes
 src/assets/docs/architecture/linkis_intro_01.png   | Bin 142195 -> 0 bytes
 src/assets/docs/architecture/linkis_intro_02.png   | Bin 102080 -> 0 bytes
 .../architecture/linkis_microservice_gov_01.png    | Bin 46380 -> 0 bytes
 .../architecture/linkis_microservice_gov_03.png    | Bin 30388 -> 0 bytes
 .../docs/architecture/linkis_publicservice_01.png  | Bin 25269 -> 0 bytes
 .../publicenhencement_architecture.png             | Bin 24844 -> 0 bytes
 .../docs/deploy/Linkis1.0_combined_eureka.png      | Bin 55811 -> 0 bytes
 src/assets/docs/wedatasphere_contact_01.png        | Bin 217762 -> 0 bytes
 src/assets/docs/wedatasphere_stack_Linkis.png      | Bin 203466 -> 0 bytes
 src/assets/fqa/Q&A.png                             | Bin 72259 -> 0 bytes
 src/assets/fqa/debug-01.png                        | Bin 3258 -> 0 bytes
 src/assets/fqa/debug-02.png                        | Bin 25521 -> 0 bytes
 src/assets/fqa/debug-03.png                        | Bin 14953 -> 0 bytes
 src/assets/fqa/debug-04.png                        | Bin 34622 -> 0 bytes
 src/assets/fqa/debug-05.png                        | Bin 20848 -> 0 bytes
 src/assets/fqa/debug-06.png                        | Bin 25477 -> 0 bytes
 src/assets/fqa/debug-07.png                        | Bin 113342 -> 0 bytes
 src/assets/fqa/debug-08.png                        | Bin 12338 -> 0 bytes
 src/assets/fqa/searching_keywords.png              | Bin 53652 -> 0 bytes
 src/assets/home/after_linkis_zh.png                | Bin 188079 -> 0 bytes
 src/assets/home/before_linkis_zh.png               | Bin 101665 -> 0 bytes
 src/assets/image/github_user.png                   | Bin 4677 -> 0 bytes
 "src/assets/user/T3\345\207\272\350\241\214.png"   | Bin 7258 -> 0 bytes
 src/assets/user/mobtech..png                       | Bin 1829 -> 0 bytes
 ...70\207\347\247\221\351\207\207\347\255\221.png" | Bin 2468 -> 0 bytes
 ...60\221\347\224\237\351\223\266\350\241\214.jpg" | Bin 16640 -> 0 bytes
 ...70\255\345\233\275\347\224\265\344\277\241.png" | Bin 6468 -> 0 bytes
 ...34\211\351\231\220\345\205\254\345\217\270.png" | Bin 10006 -> 0 bytes
 ...61\237\345\256\236\351\252\214\345\256\244.png" | Bin 13145 -> 0 bytes
 ...72\244\351\200\232\351\223\266\350\241\214.jpg" | Bin 8099 -> 0 bytes
 ...72\254\344\270\234\346\225\260\347\247\221.jpg" | Bin 7895 -> 0 bytes
 ...77\241\347\224\250\347\224\237\346\264\273.png" | Bin 3978 -> 0 bytes
 ...14\273\344\277\235\347\247\221\346\212\200.png" | Bin 2083 -> 0 bytes
 ...72\221\345\276\231\347\247\221\346\212\200.png" | Bin 15448 -> 0 bytes
 ...03\275\345\244\247\346\225\260\346\215\256.png" | Bin 13462 -> 0 bytes
 ...13\233\345\225\206\351\223\266\350\241\214.jpg" | Bin 10462 -> 0 bytes
 ...31\276\344\277\241\351\223\266\350\241\214.jpg" | Bin 6739 -> 0 bytes
 ...76\216\345\233\242\347\202\271\350\257\204.jpg" | Bin 10596 -> 0 bytes
 ...05\276\350\256\257\350\264\242\347\273\217.jpg" | Bin 14500 -> 0 bytes
 ...20\250\346\221\251\350\200\266\344\272\221.png" | Bin 10090 -> 0 bytes
 ...02\256\346\224\277\351\223\266\350\241\214.jpg" | Bin 14657 -> 0 bytes
 src/components/HelloWorld.vue                      |  40 --
 src/docs/architecture/AddEngineConn_en.md          | 105 ----
 src/docs/architecture/AddEngineConn_zh.md          | 111 ----
 .../architecture/DifferenceBetween1.0&0.x_en.md    |  50 --
 .../architecture/DifferenceBetween1.0&0.x_zh.md    |  98 ----
 src/docs/architecture/JobSubmission_en.md          | 138 -----
 src/docs/architecture/JobSubmission_zh.md          | 165 ------
 src/docs/deploy/distributed_en.md                  |  98 ----
 src/docs/deploy/distributed_zh.md                  | 100 ----
 src/docs/deploy/engins_en.md                       |  82 ---
 src/docs/deploy/engins_zh.md                       | 106 ----
 src/docs/deploy/linkis_en.md                       | 246 --------
 src/docs/deploy/linkis_zh.md                       | 256 --------
 src/docs/deploy/main_en.md                         |   1 -
 src/docs/deploy/main_zh.md                         |   1 -
 src/docs/deploy/structure_en.md                    | 198 -------
 src/docs/deploy/structure_zh.md                    | 186 ------
 src/docs/manual/CliManual_en.md                    | 193 ------
 src/docs/manual/CliManual_zh.md                    | 193 ------
 src/docs/manual/ConsoleUserManual_en.md            | 120 ----
 src/docs/manual/ConsoleUserManual_zh.md            | 120 ----
 src/docs/manual/HowToUse_en.md                     |  28 -
 src/docs/manual/HowToUse_zh.md                     |  20 -
 src/docs/manual/UserManual_en.md                   | 400 -------------
 src/docs/manual/UserManual_zh.md                   | 389 -------------
 src/i18n/en.json                                   |  64 --
 src/i18n/index.js                                  |  48 --
 src/i18n/zh.json                                   |  63 --
 src/js/config.js                                   |   9 -
 src/js/utils.js                                    |  10 -
 src/main.js                                        |  21 -
 src/pages/blog/AddEngineConn_en.md                 | 105 ----
 src/pages/blog/AddEngineConn_zh.md                 | 111 ----
 src/pages/blog/blogdata_en.js                      |  13 -
 src/pages/blog/blogdata_zh.js                      |  13 -
 src/pages/blog/event.vue                           |  38 --
 src/pages/blog/index.vue                           |  64 --
 src/pages/docs/architecture/AddEngineConn.vue      |  13 -
 .../docs/architecture/DifferenceBetween1.0&0.x.vue |  13 -
 src/pages/docs/architecture/JobSubmission.vue      |  13 -
 src/pages/docs/deploy/distributed.vue              |  13 -
 src/pages/docs/deploy/engins.vue                   |  13 -
 src/pages/docs/deploy/linkis.vue                   |  13 -
 src/pages/docs/deploy/main.vue                     |  13 -
 src/pages/docs/deploy/structure.vue                |  13 -
 src/pages/docs/docsdata_en.js                      |  62 --
 src/pages/docs/docsdata_zh.js                      |  62 --
 src/pages/docs/index.vue                           | 105 ----
 src/pages/docs/manual/CliManual.vue                |  13 -
 src/pages/docs/manual/ConsoleUserManual.vue        |  13 -
 src/pages/docs/manual/HowToUse.vue                 |  13 -
 src/pages/docs/manual/UserManual.vue               |  13 -
 src/pages/download.vue                             |  64 --
 src/pages/faq/faq_en.md                            | 255 --------
 src/pages/faq/faq_zh.md                            | 257 --------
 src/pages/faq/index.vue                            |  46 --
 src/pages/home/data.js                             | 585 -------------------
 src/pages/home/img.js                              |  50 --
 src/pages/home/index.vue                           | 232 --------
 src/pages/team/team.vue                            | 124 ----
 src/pages/team/teamdata_en.js                      | 130 -----
 src/pages/team/teamdata_zh.js                      | 130 -----
 src/router.js                                      |  91 ---
 src/style/base.less                                | 146 -----
 src/style/variable.less                            |   2 -
 vite.config.js                                     |  16 -
 804 files changed, 52 insertions(+), 19990 deletions(-)

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@linkis.apache.org
For additional commands, e-mail: commits-help@linkis.apache.org


[incubator-linkis-website] 23/50: ADD: Add the team page

Posted by pe...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

peacewong pushed a commit to branch asf-staging
in repository https://gitbox.apache.org/repos/asf/incubator-linkis-website.git

commit 1a19cdf159ca4dc0e5d1ce8bf19861bd2fddee10
Author: lucaszhu <lu...@webank.com>
AuthorDate: Wed Oct 13 11:33:55 2021 +0800

    ADD: Add the team page
---
 src/pages/team.vue | 194 ++++++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 193 insertions(+), 1 deletion(-)

diff --git a/src/pages/team.vue b/src/pages/team.vue
index e98fedf..65b40c8 100644
--- a/src/pages/team.vue
+++ b/src/pages/team.vue
@@ -1,3 +1,195 @@
 <template>
-  <div>team</div>
+  <div class="ctn-block team-page">
+    <h3 class="team-title">PMC</h3>
+    <p class="team-desc">Use the links below to download the Apache Linkis (Incubating) Releases. See all Linkis releases in Github release page.</p>
+    <ul class="character-list">
+      <li class="character-item text-center">
+        <div class="character-avatar"></div>
+        <div class="character-desc">
+          <h3 class="character-name">lululu</h3>
+          <a href="" class="character-link">@lululu</a>
+        </div>
+      </li>
+      <li class="character-item text-center">
+        <div class="character-avatar"></div>
+        <div class="character-desc">
+          <h3 class="character-name">lululu</h3>
+          <a href="" class="character-link">@lululu</a>
+        </div>
+      </li>
+      <li class="character-item text-center">
+        <div class="character-avatar"></div>
+        <div class="character-desc">
+          <h3 class="character-name">lululu</h3>
+          <a href="" class="character-link">@lululu</a>
+        </div>
+      </li>
+      <li class="character-item text-center">
+        <div class="character-avatar"></div>
+        <div class="character-desc">
+          <h3 class="character-name">lululu</h3>
+          <a href="" class="character-link">@lululu</a>
+        </div>
+      </li>
+      <li class="character-item text-center">
+        <div class="character-avatar"></div>
+        <div class="character-desc">
+          <h3 class="character-name">lululu</h3>
+          <a href="" class="character-link">@lululu</a>
+        </div>
+      </li>
+      <li class="character-item text-center">
+        <div class="character-avatar"></div>
+        <div class="character-desc">
+          <h3 class="character-name">lululu</h3>
+          <a href="" class="character-link">@lululu</a>
+        </div>
+      </li>
+      <li class="character-item text-center">
+        <div class="character-avatar"></div>
+        <div class="character-desc">
+          <h3 class="character-name">lululu</h3>
+          <a href="" class="character-link">@lululu</a>
+        </div>
+      </li>
+    </ul>
+    <h3 class="team-title">Committer</h3>
+    <p class="team-desc">Use the links below to download the Apache Linkis (Incubating) Releases. See all Linkis releases in Github release page.</p>
+    <ul class="character-list committer">
+      <li class="character-item">
+        <div class="character-avatar"></div>
+        <div class="character-desc">
+          <h3 class="character-name">lululu</h3>
+          <a href="" class="character-link">@lululu</a>
+        </div>
+      </li>
+      <li class="character-item">
+        <div class="character-avatar"></div>
+        <div class="character-desc">
+          <h3 class="character-name">lululu</h3>
+          <a href="" class="character-link">@lululu</a>
+        </div>
+      </li>
+      <li class="character-item">
+        <div class="character-avatar"></div>
+        <div class="character-desc">
+          <h3 class="character-name">lululu</h3>
+          <a href="" class="character-link">@lululu</a>
+        </div>
+      </li>
+      <li class="character-item">
+        <div class="character-avatar"></div>
+        <div class="character-desc">
+          <h3 class="character-name">lululu</h3>
+          <a href="" class="character-link">@lululu</a>
+        </div>
+      </li>
+      <li class="character-item">
+        <div class="character-avatar"></div>
+        <div class="character-desc">
+          <h3 class="character-name">lululu</h3>
+          <a href="" class="character-link">@lululu</a>
+        </div>
+      </li>
+      <li class="character-item">
+        <div class="character-avatar"></div>
+        <div class="character-desc">
+          <h3 class="character-name">lululu</h3>
+          <a href="" class="character-link">@lululu</a>
+        </div>
+      </li>
+    </ul>
+    <h3 class="team-title">Contributors</h3>
+    <p class="team-desc">Use the links below to download the Apache Linkis (Incubating) Releases. See all Linkis releases in Github release page.</p>
+    <ul class="contributor-list">
+      <li class="contributor-item">apache/apisix-go-plugin-runner</li>
+    </ul>
+  </div>
 </template>
+<style lang="less" scoped>
+@import url('/src/style/variable.less');
+.team-page{
+  padding-top: 60px;
+  .team-title{
+    font-size: 24px;
+    line-height: 34px;
+  }
+  .team-desc{
+    color: @enhance-color;
+    font-weight: 400;
+  }
+  .contributor-list{
+    padding: 20px 0 40px;
+    .contributor-item{
+      display: inline-block;
+      margin-right: 20px;
+      margin-bottom: 20px;
+      padding: 16px 16px 16px 48px;
+      background-size: 24px;
+      background-position: 16px center;
+      background-repeat: no-repeat;
+      color: @enhance-color;
+      border: 1px solid rgba(15,18,34,0.20);
+      border-radius: 4px;
+      &:last-child{
+        margin-right: 0;
+      }
+    }
+  }
+  .character-list {
+    display: grid;
+    grid-template-columns: repeat(6, 1fr);
+    grid-column-gap: 20px;
+    grid-row-gap: 20px;
+    padding: 20px 0 60px;
+    &.committer{
+      grid-template-columns: repeat(5, 224px);
+      .character-item{
+        display: flex;
+        padding: 20px;
+        align-items: center;
+        .character-avatar{
+          width: 60px;
+          height: 60px;
+          margin: 0;
+        }
+        .character-desc{
+          flex: 1;
+          padding-left: 16px;
+          min-width: 0;
+        }
+      }
+    }
+    .character-item{
+      border: 1px solid rgba(15,18,34,0.20);
+      border-radius: 4px;
+      // helper to handle text overflow
+      min-width: 0;
+      padding: 0 20px;
+      .character-avatar{
+        width: 120px;
+        height: 120px;
+        margin: 30px auto 10px;
+        background: #D8D8D8;
+        border-radius: 50%;
+      }
+      .character-name{
+        color: @enhance-color;
+        line-height: 24px;
+        font-size: 16px;
+        white-space: nowrap;
+        overflow: hidden;
+        text-overflow: ellipsis;
+      }
+      .character-link{
+        color: rgba(15,18,34,0.65);
+        font-weight: 400;
+        white-space: nowrap;
+        overflow: hidden;
+        text-overflow: ellipsis;
+      }
+    }
+  }
+}
+</style>
+
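Note on the template committed above: it repeats the same hardcoded placeholder card ("lululu") for every PMC member and committer. A data-driven alternative would render one card per entry with v-for; the sketch below assumes a hypothetical members array (the teamdata_en.js / teamdata_zh.js files listed elsewhere in this digest suggest the data was later externalized this way):

    <template>
      <ul class="character-list">
        <!-- One card per member instead of seven copies of the same markup -->
        <li v-for="m in members" :key="m.github" class="character-item text-center">
          <div class="character-avatar"></div>
          <div class="character-desc">
            <h3 class="character-name">{{ m.name }}</h3>
            <a :href="'https://github.com/' + m.github" class="character-link">@{{ m.github }}</a>
          </div>
        </li>
      </ul>
    </template>
    <script setup>
    // Hypothetical data source; the name and github fields are placeholders
    const members = [
      { name: 'lululu', github: 'lululu' },
    ]
    </script>

The :key binding keeps Vue's list diffing stable when members are added or removed.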

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@linkis.apache.org
For additional commands, e-mail: commits-help@linkis.apache.org


[incubator-linkis-website] 10/50: UPDATE: Optimize the homepage button styles

Posted by pe...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

peacewong pushed a commit to branch asf-staging
in repository https://gitbox.apache.org/repos/asf/incubator-linkis-website.git

commit 4b22094d62b6a218809a67d38cff08d7519f19c3
Author: lucaszhu <lu...@webank.com>
AuthorDate: Thu Sep 30 15:20:00 2021 +0800

    UPDATE: Optimize the homepage button styles
---
 src/pages/home.vue | 53 +++++++++++++++++++++++++++++++----------------------
 1 file changed, 31 insertions(+), 22 deletions(-)

diff --git a/src/pages/home.vue b/src/pages/home.vue
index 7fb2bdf..d5075ec 100644
--- a/src/pages/home.vue
+++ b/src/pages/home.vue
@@ -3,7 +3,7 @@
     <div class="banner text-center">
       <h1 class="home-title"><span class="apache">Apache</span> <span class="linkis">Linkis</span> <span class="badge">Incubating</span></h1>
       <p class="home-desc">Decouple the upper applications and the underlying data<br>engines by building a middleware layer.</p>
-      <div class="botton-row">
+      <div class="botton-row center">
         <a href="/" class="corner-botton black">Get Started</a>
         <a href="/" class="corner-botton white">GitHub</a>
       </div>
@@ -26,6 +26,9 @@
         <h1 class="home-block-title">Description</h1>
         <p class="home-paragraph">Linkis provides standardized interfaces (REST, JDBC, WebSocket etc.) to easily connect to various underlying engines (Spark, Presto, Flink, etc.), and acts as a proxy between the upper applications layer and underlying engines layer. </p>
         <p class="home-paragraph">Linkis is able to facilitate the connectivity, governance and orchestration capabilities of different kind of engines like OLAP, OLTP (developing), Streaming, and handle all these "computation governance" affairs in a standardized reusable way.</p>
+        <div class="botton-row">
+          <a href="/" class="corner-botton blue">Learn More</a>
+        </div>
       </div>
       <!-- <img src="" alt="description" class="description-image"> -->
     </div>
@@ -196,29 +199,35 @@
         line-height: 26px;
         font-weight: 400;
       }
+    }
 
-      .botton-row{
-        display: flex;
+    .botton-row{
+      display: flex;
+      &.center{
         justify-content: center;
-        .corner-botton{
-          margin-right: 22px;
-          padding: 0 40px;
-          height: 46px;
-          line-height: 46px;
-          border-radius: 25px;
-          &:last-child{
-            margin-right: 0;
-          }
-          &.black{
-            color: #fff;
-            background: @enhance-color;
-            border: 1px solid  @enhance-color;
-          }
-          &.white{
-            color: @enhance-color;
-            background: #fff;
-            border: 1px solid @enhance-color;
-          }
+      }
+      .corner-botton{
+        margin-right: 22px;
+        padding: 0 40px;
+        height: 46px;
+        line-height: 46px;
+        border-radius: 25px;
+        &:last-child{
+          margin-right: 0;
+        }
+        &.black{
+          color: #fff;
+          background: @enhance-color;
+          border: 1px solid  @enhance-color;
+        }
+        &.white{
+          color: @enhance-color;
+          background: #fff;
+          border: 1px solid @enhance-color;
+        }
+        &.blue{
+          color: #1A529C;
+          border: 1px solid #1A529C;
         }
       }
     }
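
Note on the refactor above: it hoists the shared button styles out of .banner's .botton-row so rows elsewhere on the page can reuse them, and makes centering an opt-in .center modifier instead of the default. A minimal sketch of the resulting pattern, reusing the committed class names:

    <template>
      <!-- "center" opts this row into centered alignment; other rows keep flex-start -->
      <div class="botton-row center">
        <a href="/" class="corner-botton black">Get Started</a>
        <a href="/" class="corner-botton blue">Learn More</a>
      </div>
    </template>
    <style lang="less" scoped>
    .botton-row {
      display: flex;
      // Less's "&" is the parent selector: "&.center" compiles to ".botton-row.center",
      // a modifier on the same element rather than a descendant
      &.center {
        justify-content: center;
      }
    }
    </style>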

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@linkis.apache.org
For additional commands, e-mail: commits-help@linkis.apache.org


[incubator-linkis-website] 26/50: ADD: Add the blog list page

Posted by pe...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

peacewong pushed a commit to branch asf-staging
in repository https://gitbox.apache.org/repos/asf/incubator-linkis-website.git

commit e911adb6f0b982049bfd9acfe4284e28d3975a1a
Author: lucaszhu <lu...@webank.com>
AuthorDate: Wed Oct 13 16:40:23 2021 +0800

    ADD: Add the blog list page
---
 src/pages/{blog.vue => blog/event.vue} | 28 -------------------
 src/pages/blog/index.vue               | 38 ++++++++++++++++++++++++++
 src/pages/home.vue                     | 24 ----------------
 src/router.js                          |  7 ++++-
 src/style/base.less                    | 50 +++++++++++++++++++++++++++++++++-
 5 files changed, 93 insertions(+), 54 deletions(-)

diff --git a/src/pages/blog.vue b/src/pages/blog/event.vue
similarity index 71%
rename from src/pages/blog.vue
rename to src/pages/blog/event.vue
index f8e3934..6de9d72 100644
--- a/src/pages/blog.vue
+++ b/src/pages/blog/event.vue
@@ -14,34 +14,6 @@
     </div>
   </div>
 </template>
-<style lang="less" scoped>
-  .blog-ctn {
-    padding-top: 60px;
-    padding-bottom: 80px;
-
-    .blog-title {
-      font-size: 24px;
-    }
-
-    .blog-info{
-      display: flex;
-      padding: 20px 0;
-      font-size: 16px;
-      color: rgba(15,18,34,0.45);
-      &.seperator{
-        .info-item{
-          border-right: 1px solid rgba(15,18,34,0.45);
-          &:last-child{
-            border-right: 0;
-          }
-        }
-      }
-      .info-item{
-        padding: 0 20px 0 28px;
-      }
-    }
-  }
-</style>
 <script setup>
   const docs = [{
     title: '部署文档',
diff --git a/src/pages/blog/index.vue b/src/pages/blog/index.vue
new file mode 100644
index 0000000..072887c
--- /dev/null
+++ b/src/pages/blog/index.vue
@@ -0,0 +1,38 @@
+<template>
+  <div class="ctn-block reading-area blog-ctn">
+    <main class="main-content">
+      <ul class="blog-list">
+        <li class="blog-item">
+          <h1 class="blog-title">Born at China’s WeBank, now incubating in the ASF - Introducing Apache Linkis</h1>
+          <div class="blog-info">
+            <span class="info-item">enjoyyin</span>
+            <span class="info-item sperator">|</span>
+            <span class="info-item">2021-9-2</span>
+          </div>
+          <p class="blog-preview">Guangsheng Chen, the founder of Apache EventMesh, has been buzzing since the project was welcomed into the Apache Software Foundation (ASF)’s incubator in February 2021. There’s a growing community supporting work on the open source software — used to decouple the application</p>
+          <div class="blog-info seperator"><span class="info-item">5 min read</span><span class="info-item">tag</span></div>
+          <router-link to="/blog/event" class="corner-botton blue">Read More</router-link>
+        </li>
+      </ul>
+    </main>
+  </div>
+</template>
+<style lang="less" scoped>
+  .blog-ctn {
+    .blog-item{
+      position: relative;
+      padding: 30px;
+      margin-bottom: 20px;
+      background: rgba(15,18,34,0.03);
+      border-radius: 8px;
+      .blog-preview{
+        text-align: justify;
+      }
+      .corner-botton{
+        position: absolute;
+        right: 30px;
+        bottom: 30px;
+      }
+    }
+  }
+</style>
\ No newline at end of file
diff --git a/src/pages/home.vue b/src/pages/home.vue
index 88bb72b..bc76c61 100644
--- a/src/pages/home.vue
+++ b/src/pages/home.vue
@@ -284,30 +284,6 @@
       &.center{
         justify-content: center;
       }
-      .corner-botton{
-        margin-right: 22px;
-        padding: 0 40px;
-        height: 46px;
-        line-height: 46px;
-        border-radius: 25px;
-        &:last-child{
-          margin-right: 0;
-        }
-        &.black{
-          color: #fff;
-          background: @enhance-color;
-          border: 1px solid  @enhance-color;
-        }
-        &.white{
-          color: @enhance-color;
-          background: #fff;
-          border: 1px solid @enhance-color;
-        }
-        &.blue{
-          color: #1A529C;
-          border: 1px solid #1A529C;
-        }
-      }
     }
   }
 </style>
diff --git a/src/router.js b/src/router.js
index b6d97a0..a803698 100644
--- a/src/router.js
+++ b/src/router.js
@@ -74,7 +74,12 @@ const routes = [{
   {
     path: '/blog',
     name: 'blog',
-    component: () => import( /* webpackChunkName: "group-blog" */ './pages/blog.vue')
+    component: () => import( /* webpackChunkName: "group-blog" */ './pages/blog/index.vue')
+  },
+  {
+    path: '/blog/event',
+    name: 'blogEvent',
+    component: () => import( /* webpackChunkName: "group-blog" */ './pages/blog/event.vue')
   },
   {
     path: '/team',
diff --git a/src/style/base.less b/src/style/base.less
index 2a815af..1db54d5 100644
--- a/src/style/base.less
+++ b/src/style/base.less
@@ -76,4 +76,52 @@ a:visited {
       }
     }
   }
-}
\ No newline at end of file
+}
+
+.blog-ctn {
+  padding-top: 60px;
+  padding-bottom: 80px;
+
+  .blog-title {
+    font-size: 24px;
+  }
+
+  .blog-info{
+    display: flex;
+    align-items: center;
+    padding: 20px 0;
+    font-size: 16px;
+    color: rgba(15,18,34,0.45);
+    .info-item{
+      padding: 0 10px 0 28px;
+      &.sperator{
+       padding: 0 10px; 
+      }
+    }
+  }
+}
+
+.corner-botton{
+  margin-right: 22px;
+  padding: 0 40px;
+  height: 46px;
+  line-height: 46px;
+  border-radius: 25px;
+  &:last-child{
+    margin-right: 0;
+  }
+  &.black{
+    color: #fff;
+    background: @enhance-color;
+    border: 1px solid  @enhance-color;
+  }
+  &.white{
+    color: @enhance-color;
+    background: #fff;
+    border: 1px solid @enhance-color;
+  }
+  &.blue{
+    color: #1A529C;
+    border: 1px solid #1A529C;
+  }
+}
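
Note on the router change above: it gives each blog page its own lazily loaded chunk, since the () => import(...) factory defers fetching a page component until its route is first visited. The /* webpackChunkName */ magic comment is a webpack convention for grouping chunks; Vite, which this site builds with, ignores the comment but still code-splits every dynamic import. A minimal standalone sketch, assuming vue-router 4 and the file layout shown in the diff:

    import { createApp } from 'vue'
    import { createRouter, createWebHistory } from 'vue-router'
    import App from './App.vue'

    // Route-level code splitting: each page component becomes a separate async chunk
    const routes = [
      {
        path: '/blog',
        name: 'blog',
        component: () => import(/* webpackChunkName: "group-blog" */ './pages/blog/index.vue'),
      },
      {
        path: '/blog/event',
        name: 'blogEvent',
        component: () => import(/* webpackChunkName: "group-blog" */ './pages/blog/event.vue'),
      },
    ]

    const router = createRouter({ history: createWebHistory(), routes })
    createApp(App).use(router).mount('#app')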

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@linkis.apache.org
For additional commands, e-mail: commits-help@linkis.apache.org


[incubator-linkis-website] 20/50: add docs image

Posted by pe...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

peacewong pushed a commit to branch asf-staging
in repository https://gitbox.apache.org/repos/asf/incubator-linkis-website.git

commit 01580dd99c39f0caf8774ef6f68e8247dd7ebc30
Author: casionone <ca...@gmail.com>
AuthorDate: Tue Oct 12 15:27:30 2021 +0800

    add docs image
---
 info.txt                                           |   5 +
 .../add_an_EngineConn_flow_chart.png               | Bin
 .../EngineConn/engineconn-01.png                   | Bin
 .../Gateway/gateway_server_dispatcher.png          | Bin
 .../Gateway/gateway_server_global.png              | Bin
 .../Gateway/gatway_websocket.png                   | Bin
 .../JobSubmission}/execution.png                   | Bin
 .../JobSubmission}/orchestrate.png                 | Bin
 .../JobSubmission}/overall.png                     | Bin
 .../JobSubmission}/physical_tree.png               | Bin
 .../JobSubmission}/result_acquisition.png          | Bin
 .../JobSubmission}/submission.png                  | Bin
 .../LabelManager/label_manager_builder.png         | Bin
 .../LabelManager/label_manager_global.png          | Bin
 .../LabelManager/label_manager_scorer.png          | Bin
 .../Linkis0.X_newengine_architecture.png}          | Bin
 .../Linkis0.X_services_list.png}                   | Bin
 .../Linkis1.0_architecture.png}                    | Bin
 .../Linkis1.0_engineconn_architecture.png}         | Bin
 .../Linkis1.0_newengine_architecture.png}          | Bin
 .../Linkis1.0_newengine_initialization.png}        | Bin
 .../Linkis1.0_services_list.png}                   | Bin
 .../ContextService/linkis-contextservice-01.png    | Bin
 .../ContextService/linkis-contextservice-02.png    | Bin
 .../linkis-contextservice-cache-01.png             | Bin
 .../linkis-contextservice-cache-02.png             | Bin
 .../linkis-contextservice-cache-03.png             | Bin
 .../linkis-contextservice-cache-04.png             | Bin
 .../linkis-contextservice-cache-05.png             | Bin
 .../linkis-contextservice-client-01.png            | Bin
 .../linkis-contextservice-client-02.png            | Bin
 .../linkis-contextservice-client-03.png            | Bin
 .../ContextService/linkis-contextservice-ha-01.png | Bin
 .../ContextService/linkis-contextservice-ha-02.png | Bin
 .../ContextService/linkis-contextservice-ha-03.png | Bin
 .../ContextService/linkis-contextservice-ha-04.png | Bin
 .../linkis-contextservice-listener-01.png          | Bin
 .../linkis-contextservice-listener-02.png          | Bin
 .../linkis-contextservice-listener-03.png          | Bin
 .../linkis-contextservice-persistence-01.png       | Bin
 .../linkis-contextservice-search-01.png            | Bin
 .../linkis-contextservice-search-02.png            | Bin
 .../linkis-contextservice-search-03.png            | Bin
 .../linkis-contextservice-search-04.png            | Bin
 .../linkis-contextservice-search-05.png            | Bin
 .../linkis-contextservice-search-06.png            | Bin
 .../linkis-contextservice-search-07.png            | Bin
 .../linkis-contextservice-service-01.png           | Bin
 .../linkis-contextservice-service-02.png           | Bin
 .../linkis-contextservice-service-03.png           | Bin
 .../linkis-contextservice-service-04.png           | Bin
 .../add_an_engineConn_flow_chart.png}              | Bin
 .../bml-02.png => architecture/bml_02.png}         | Bin
 .../linkis_engineconnplugin_01.png}                | Bin
 .../linkis_intro_01.png}                           | Bin
 .../linkis_intro_02.png}                           | Bin
 .../linkis_microservice_gov_01.png}                | Bin
 .../linkis_microservice_gov_03.png}                | Bin
 .../linkis_publicservice_01.png}                   | Bin
 .../publicenhencement_architecture.png}            | Bin
 .../docs/manual/{queue-set.png => queue_set.png}   | Bin
 .../manual/{sparksql-run.png => sparksql_run.png}  | Bin
 src/docs/architecture/AddEngineConn_en.md          | 105 +++++++++++++
 src/docs/architecture/AddEngineConn_zh.md          | 111 ++++++++++++++
 .../architecture/DifferenceBetween1.0&0.x_en.md    |  50 +++++++
 .../architecture/DifferenceBetween1.0&0.x_zh.md    |  98 ++++++++++++
 src/docs/architecture/JobSubmission_en.md          | 138 +++++++++++++++++
 src/docs/architecture/JobSubmission_zh.md          | 165 +++++++++++++++++++++
 src/docs/deploy/linkis_en.md                       |   4 +-
 src/docs/deploy/linkis_zh.md                       |   4 +-
 src/docs/manual/CliManual_en.md                    |   6 +-
 src/docs/manual/HowToUse_en.md                     |  11 +-
 src/docs/manual/HowToUse_zh.md                     |   6 +-
 src/pages/docs/architecture/AddEngineConn.vue      |  13 ++
 .../docs/architecture/DifferenceBetween1.0&0.x.vue |  13 ++
 src/pages/docs/architecture/JobSubmission.vue      |  13 ++
 src/pages/docs/index.vue                           |  18 +++
 src/router.js                                      |  19 ++-
 78 files changed, 762 insertions(+), 17 deletions(-)

diff --git a/info.txt b/info.txt
new file mode 100644
index 0000000..072fede
--- /dev/null
+++ b/info.txt
@@ -0,0 +1,5 @@
+podling website
+http://incubator.apache.org/guides/sites.html#podling_website_requirements
+
+web URL requirements
+https://www.apache.org/foundation/marks/pmcs#websites
diff --git a/src/assets/docs/Architecture/Add_an_EngineConn/add_an_EngineConn_flow_chart.png b/src/assets/docs/architecture/Add_an_EngineConn/add_an_EngineConn_flow_chart.png
similarity index 100%
copy from src/assets/docs/Architecture/Add_an_EngineConn/add_an_EngineConn_flow_chart.png
copy to src/assets/docs/architecture/Add_an_EngineConn/add_an_EngineConn_flow_chart.png
diff --git a/src/assets/docs/Architecture/EngineConn/engineconn-01.png b/src/assets/docs/architecture/EngineConn/engineconn-01.png
similarity index 100%
rename from src/assets/docs/Architecture/EngineConn/engineconn-01.png
rename to src/assets/docs/architecture/EngineConn/engineconn-01.png
diff --git a/src/assets/docs/Architecture/Gateway/gateway_server_dispatcher.png b/src/assets/docs/architecture/Gateway/gateway_server_dispatcher.png
similarity index 100%
rename from src/assets/docs/Architecture/Gateway/gateway_server_dispatcher.png
rename to src/assets/docs/architecture/Gateway/gateway_server_dispatcher.png
diff --git a/src/assets/docs/Architecture/Gateway/gateway_server_global.png b/src/assets/docs/architecture/Gateway/gateway_server_global.png
similarity index 100%
rename from src/assets/docs/Architecture/Gateway/gateway_server_global.png
rename to src/assets/docs/architecture/Gateway/gateway_server_global.png
diff --git a/src/assets/docs/Architecture/Gateway/gatway_websocket.png b/src/assets/docs/architecture/Gateway/gatway_websocket.png
similarity index 100%
rename from src/assets/docs/Architecture/Gateway/gatway_websocket.png
rename to src/assets/docs/architecture/Gateway/gatway_websocket.png
diff --git a/src/assets/docs/Architecture/Job_submission_preparation_and_execution_process/execution.png b/src/assets/docs/architecture/JobSubmission/execution.png
similarity index 100%
rename from src/assets/docs/Architecture/Job_submission_preparation_and_execution_process/execution.png
rename to src/assets/docs/architecture/JobSubmission/execution.png
diff --git a/src/assets/docs/Architecture/Job_submission_preparation_and_execution_process/orchestrate.png b/src/assets/docs/architecture/JobSubmission/orchestrate.png
similarity index 100%
rename from src/assets/docs/Architecture/Job_submission_preparation_and_execution_process/orchestrate.png
rename to src/assets/docs/architecture/JobSubmission/orchestrate.png
diff --git a/src/assets/docs/Architecture/Job_submission_preparation_and_execution_process/overall.png b/src/assets/docs/architecture/JobSubmission/overall.png
similarity index 100%
rename from src/assets/docs/Architecture/Job_submission_preparation_and_execution_process/overall.png
rename to src/assets/docs/architecture/JobSubmission/overall.png
diff --git a/src/assets/docs/Architecture/Job_submission_preparation_and_execution_process/physical_tree.png b/src/assets/docs/architecture/JobSubmission/physical_tree.png
similarity index 100%
rename from src/assets/docs/Architecture/Job_submission_preparation_and_execution_process/physical_tree.png
rename to src/assets/docs/architecture/JobSubmission/physical_tree.png
diff --git a/src/assets/docs/Architecture/Job_submission_preparation_and_execution_process/result_acquisition.png b/src/assets/docs/architecture/JobSubmission/result_acquisition.png
similarity index 100%
rename from src/assets/docs/Architecture/Job_submission_preparation_and_execution_process/result_acquisition.png
rename to src/assets/docs/architecture/JobSubmission/result_acquisition.png
diff --git a/src/assets/docs/Architecture/Job_submission_preparation_and_execution_process/submission.png b/src/assets/docs/architecture/JobSubmission/submission.png
similarity index 100%
rename from src/assets/docs/Architecture/Job_submission_preparation_and_execution_process/submission.png
rename to src/assets/docs/architecture/JobSubmission/submission.png
diff --git a/src/assets/docs/Architecture/LabelManager/label_manager_builder.png b/src/assets/docs/architecture/LabelManager/label_manager_builder.png
similarity index 100%
rename from src/assets/docs/Architecture/LabelManager/label_manager_builder.png
rename to src/assets/docs/architecture/LabelManager/label_manager_builder.png
diff --git a/src/assets/docs/Architecture/LabelManager/label_manager_global.png b/src/assets/docs/architecture/LabelManager/label_manager_global.png
similarity index 100%
rename from src/assets/docs/Architecture/LabelManager/label_manager_global.png
rename to src/assets/docs/architecture/LabelManager/label_manager_global.png
diff --git a/src/assets/docs/Architecture/LabelManager/label_manager_scorer.png b/src/assets/docs/architecture/LabelManager/label_manager_scorer.png
similarity index 100%
rename from src/assets/docs/Architecture/LabelManager/label_manager_scorer.png
rename to src/assets/docs/architecture/LabelManager/label_manager_scorer.png
diff --git a/src/assets/docs/Architecture/Linkis0.X-NewEngine-architecture.png b/src/assets/docs/architecture/Linkis0.X_newengine_architecture.png
similarity index 100%
rename from src/assets/docs/Architecture/Linkis0.X-NewEngine-architecture.png
rename to src/assets/docs/architecture/Linkis0.X_newengine_architecture.png
diff --git a/src/assets/docs/Architecture/Linkis0.X-services-list.png b/src/assets/docs/architecture/Linkis0.X_services_list.png
similarity index 100%
rename from src/assets/docs/Architecture/Linkis0.X-services-list.png
rename to src/assets/docs/architecture/Linkis0.X_services_list.png
diff --git a/src/assets/docs/Architecture/Linkis1.0-architecture.png b/src/assets/docs/architecture/Linkis1.0_architecture.png
similarity index 100%
rename from src/assets/docs/Architecture/Linkis1.0-architecture.png
rename to src/assets/docs/architecture/Linkis1.0_architecture.png
diff --git a/src/assets/docs/Architecture/Linkis1.0-EngineConn-architecture.png b/src/assets/docs/architecture/Linkis1.0_engineconn_architecture.png
similarity index 100%
rename from src/assets/docs/Architecture/Linkis1.0-EngineConn-architecture.png
rename to src/assets/docs/architecture/Linkis1.0_engineconn_architecture.png
diff --git a/src/assets/docs/Architecture/Linkis1.0-NewEngine-architecture.png b/src/assets/docs/architecture/Linkis1.0_newengine_architecture.png
similarity index 100%
rename from src/assets/docs/Architecture/Linkis1.0-NewEngine-architecture.png
rename to src/assets/docs/architecture/Linkis1.0_newengine_architecture.png
diff --git a/src/assets/docs/Architecture/Linkis1.0-newEngine-initialization.png b/src/assets/docs/architecture/Linkis1.0_newengine_initialization.png
similarity index 100%
rename from src/assets/docs/Architecture/Linkis1.0-newEngine-initialization.png
rename to src/assets/docs/architecture/Linkis1.0_newengine_initialization.png
diff --git a/src/assets/docs/Architecture/Linkis1.0-services-list.png b/src/assets/docs/architecture/Linkis1.0_services_list.png
similarity index 100%
rename from src/assets/docs/Architecture/Linkis1.0-services-list.png
rename to src/assets/docs/architecture/Linkis1.0_services_list.png
diff --git a/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-01.png b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-01.png
similarity index 100%
rename from src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-01.png
rename to src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-01.png
diff --git a/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-02.png b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-02.png
similarity index 100%
rename from src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-02.png
rename to src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-02.png
diff --git a/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-01.png b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-01.png
similarity index 100%
rename from src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-01.png
rename to src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-01.png
diff --git a/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-02.png b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-02.png
similarity index 100%
rename from src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-02.png
rename to src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-02.png
diff --git a/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-03.png b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-03.png
similarity index 100%
rename from src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-03.png
rename to src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-03.png
diff --git a/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-04.png b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-04.png
similarity index 100%
rename from src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-04.png
rename to src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-04.png
diff --git a/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-05.png b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-05.png
similarity index 100%
rename from src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-05.png
rename to src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-05.png
diff --git a/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-01.png b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-01.png
similarity index 100%
rename from src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-01.png
rename to src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-01.png
diff --git a/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-02.png b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-02.png
similarity index 100%
rename from src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-02.png
rename to src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-02.png
diff --git a/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-03.png b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-03.png
similarity index 100%
rename from src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-03.png
rename to src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-03.png
diff --git a/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-01.png b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-01.png
similarity index 100%
rename from src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-01.png
rename to src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-01.png
diff --git a/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-02.png b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-02.png
similarity index 100%
rename from src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-02.png
rename to src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-02.png
diff --git a/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-03.png b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-03.png
similarity index 100%
rename from src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-03.png
rename to src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-03.png
diff --git a/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-04.png b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-04.png
similarity index 100%
rename from src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-04.png
rename to src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-04.png
diff --git a/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-01.png b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-01.png
similarity index 100%
rename from src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-01.png
rename to src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-01.png
diff --git a/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-02.png b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-02.png
similarity index 100%
rename from src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-02.png
rename to src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-02.png
diff --git a/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-03.png b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-03.png
similarity index 100%
rename from src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-03.png
rename to src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-03.png
diff --git a/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-persistence-01.png b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-persistence-01.png
similarity index 100%
rename from src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-persistence-01.png
rename to src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-persistence-01.png
diff --git a/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-01.png b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-01.png
similarity index 100%
rename from src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-01.png
rename to src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-01.png
diff --git a/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-02.png b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-02.png
similarity index 100%
rename from src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-02.png
rename to src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-02.png
diff --git a/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-03.png b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-03.png
similarity index 100%
rename from src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-03.png
rename to src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-03.png
diff --git a/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-04.png b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-04.png
similarity index 100%
rename from src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-04.png
rename to src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-04.png
diff --git a/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-05.png b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-05.png
similarity index 100%
rename from src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-05.png
rename to src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-05.png
diff --git a/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-06.png b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-06.png
similarity index 100%
rename from src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-06.png
rename to src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-06.png
diff --git a/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-07.png b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-07.png
similarity index 100%
rename from src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-07.png
rename to src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-07.png
diff --git a/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-01.png b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-01.png
similarity index 100%
rename from src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-01.png
rename to src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-01.png
diff --git a/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-02.png b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-02.png
similarity index 100%
rename from src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-02.png
rename to src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-02.png
diff --git a/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-03.png b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-03.png
similarity index 100%
rename from src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-03.png
rename to src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-03.png
diff --git a/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-04.png b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-04.png
similarity index 100%
rename from src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-04.png
rename to src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-04.png
diff --git a/src/assets/docs/Architecture/Add_an_EngineConn/add_an_EngineConn_flow_chart.png b/src/assets/docs/architecture/add_an_engineConn_flow_chart.png
similarity index 100%
rename from src/assets/docs/Architecture/Add_an_EngineConn/add_an_EngineConn_flow_chart.png
rename to src/assets/docs/architecture/add_an_engineConn_flow_chart.png
diff --git a/src/assets/docs/Architecture/bml-02.png b/src/assets/docs/architecture/bml_02.png
similarity index 100%
rename from src/assets/docs/Architecture/bml-02.png
rename to src/assets/docs/architecture/bml_02.png
diff --git a/src/assets/docs/Architecture/linkis-engineConnPlugin-01.png b/src/assets/docs/architecture/linkis_engineconnplugin_01.png
similarity index 100%
rename from src/assets/docs/Architecture/linkis-engineConnPlugin-01.png
rename to src/assets/docs/architecture/linkis_engineconnplugin_01.png
diff --git a/src/assets/docs/Architecture/linkis-intro-01.png b/src/assets/docs/architecture/linkis_intro_01.png
similarity index 100%
rename from src/assets/docs/Architecture/linkis-intro-01.png
rename to src/assets/docs/architecture/linkis_intro_01.png
diff --git a/src/assets/docs/Architecture/linkis-intro-02.png b/src/assets/docs/architecture/linkis_intro_02.png
similarity index 100%
rename from src/assets/docs/Architecture/linkis-intro-02.png
rename to src/assets/docs/architecture/linkis_intro_02.png
diff --git a/src/assets/docs/Architecture/linkis-microservice-gov-01.png b/src/assets/docs/architecture/linkis_microservice_gov_01.png
similarity index 100%
rename from src/assets/docs/Architecture/linkis-microservice-gov-01.png
rename to src/assets/docs/architecture/linkis_microservice_gov_01.png
diff --git a/src/assets/docs/Architecture/linkis-microservice-gov-03.png b/src/assets/docs/architecture/linkis_microservice_gov_03.png
similarity index 100%
rename from src/assets/docs/Architecture/linkis-microservice-gov-03.png
rename to src/assets/docs/architecture/linkis_microservice_gov_03.png
diff --git a/src/assets/docs/Architecture/linkis-publicService-01.png b/src/assets/docs/architecture/linkis_publicservice_01.png
similarity index 100%
rename from src/assets/docs/Architecture/linkis-publicService-01.png
rename to src/assets/docs/architecture/linkis_publicservice_01.png
diff --git a/src/assets/docs/Architecture/PublicEnhencementArchitecture.png b/src/assets/docs/architecture/publicenhencement_architecture.png
similarity index 100%
rename from src/assets/docs/Architecture/PublicEnhencementArchitecture.png
rename to src/assets/docs/architecture/publicenhencement_architecture.png
diff --git a/src/assets/docs/manual/queue-set.png b/src/assets/docs/manual/queue_set.png
similarity index 100%
rename from src/assets/docs/manual/queue-set.png
rename to src/assets/docs/manual/queue_set.png
diff --git a/src/assets/docs/manual/sparksql-run.png b/src/assets/docs/manual/sparksql_run.png
similarity index 100%
rename from src/assets/docs/manual/sparksql-run.png
rename to src/assets/docs/manual/sparksql_run.png
diff --git a/src/docs/architecture/AddEngineConn_en.md b/src/docs/architecture/AddEngineConn_en.md
new file mode 100644
index 0000000..5ce15fe
--- /dev/null
+++ b/src/docs/architecture/AddEngineConn_en.md
@@ -0,0 +1,105 @@
+# How to add an EngineConn
+
+Adding an EngineConn is one of the core processes of the computing task preparation phase of Linkis computing governance. It mainly includes the following steps. First, the client side (Entrance or a user client) initiates a request for a new EngineConn to LinkisManager. Then LinkisManager initiates a request to EngineConnManager to start the EngineConn based on demand and label rules. Finally, LinkisManager returns the usable EngineConn to the client side.
+
+Based on the figure below, let's explain the whole process in detail:
+
+![Process of adding an EngineConn](../../assets/docs/architecture/add_an_engineConn_flow_chart.png)
+
+## 1. LinkisManager receives requests from the client side
+
+**Glossary:**
+
+- LinkisManager: The management center of Linkis computing governance capabilities. Its main responsibilities are:
+  1. Based on multi-level combined tags, provide users with available EngineConn after complex routing, resource management and load balancing.
+
+  2. Provide EC and ECM full life cycle management capabilities.
+
+  3. Provide users with multi-Yarn cluster resource management functions based on multi-level combined tags. It is mainly divided into three modules: AppManager, ResourceManager and LabelManager, which support multi-active deployment and are highly available and easy to expand.
+
+After the AM module receives the Client's new EngineConn request, it first checks the request parameters to determine their validity. Second, it selects the most suitable EngineConnManager (ECM) through complex rules for the subsequent EngineConn startup. Next, it applies to RM for the resources needed to start the EngineConn. Finally, it requests the ECM to create the EngineConn.
+
+The four steps will be described in detail below.
+
+### 1. Request parameter verification
+
+After the AM module receives the engine creation request, it checks the parameters. First, it checks the permissions of the requesting user and the creating user, and then checks the Labels attached to the request. Since Labels are used later in AM's creation process to find the ECM and to record resource information, you need to ensure that the necessary Labels are present. At this stage, the request must carry a UserCreatorLabel (for example: hadoop-IDE) and an EngineTypeLabel (for example: spark-2.4.3).
+
+### 2. Select an EngineConnManager (ECM)
+
+ECM selection uses the Labels passed by the client to pick a suitable ECM service to start the EngineConn. In this step, the LabelManager first searches the registered ECMs with the Labels passed by the client and returns them ordered by label matching degree. After obtaining the registered ECM list, selection rules are applied to these ECMs; rules such as availability check, resource surplus, and machine load have been implemented so far. After the rule selection, the ECM with the best label match, the most idle resources, and the lowest load is returned.
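+
+To make the selection rules concrete, here is a simplified Scala sketch (hypothetical types and scoring, not Linkis's actual API) of the filtering and ranking described above:
+
+```scala
+case class EcmNode(instance: String, labels: Set[String], healthy: Boolean,
+                   freeMemoryMb: Long, loadAverage: Double)
+
+object EcmSelector {
+  // Filter the registered ECMs by availability and label match, then rank by
+  // label matching degree, free resources and machine load.
+  def select(registered: Seq[EcmNode], requestLabels: Set[String]): Option[EcmNode] =
+    registered
+      .filter(_.healthy)                                            // availability check
+      .map(ecm => (ecm, (ecm.labels intersect requestLabels).size)) // label match degree
+      .filter { case (_, matched) => matched > 0 }
+      .sortBy { case (ecm, matched) => (-matched, -ecm.freeMemoryMb, ecm.loadAverage) }
+      .headOption
+      .map(_._1)
+}
+```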
+
+### 3. Apply for the resources required by the EngineConn
+
+1. After obtaining the assigned ECM, AM will then ask the EngineConnPluginServer service how many resources the client's engine creation request will use. The resource request is encapsulated with the Labels, the EngineConn startup parameters passed by the Client, and the user configuration parameters obtained from the Configuration module, and the resource information is obtained by calling the ECP service through RPC.
+
+2. After the EngineConnPluginServer service receives the resource request, it first finds the corresponding engine label from the passed labels and selects the EngineConnPlugin of the corresponding engine through that engine label. Then it uses EngineConnPlugin's resource generator to calculate the engine startup parameters passed in by the client, computes the resources required to apply for a new EngineConn this time, and returns the result to LinkisManager.
+
+   **Glossary:**
+
+- EngineConnPlugin: the interface that must be implemented when Linkis connects a new computing storage engine. It mainly covers the capabilities that this type of EngineConn must provide during startup, including the EngineConn resource generator, the EngineConn startup command generator, and the EngineConn connector. For a concrete implementation, please refer to the Spark engine implementation class: [SparkEngineConnPlugin](https://github.com/WeBankFinTech/Linkis/blob/master/linkis-engineconn-plugins/engineconn-plugins/spark/src/main/scala/com/webank/wedatasphere/linkis/engineplugin/spark/SparkEngineConnPlugin.scala).
+- EngineConnPluginServer: a microservice that loads all the EngineConnPlugins and externally provides the EngineConn resource generation and startup command generation capabilities.
+- EngineConnResourceFactory: calculates, from the parameters passed in, the total resources needed for this EngineConn startup (see the sketch after this list).
+- EngineConnLaunchBuilder: generates the startup command of the EngineConn from the incoming parameters, so that ECM can start the engine.
+3. After AM obtains the engine resources, it will then call the RM service to apply for resources. The RM service will use the incoming Labels, the ECM, and the resources applied for this time to make a resource judgment: first whether the resources of the client corresponding to the Labels are sufficient, and then whether the resources of the ECM service are sufficient. If both are sufficient, this resource application is approved, and the resources of the corresponding Labels are added or subtracted accordingly.
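+
+As a rough illustration of what the resource generator computes (parameter keys and type names below are made up for the example; the real EngineConnResourceFactory interface differs), the calculation boils down to deriving a resource total from the client's startup parameters plus defaults:
+
+```scala
+case class LoadResource(memoryMb: Long, cores: Int, instances: Int)
+
+object ExampleResourceFactory {
+  // Derive the total resources a new EngineConn needs from the startup
+  // parameters passed by the client, falling back to defaults.
+  def createEngineResource(startupParams: Map[String, String]): LoadResource = {
+    val executorMemMb = startupParams.getOrElse("spark.executor.memory.mb", "4096").toLong
+    val executorCores = startupParams.getOrElse("spark.executor.cores", "2").toInt
+    val executorNum   = startupParams.getOrElse("spark.executor.instances", "2").toInt
+    // Total = driver (counted as one extra instance) + all executors.
+    LoadResource(executorMemMb * (executorNum + 1), executorCores * (executorNum + 1), 1)
+  }
+}
+```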
+
+### 4. Request ECM for engine creation
+
+1. After completing the resource application for the engine, AM will encapsulate the engine startup request, send it to the corresponding ECM via RPC for service startup, and obtain the instance object of EngineConn.
+2. AM will then determine whether the EngineConn has started successfully and become available through the information reported by the EngineConn. If so, the result is returned and the process of adding an engine ends.
+
+## 2. ECM initiates EngineConn
+
+**Glossary:**
+
+- EngineConnManager: EngineConn's manager. Provides engine life-cycle management, and at the same time reports load information and its own health status to RM.
+- EngineConnBuildRequest: The start engine command passed by LinkisManager to ECM, which encapsulates all tag information, required resources and some parameter configuration information of the engine.
+- EngineConnLaunchRequest: Contains the BML materials, environment variables, ECM required local environment variables, startup commands and other information required to start an EngineConn, so that ECM can build a complete EngineConn startup script based on this.
+
+After ECM receives the EngineConnBuildRequest command passed by LinkisManager, it starts the EngineConn in three main steps:
+
+1. Request EngineConnPluginServer to obtain the EngineConnLaunchRequest encapsulated by EngineConnPluginServer.
+2. Parse the EngineConnLaunchRequest and encapsulate it into an EngineConn startup script.
+3. Execute the startup script to start the EngineConn.
+
+### 2.1 EngineConnPluginServer encapsulates EngineConnLaunchRequest
+
+From the label information of the EngineConnBuildRequest, obtain the EngineConn type and version that actually need to be started; fetch the EngineConnPlugin of that EngineConn type from the memory of EngineConnPluginServer; and convert the EngineConnBuildRequest into an EngineConnLaunchRequest through the EngineConnLaunchBuilder of that EngineConnPlugin.
+
+### 2.2 Encapsulate EngineConn startup script
+
+After the ECM obtains the EngineConnLaunchRequest, it downloads the BML materials in the EngineConnLaunchRequest to the local machine and checks whether the necessary local environment variables required by the EngineConnLaunchRequest exist. After the verification passes, the EngineConnLaunchRequest is encapsulated into an EngineConn startup script.
+
+### 2.3 Execute startup script
+
+Currently, ECM only supports Bash commands for Unix-like systems; that is, only Linux systems can execute the startup script.
+
+Before startup, the sudo command is used to switch to the corresponding requesting user to execute the script, ensuring that the startup user (i.e., the JVM user) is the requesting user on the Client side.
+
+After the startup script is executed, ECM monitors the execution status and execution log of the script in real time. Once the execution status returns non-zero, it immediately reports EngineConn startup failure to LinkisManager and the entire process is complete; otherwise, it keeps monitoring the log and status of the startup script until the script execution is complete.
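+
+A bare-bones Scala sketch of this execute-and-monitor step (helper names are assumptions; the real ECM implementation is more elaborate):
+
+```scala
+import scala.sys.process._
+
+object LaunchScriptRunner {
+  // Run the generated start script as the requesting user via sudo, collect the
+  // log, and treat a non-zero exit code as a startup failure to be reported.
+  def run(requestUser: String, scriptPath: String): Boolean = {
+    val logBuffer = new StringBuilder
+    val logger = ProcessLogger(line => logBuffer.append(line).append('\n'))
+    val exitCode = Seq("sudo", "-u", requestUser, "bash", scriptPath).!(logger)
+    if (exitCode != 0) {
+      // Here the real ECM would report the failure (with logs) to LinkisManager.
+      println(s"EngineConn start failed, exitCode=$exitCode\n$logBuffer")
+      false
+    } else true
+  }
+}
+```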
+
+## 3. EngineConn initialization
+
+After ECM executes EngineConn's startup script, the EngineConn microservice is officially launched.
+
+**Glossary:**
+
+- EngineConn microservice: Refers to the actual microservices that include an EngineConn and one or more Executors to provide computing power for computing tasks. When we talk about adding an EngineConn, we actually mean adding an EngineConn microservice.
+- EngineConn: The engine connector is the actual connection unit with the underlying computing storage engine, and contains the session information with the actual engine. The difference between it and Executor is that EngineConn only acts as a connection and a client, and does not actually perform calculations. For example, SparkEngineConn, its session information is SparkSession.
+- Executor: As a real computing storage scenario executor, it is the actual computing storage logic execution unit. It abstracts the various capabilities of EngineConn and provides multiple different architectural capabilities such as interactive execution, subscription execution, and responsive execution.
+
+The initialization of EngineConn microservices is generally divided into three stages (a minimal sketch follows the list):
+
+1. Initialize the EngineConn of the specific engine. First, the command-line parameters of the Java main method are encapsulated into an EngineCreationContext that contains the relevant label information, startup information, and parameter information. EngineConn is then initialized through the EngineCreationContext to establish the connection between EngineConn and the underlying Engine; for example, SparkEngineConn will initialize a SparkSession at this stage, which is used to establish a connection with a Spark application.
+2. Initialize the Executor. After the EngineConn is initialized, the corresponding Executor will be initialized according to the actual usage scenario to provide service capabilities for subsequent users. For example, the SparkEngineConn in the interactive computing scenario will initialize a series of Executors that can be used to submit and execute SQL, PySpark, and Scala code, so that the Client can submit SQL, PySpark, Scala and other code to the SparkEngineConn for execution.
+3. Report the heartbeat to LinkisManager regularly, and wait for the EngineConn to exit. When the underlying engine corresponding to the EngineConn is abnormal, or the maximum idle time is exceeded, or the Executor has finished executing, or the user manually kills it, the EngineConn automatically ends and exits.
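+
+A schematic Scala sketch of the three stages (all names are stand-ins; the real Linkis classes differ):
+
+```scala
+case class EngineCreationContext(labels: Map[String, String], params: Map[String, String])
+trait Executor { def execute(code: String): String }
+
+object EngineConnMain {
+  def main(args: Array[String]): Unit = {
+    // 1. Encapsulate the command-line arguments into an EngineCreationContext
+    //    and connect to the underlying engine (e.g. build a SparkSession).
+    val params  = args.grouped(2).collect { case Array(k, v) => k -> v }.toMap
+    val context = EngineCreationContext(Map("engineType" -> "spark-2.4.3"), params)
+    val engineConn = connectUnderlyingEngine(context)
+    // 2. Initialize the Executors for the usage scenario (SQL / PySpark / Scala ...).
+    val executors: Seq[Executor] = initExecutors(engineConn)
+    // 3. Report heartbeats to LinkisManager until the EngineConn should exit.
+    while (!shouldExit(engineConn)) { reportHeartbeat(engineConn); Thread.sleep(10000) }
+  }
+  // Stubs standing in for the real logic:
+  def connectUnderlyingEngine(ctx: EngineCreationContext): AnyRef = new Object
+  def initExecutors(conn: AnyRef): Seq[Executor] = Seq.empty
+  def shouldExit(conn: AnyRef): Boolean = true
+  def reportHeartbeat(conn: AnyRef): Unit = ()
+}
+```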
+
+----
+
+At this point, the process of adding a new EngineConn is basically complete. Finally, let's summarize:
+
+- The client initiates a request for adding EngineConn to LinkisManager.
+- LinkisManager checks the legitimacy of the parameters, first selects the appropriate ECM according to the labels, then confirms the resources required for the new EngineConn according to the user's request, applies for resources from the RM module of LinkisManager, and, after the application is approved, requires the ECM to start a new EngineConn.
+- ECM first requests EngineConnPluginServer to obtain an EngineConnLaunchRequest containing BML materials, environment variables, ECM required local environment variables, startup commands and other information needed to start an EngineConn, and then encapsulates the startup script of EngineConn, and finally executes the startup script to start the EngineConn.
+- EngineConn initializes the EngineConn of the specific engine, then initializes the corresponding Executor according to the actual usage scenario to provide service capabilities for subsequent users. Finally, it reports heartbeats to LinkisManager regularly and waits to end normally or be terminated by the user.
+
diff --git a/src/docs/architecture/AddEngineConn_zh.md b/src/docs/architecture/AddEngineConn_zh.md
new file mode 100644
index 0000000..bb6a88f
--- /dev/null
+++ b/src/docs/architecture/AddEngineConn_zh.md
@@ -0,0 +1,111 @@
+# EngineConn新增流程
+
+EngineConn的新增,是Linkis计算治理的计算任务准备阶段的核心流程之一。它主要包括了Client端(Entrance或用户客户端)向LinkisManager发起一个新增EngineConn的请求,LinkisManager为用户按需、按标签规则,向EngineConnManager发起一个启动EngineConn的请求,并等待EngineConn启动完成后,将可用的EngineConn返回给Client的整个流程。
+
+如下图所示,接下来我们来详细说明一下整个流程:
+
+![EngineConn新增流程](../../assets/docs/architecture/add_an_engineConn_flow_chart.png)
+
+## 一、LinkisManager接收客户端请求
+
+**名词解释**:
+
+- LinkisManager:是Linkis计算治理能力的管理中枢,主要的职责为:
+  1. 基于多级组合标签,为用户提供经过复杂路由、资源管控和负载均衡后的可用EngineConn;
+  
+  2. 提供EC和ECM的全生命周期管理能力;
+  
+  3. 为用户提供基于多级组合标签的多Yarn集群资源管理功能。主要分为 AppManager(应用管理器)、ResourceManager(资源管理器)、LabelManager(标签管理器)三大模块,能够支持多活部署,具备高可用、易扩展的特性。
+
+&nbsp;&nbsp;&nbsp;&nbsp;AM模块接收到Client的新增EngineConn请求后,首先会对请求做参数校验,判断请求参数的合法性;其次是通过复杂规则选中一台最合适的EngineConnManager(ECM),以用于后面的EngineConn启动;接下来会向RM申请启动该EngineConn需要的资源;最后是向ECM请求创建EngineConn。
+
+下面将对四个步骤进行详细说明。
+
+### 1. 请求参数校验
+
+&nbsp;&nbsp;&nbsp;&nbsp;AM模块在接受到引擎创建请求后首先会做参数判断,首先会做请求用户和创建用户的权限判断,接着会对请求带上的Label进行检查。因为在AM后续的创建流程当中,Label会用来查找ECM和进行资源信息记录等,所以需要保证拥有必须的Label,现阶段一定需要带上的Label有UserCreatorLabel(例:hadoop-IDE)和EngineTypeLabel(例:spark-2.4.3)。
+
+### 2. EngineConnManager(ECM)选择
+
+&nbsp;&nbsp;&nbsp;&nbsp;ECM选择主要是完成通过客户端传递过来的Label去选择一个合适的ECM服务去启动EngineConn。这一步中首先会通过LabelManager去通过客户端传递过来的Label去注册的ECM中进行查找,通过按照标签匹配度进行顺序返回。在获取到注册的ECM列表后,会对这些ECM进行规则选择,现阶段已经实现有可用性检查、资源剩余、机器负载等规则。通过规则选择后,会将标签最匹配、资源最空闲、负载低的ECM进行返回。
+
+### 3. EngineConn资源申请
+
+1. 在获取到分配的ECM后,AM接着会通过调用EngineConnPluginServer服务请求本次客户端的引擎创建请求会使用多少的资源,这里会通过封装资源请求,主要包含Label、Client传递过来的EngineConn的启动参数、以及从Configuration模块获取到用户配置参数,通过RPC调用ECP服务去获取本次的资源信息。
+
+2. EngineConnPluginServer服务在接收到资源请求后,会先通过传递过来的标签找到对应的引擎标签,通过引擎标签选择对应引擎的EngineConnPlugin。然后通过EngineConnPlugin的资源生成器,对客户端传入的引擎启动参数进行计算,算出本次申请新EngineConn所需的资源,然后返回给LinkisManager。
+   
+   **名词解释:**
+- EngineConnPlugin:是Linkis对接一个新的计算存储引擎必须要实现的接口,该接口主要包含了这种EngineConn在启动过程中必须提供的几个接口能力,包括EngineConn资源生成器、EngineConn启动命令生成器、EngineConn引擎连接器。具体的实现可以参考Spark引擎的实现类:[SparkEngineConnPlugin](https://github.com/WeBankFinTech/Linkis/blob/master/linkis-engineconn-plugins/engineconn-plugins/spark/src/main/scala/com/webank/wedatasphere/linkis/engineplugin/spark/SparkEngineConnPlugin.scala)。
+
+- EngineConnPluginServer:是加载了所有的EngineConnPlugin,对外提供EngineConn的所需资源生成能力和EngineConn的启动命令生成能力的微服务。
+
+- EngineConnPlugin资源生成器(EngineConnResourceFactory):通过传入的参数,计算出本次EngineConn启动时需要的总资源。
+
+- EngineConn启动命令生成器(EngineConnLaunchBuilder):通过传入的参数,生成该EngineConn的启动命令,以提供给ECM去启动引擎。
+3. AM在获取到引擎资源后,会接着调用RM服务去申请资源,RM服务会通过传入的Label、ECM、本次申请的资源,去进行资源判断。首先会判断客户端对应Label的资源是否足够,然后再会判断ECM服务的资源是否足够,如果资源足够,则本次资源申请通过,并对对应的Label进行资源的加减。
+
+### 4. 请求ECM创建引擎
+
+1. 在完成引擎的资源申请后,AM会封装引擎启动的请求,通过RPC发送给对应的ECM进行服务启动,并获取到EngineConn的实例对象;
+2. AM接着会去通过EngineConn的上报信息判断EngineConn是否启动成功变成可用状态,如果是就会将结果进行返回,本次新增引擎的流程也就结束。
+
+## 二、 ECM启动EngineConn
+
+名词解释:
+
+- EngineConnManager(ECM):EngineConn的管理器,提供引擎的生命周期管理,同时向RM汇报负载信息和自身的健康状况。
+
+- EngineConnBuildRequest:LinkisManager传递给ECM的启动引擎命令,里面封装了该引擎的所有标签信息、所需资源和一些参数配置信息。
+
+- EngineConnLaunchRequest:包含了启动一个EngineConn所需的BML物料、环境变量、ECM本地必需环境变量、启动命令等信息,让ECM可以依此构建出一个完整的EngineConn启动脚本。
+
+ECM接收到LinkisManager传递过来的EngineConnBuildRequest命令后,主要分为三步来启动EngineConn:1. 请求EngineConnPluginServer,获取EngineConnPluginServer封装出的EngineConnLaunchRequest;2. 解析EngineConnLaunchRequest,封装成EngineConn启动脚本;3. 执行启动脚本,启动EngineConn。
+
+### 2.1 EngineConnPluginServer封装EngineConnLaunchRequest
+
+通过EngineConnBuildRequest的标签信息,拿到实际需要启动的EngineConn类型和对应版本,从EngineConnPluginServer的内存中获取到该EngineConn类型的EngineConnPlugin,通过该EngineConnPlugin的EngineConnLaunchBuilder,将EngineConnBuildRequest转换成EngineConnLaunchRequest。
+
+### 2.2 封装EngineConn启动脚本
+
+ECM获取到EngineConnLaunchRequest之后,将EngineConnLaunchRequest中的BML物料下载到本地,并检查EngineConnLaunchRequest要求的本地必需环境变量是否存在,校验通过后,将EngineConnLaunchRequest封装成一个EngineConn启动脚本。
+
+### 2.3 执行启动脚本
+
+目前ECM只对Unix系统做了Bash命令的支持,即只支持Linux系统执行该启动脚本。
+
+启动前,会通过sudo命令,切换到对应的请求用户去执行该脚本,确保启动用户(即JVM用户)为Client端的请求用户。
+
+执行该启动脚本后,ECM会实时监听脚本的执行状态和执行日志,一旦执行状态返回非0,则立马向LinkisManager汇报EngineConn启动失败,整个流程完成;否则则一直监听启动脚本的日志和状态,直到该脚本执行完成。
+
+## 三、EngineConn初始化
+
+ECM执行了EngineConn的启动脚本后,EngineConn微服务正式启动。
+
+名词解释:
+
+- EngineConn微服务:指包含了一个EngineConn、一个或多个Executor,用于对计算任务提供计算能力的实际微服务。我们说的新增一个EngineConn,其实指的就是新增一个EngineConn微服务。
+
+- EngineConn:引擎连接器,是与底层计算存储引擎的实际连接单元,包含了与实际引擎的会话信息。它与Executor的差别,是EngineConn只是起到一个连接、一个客户端的作用,并不真正的去执行计算。如SparkEngineConn,其会话信息为SparkSession。
+
+- Executor:执行器,作为真正的计算存储场景执行器,是实际的计算存储逻辑执行单元,对EngineConn各种能力的具体抽象,提供交互式执行、订阅式执行、响应式执行等多种不同的架构能力。
+
+EngineConn微服务的初始化一般分为三个阶段:
+
+1. 初始化具体引擎的EngineConn。先通过Java main方法的命令行参数,封装出一个包含了相关标签信息、启动信息和参数信息的EngineCreationContext,通过EngineCreationContext初始化EngineConn,完成EngineConn与底层Engine的连接建立,如:SparkEngineConn会在该阶段初始化一个SparkSession,用于与一个Spark application建立了连通关系。
+
+2. 初始化Executor。EngineConn初始化之后,接下来会根据实际的使用场景,初始化对应的Executor,为接下来的用户使用,提供服务能力。比如:交互式计算场景的SparkEngineConn,会初始化一系列可以用于提交执行SQL、PySpark、Scala代码能力的Executor,支持Client往该SparkEngineConn提交执行SQL、PySpark、Scala等代码。
+
+3. 定时向LinkisManager汇报心跳,并等待EngineConn结束退出。当EngineConn对应的底层引擎异常、或是超过最大空闲时间、或是Executor执行完成、或是用户手动kill时,该EngineConn自动结束退出。
+
+----
+
+到了这里,EngineConn的新增流程就基本结束了,最后我们再来总结一下EngineConn的新增流程:
+
+- 客户端向LinkisManager发起新增EngineConn的请求;
+
+- LinkisManager校验参数合法性,先是根据标签选择合适的ECM,再根据用户请求确认本次新增EngineConn所需的资源,向LinkisManager的RM模块申请资源,申请通过后要求ECM按要求启动一个新的EngineConn;
+
+- ECM先请求EngineConnPluginServer获取一个包含了启动一个EngineConn所需的BML物料、环境变量、ECM本地必需环境变量、启动命令等信息的EngineConnLaunchRequest,然后封装出EngineConn的启动脚本,最后执行启动脚本,启动该EngineConn;
+
+- EngineConn初始化具体引擎的EngineConn,然后根据实际的使用场景,初始化对应的Executor,为接下来的用户使用,提供服务能力。最后定时向LinkisManager汇报心跳,等待正常结束或被用户终止。
diff --git a/src/docs/architecture/DifferenceBetween1.0&0.x_en.md b/src/docs/architecture/DifferenceBetween1.0&0.x_en.md
new file mode 100644
index 0000000..8333cac
--- /dev/null
+++ b/src/docs/architecture/DifferenceBetween1.0&0.x_en.md
@@ -0,0 +1,50 @@
+## 1. Brief Description
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;First of all, the Entrance and EngineConnManager (formerly EngineManager) services under the Linkis1.0 architecture are completely unrelated to the engine. That is, under the Linkis1.0 architecture, each engine no longer needs to implement and start its own Entrance and EngineConnManager; every Entrance and EngineConnManager in Linkis1.0 can be shared by all engines.  
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Secondly, Linkis1.0 added the Linkis-Manager service to provide external AppManager (application management), ResourceManager (resource management, the original ResourceManager service) and LabelManager (label management) capabilities.  
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Then, in order to reduce the difficulty of implementing and deploying a new engine, Linkis 1.0 re-architected a module called EngineConnPlugin. Each new engine only needs to implement the EngineConnPlugin interface. Linkis EngineConnPluginServer supports dynamically loading EngineConnPlugins (new engines) in the form of plug-ins. Once EngineConnPluginServer is successfully loaded, EngineConnManager can quickly start an instance of that engine for the user.  
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Finally, all the microservices of Linkis are summarized and classified into three major levels: public enhancement services, computing governance services and microservice governance services, standardizing the Linkis1.0 microservice system in terms of code hierarchy, microservice naming, installation directory structure, and so on.  
+## 2. Main Features
+1. **Strengthen computing governance**, Linkis 1.0 mainly strengthens the comprehensive management and control capabilities of computing governance in engine management, label management, ECM management, and resource management. Its powerful label-based control design takes Linkis 1.0 a solid step towards multi-IDC, multi-cluster, and multi-container deployment.  
+2. **Simplify the implementation of new engines**, EngineConnPlugin merges the interfaces and classes that previously had to be implemented for a new engine, together with the Entrance-EngineManager-Engine three-tier module system that had to be split, into a single interface, simplifying the process and code of implementing a new engine so that a new engine can be connected by implementing just one class.  
+3. **Full-stack computing storage engine support**, achieving full coverage of computing request scenarios (such as Spark), storage request scenarios (such as HBase), and resident cluster services (such as SparkStreaming).  
+4. **Improved advanced computing strategy capability**, adding Orchestrator to implement rich computing task management strategies, with support for label-based parsing and orchestration.  
+## 3. Service Comparison
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Please refer to the following two pictures. The list of Linkis0.X microservices is as follows:  
+![Linkis0.X Service List](../../assets/docs/architecture/Linkis0.X_services_list.png)  
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;The list of Linkis1.0 microservices is as follows:  
+![Linkis1.0 Service List](../../assets/docs/architecture/Linkis1.0_services_list.png)  
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;As the above two figures show, Linkis1.0 divides services into three types: Computing Governance (CG), Microservice Governance (MG), and Public Enhanced Services (PS). Among them:  
+1. A major change in computing governance is that Entrance and EngineConnManager services are no longer related to engines. To implement a new engine, only the EngineConnPlugin plug-in needs to be implemented. EngineConnPluginServer will dynamically load the EngineConnPlugin plug-in to achieve engine hot-plug update;
+2. Another major change in computing governance is that LinkisManager, as the management brain of Linkis, abstracts and defines AppManager (application management), ResourceManager (resource management) and LabelManager (label management);
+3. Microservice management service, merged and unified the Eureka and Gateway services in the 0.X part, and enhanced the functions of the Gateway service to support routing and forwarding according to Label;
+4. Public enhancement services, mainly to optimize and unify the BML services/context services/data source services/public services of the 0.X part, which is convenient for everyone to manage and view.  
+## 4. Introduction To Linkis Manager
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;As the management brain of Linkis, Linkis Manager is mainly composed of AppManager, ResourceManager and LabelManager.  
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;ResourceManager not only has Linkis0.X's resource management capabilities for Yarn and Linkis EngineManager, but also provides tag-based multi-level resource allocation and recycling capabilities, allowing ResourceManager to have full resource management capabilities across clusters and across computing resource types.  
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;AppManager will coordinate and manage all EngineConnManager and EngineConn, and the life cycle of EngineConn application, reuse, creation, switching, and destruction will be handed over to AppManager for management.  
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;The LabelManager will provide cross-IDC and cross-cluster EngineConn and EngineConnManager routing and management capabilities based on multi-level combined tags.  
+## 5. Introduction To Linkis EngineConnPlugin
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;EngineConnPlugin is mainly used to reduce the cost of accessing and deploying new computing storage engines. It truly enables users to "implement just one class to connect a new computing storage engine; execute just one script to quickly deploy a new engine".  
+### 5.1 New Engine Implementation Comparison
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;The following are the relevant interfaces and classes that a user needs to implement in Linkis0.X for a new engine:  
+![Linkis0.X How to implement a brand new engine](../../assets/docs/architecture/Linkis0.X_newengine_architecture.png)  
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;The following are the interfaces and classes that a user needs to implement for a new engine in Linkis 1.0.0:  
+![Linkis1.0 How to implement a brand new engine](../../assets/docs/architecture/Linkis1.0_newengine_architecture.png)  
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Among them, EngineConnResourceFactory and EngineLaunchBuilder are optional interfaces; only EngineConnFactory is required.  
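+
+To give a feel for how little is required, here is a toy Scala sketch in the shape of that single required factory (simplified signatures; the real EngineConnFactory interface differs in detail):
+
+```scala
+trait EngineConn { def getEngineSession: AnyRef }
+trait Executor   { def execute(code: String): String }
+
+// A toy "echo" engine: one class wires up both the connection and the executor.
+class EchoEngineConnFactory {
+  // Build the connection/session to the underlying engine.
+  def createEngineConn(): EngineConn = new EngineConn {
+    val getEngineSession: AnyRef = "echo-session" // stands in for e.g. a SparkSession
+  }
+  // Build the executor that actually runs user code through that session.
+  def createExecutor(conn: EngineConn): Executor = new Executor {
+    def execute(code: String): String = s"echo: $code"
+  }
+}
+```
+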
+### 5.2 New engine startup process
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;EngineConnPlugin provides the Server service to start and load all engine plug-ins. The following shows the entire startup flow of a new engine through EngineConnPlugin-Server:  
+![Linkis Engine start process](../../assets/docs/architecture/Linkis1.0_newengine_initialization.png)  
+## 6. Introduction To Linkis EngineConn
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;EngineConn, the original Engine module, is the actual unit for Linkis to connect and interact with the underlying computing storage engine, and is the basis for Linkis to provide computing and storage capabilities.  
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;The EngineConn of Linkis1.0 is mainly composed of EngineConn and Executor. Among them:  
+
+1. EngineConn is the connector, which contains the session information between the engine and the specific cluster. It only acts as a connection, a client, and does not actually perform calculations.  
+
+2. Executor is the executor. As the real executor for a computing scenario, it is the actual computing logic execution unit, and it also abstracts the various specific capabilities of the engine, providing services such as locking, access status, and log acquisition.
+
+3. Executor is created by the session information in EngineConn. An engine type can support multiple different types of computing tasks, each corresponding to the implementation of an Executor, and the computing task will be submitted to the corresponding Executor for execution. In this way, the same engine can provide different services according to different computing scenarios. For example, the permanent engine does not need to be locked after it is started, and the one-time engine does not need to support Receiver and access status after startup.  
+
+4. The advantage of separating Executor and EngineConn is that it prevents the Receiver from coupling business logic and keeps only the RPC communication function. Services are distributed across multiple Executor modules and abstracted into several categories of engines that may be needed, such as interactive computing engines, streaming engines, and one-off engines, building a unified engine framework for later expansion.
+In this way, different types of engines can load only the capabilities they need, which greatly reduces the redundancy of engine implementations.  
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;As shown below:  
+![Linkis EngineConn Architecture diagram](../../assets/docs/architecture/Linkis1.0_engineconn_architecture.png)
diff --git a/src/docs/architecture/DifferenceBetween1.0&0.x_zh.md b/src/docs/architecture/DifferenceBetween1.0&0.x_zh.md
new file mode 100644
index 0000000..df41d45
--- /dev/null
+++ b/src/docs/architecture/DifferenceBetween1.0&0.x_zh.md
@@ -0,0 +1,98 @@
+## 1. 简述
+
+&nbsp;&nbsp;&nbsp;&nbsp;  首先,Linkis1.0 架构下的 Entrance 和 EngineConnManager(原EngineManager)服务与 **引擎** 已完全无关,即:
+                             在 Linkis1.0 架构下,每个引擎无需再配套实现并启动对应的 Entrance 和 EngineConnManager,Linkis1.0 的每个 Entrance 和 EngineConnManager 都可以给所有引擎共用。
+                          
+&nbsp;&nbsp;&nbsp;&nbsp;  其次,Linkis1.0 新增了Linkis-Manager服务用于对外提供 AppManager(应用管理)、ResourceManager(资源管理,原ResourceManager服务)和 LabelManager(标签管理)的能力。
+
+&nbsp;&nbsp;&nbsp;&nbsp;  然后,为了降低大家实现和部署一个新引擎的难度,Linkis 1.0 重新架构了一个叫 EngineConnPlugin 的模块,每个新引擎只需要实现 EngineConnPlugin 接口即可,
+Linkis EngineConnPluginServer 支持以插件的形式动态加载 EngineConnPlugin(新引擎),一旦 EngineConnPluginServer 加载成功,EngineConnManager 便可为用户快速启动一个该引擎实例。
+                          
+&nbsp;&nbsp;&nbsp;&nbsp;  最后,对Linkis的所有微服务进行了归纳分类,总体分为了三个大层次:公共增强服务、计算治理服务和微服务治理服务,从代码层级结构、微服务命名和安装目录结构等多个方面来规范Linkis1.0的微服务体系。
+
+
+##  2. 主要特点
+
+1.  **强化计算治理**,Linkis1.0主要从引擎管理、标签管理、ECM管理和资源管理等几个方面,全面强化了计算治理的综合管控能力,基于标签化的强大管控设计理念,使得Linkis1.0向多IDC化、多集群化、多容器化,迈出了坚实的一大步。
+
+2.  **简化用户实现新引擎**,EnginePlugin用于将原本实现一个新引擎,需要实现的相关接口和类,以及需要拆分的Entrance-EngineManager-Engine三层模块体系,融合到了一个接口之中,简化用户实现新引擎的流程和代码,真正做到只要实现一个类,就能接入一个新引擎。
+
+3.  **全栈计算存储引擎支持**,实现对计算请求场景(如Spark)、存储请求场景(如HBase)和常驻集群型服务(如SparkStreaming)的全面覆盖支持。
+
+4.  **高级计算策略能力改进**,新增Orchestrator实现丰富计算任务管理策略,且支持基于标签的解析和编排。
+
+5.  **安装部署改进**  优化一键安装脚本,支持容器化部署,简化用户配置。
+
+## 3. 服务对比
+
+&nbsp;&nbsp;&nbsp;&nbsp;  请参考以下两张图:
+
+&nbsp;&nbsp;&nbsp;&nbsp;  Linkis0.X 微服务列表如下:
+
+![Linkis0.X服务列表](../../assets/docs/architecture/Linkis0.X_services_list.png)
+
+&nbsp;&nbsp;&nbsp;&nbsp;  Linkis1.0 微服务列表如下:
+
+![Linkis1.0服务列表](../../assets/docs/architecture/Linkis1.0_services_list.png)
+
+&nbsp;&nbsp;&nbsp;&nbsp;  从上面两个图中看,Linkis1.0 将服务分为了三类服务:计算治理(英文缩写CG)/微服务治理(MG)/公共增强服务(PS)。其中:
+
+1. 计算治理的一大变化是,Entrance 和 EngineConnManager服务与引擎再不相关,实现一个新引擎只需实现 EngineConnPlugin插件即可,EngineConnPluginServer会动态加载 EngineConnPlugin 插件,做到引擎热插拔式更新;
+
+2. 计算治理的另一大变化是,LinkisManager作为 Linkis 的管理大脑,抽象和定义了 AppManager(应用管理)、ResourceManager(资源管理)和LabelManager(标签管理);
+
+3. 微服务治理服务,将0.X部分的Eureka和Gateway服务进行了归并统一,并对Gateway服务进行了功能增强,支持按照Label进行路由转发;
+
+4. 公共增强服务,主要将0.X部分的BML服务/上下文服务/数据源服务/公共服务进行了优化和归并统一,便于大家管理和查看。
+
+## 4. Linkis Manager简介
+
+&nbsp;&nbsp;&nbsp;&nbsp;  Linkis Manager 作为 Linkis 的管理大脑,主要由 AppManager、ResourceManager 和 LabelManager 组成。
+
+&nbsp;&nbsp;&nbsp;&nbsp;  ResourceManager 不仅具备 Linkis0.X 对 Yarn 和 Linkis EngineManager 的资源管理能力,还将提供基于标签的多级资源分配和回收能力,让 ResourceManager 具备跨集群、跨计算资源类型的全资源管理能力;
+
+&nbsp;&nbsp;&nbsp;&nbsp;  AppManager 将统筹管理所有的 EngineConnManager 和 EngineConn,EngineConn 的申请、复用、创建、切换、销毁等生命周期全交予 AppManager进行管理;
+
+&nbsp;&nbsp;&nbsp;&nbsp;  而 LabelManager 将基于多级组合标签,提供跨IDC、跨集群的 EngineConn 和 EngineConnManager 路由和管控能力;
+
+## 5. Linkis EngineConnPlugin简介
+
+&nbsp;&nbsp;&nbsp;&nbsp;  EngineConnPlugin 主要用于降低新计算存储的接入和部署成本,真正做到让用户“只需实现一个类,就能接入一个全新计算存储引擎;只需执行一下脚本,即可快速部署一个全新引擎”。
+
+### 5.1 新引擎实现对比
+
+&nbsp;&nbsp;&nbsp;&nbsp;  以下是用户Linkis0.X实现一个新引擎需要实现的相关接口和类:
+
+![Linkis0.X 如何实现一个全新引擎](../../assets/docs/architecture/Linkis0.X_newengine_architecture.png)
+
+&nbsp;&nbsp;&nbsp;&nbsp;  以下为Linkis1.0.0,实现一个新引擎,用户需实现的接口和类:
+
+![Linkis1.0 如何实现一个全新引擎](../../assets/docs/architecture/Linkis1.0_newengine_architecture.png)
+
+&nbsp;&nbsp;&nbsp;&nbsp;  其中EngineConnResourceFactory和EngineLaunchBuilder为非必需实现接口,只有EngineConnFactory为必需实现接口。
+
+### 5.2 新引擎启动流程
+
+&nbsp;&nbsp;&nbsp;&nbsp;  EngineConnPlugin 提供了 Server 服务,用于启动和加载所有的引擎插件,以下给出了一个新引擎启动,访问了 EngineConnPlugin-Server 的全部流程:
+
+![Linkis 引擎启动流程](../../assets/docs/architecture/Linkis1.0_newengine_initialization.png)
+
+## 6. Linkis EngineConn简介
+
+&nbsp;&nbsp;&nbsp;&nbsp;  EngineConn,即原 Engine 模块,作为 Linkis 与底层计算存储引擎进行连接和交互的实际单元,是 Linkis 提供计算存储能力的基础。
+
+&nbsp;&nbsp;&nbsp;&nbsp;  Linkis1.0 的 EngineConn 主要由 EngineConn 和 Executor构成。其中:
+
+a)	EngineConn 为连接器,包含引擎与具体集群的会话信息。它只是起到一个连接,一个客户端的作用,并不真正的去执行计算。
+
+b)	Executor 为执行器,作为真正的计算场景执行器,是实际的计算逻辑执行单元,也对引擎各种具体能力的抽象,例如提供加锁、访问状态、获取日志等多种不同的服务。
+
+c)	Executor 通过 EngineConn 中的会话信息进行创建,一个引擎类型可以支持多种不同种类的计算任务,每种对应一个 Executor 的实现,计算任务将被提交到对应的 Executor 进行执行。
+这样,同一个引擎能够根据不同的计算场景提供不同的服务。比如常驻式引擎启动后不需要加锁,一次性引擎启动后不需要支持 Receiver 和访问状态等。
+
+d)	采用 Executor 和 EngineConn 分离的方式的好处是,可以避免 Receiver 耦合业务逻辑,本身只保留 RPC 通信功能。将服务分散在多个 Executor 模块中,并且抽象成几大类引擎:交互式计算引擎、流式引擎、一次性引擎等等可能用到的,构建成统一的引擎框架,便于后期的扩充。
+这样不同类型引擎可以根据需要分别加载其中需要的能力,大大减少引擎实现的冗余。
+
+&nbsp;&nbsp;&nbsp;&nbsp;  如下图所示:
+
+![Linkis EngineConn架构图](../../assets/docs/architecture/Linkis1.0_engineconn_architecture.png)
diff --git a/src/docs/architecture/JobSubmission_en.md b/src/docs/architecture/JobSubmission_en.md
new file mode 100644
index 0000000..13c70f1
--- /dev/null
+++ b/src/docs/architecture/JobSubmission_en.md
@@ -0,0 +1,138 @@
+# Job submission, preparation and execution process
+
+The submission and execution of computing tasks (Jobs) is the core capability provided by Linkis. It interacts with almost all modules in the Linkis computing governance architecture and occupies a core position in Linkis.
+
+The whole process, starting with the user's computing task being submitted from the client and ending with the final result being returned, is divided into three stages: submission -> preparation -> execution. The details are shown in the following figure.
+
+![The overall flow chart of computing tasks](../../assets/docs/architecture/JobSubmission/overall.png)
+
+Among them:
+
+- Entrance, as the entrance to the submission stage, provides task reception, scheduling and job information forwarding capabilities. It is the unified entrance for all computing tasks. It will forward computing tasks to Orchestrator for scheduling and execution.
+- Orchestrator, as the entrance to the preparation phase, mainly provides job analysis, orchestration and execution capabilities.
+- Linkis Manager: The management center of computing governance capabilities. Its main responsibilities are as follows:
+
+  1. ResourceManager: not only has the resource management capabilities of Yarn and Linkis EngineConnManager, but also provides label-based multi-level resource allocation and recovery capabilities, giving ResourceManager full resource management capabilities across clusters and across computing resource types;
+  2. AppManager: coordinates and manages all EngineConnManager and EngineConn instances; the life cycle of EngineConn application, reuse, creation, switching, and destruction is handed over to AppManager for management;
+  3. LabelManager: based on multi-level combined labels, provides label support for the routing and management of EngineConn and EngineConnManager across IDCs and clusters;
+  4. EngineConnPluginServer: externally provides the resource generation capabilities required to start an EngineConn and the EngineConn startup command generation capabilities.
+- EngineConnManager: It is the manager of EngineConn, which provides engine life-cycle management, and at the same time reports load information and its own health status to RM.
+- EngineConn: It is the actual connector between Linkis and the underlying computing storage engines. All user computing and storage tasks will eventually be submitted to the underlying computing storage engine by EngineConn. According to different user scenarios, EngineConn provides full-stack computing capability framework support for interactive computing, streaming computing, off-line computing, and data storage tasks.
+
+## 1. Submission Stage
+
+The submission phase is mainly the interaction of Client -> Linkis Gateway -> Entrance, and the process is as follows:
+
+![Flow chart of submission phase](../../assets/docs/architecture/JobSubmission/submission.png)
+
+1. First, the Client (such as the front end or a client) initiates a Job request, and the job request information is simplified as follows (for the specific usage of Linkis, please refer to [How to use Linkis](#/docs/manual/HowToUse)):
+```
+POST /api/rest_j/v1/entrance/submit
+```
+
+```json
+{
+    "executionContent": {"code": "show tables", "runType": "sql"},
+    "params": {"variable": {}, "configuration": {}},  //非必须
+    "source": {"scriptPath": "file:///1.hql"}, //非必须,仅用于记录代码来源
+    "labels": {
+        "engineType": "spark-2.4.3",  //指定引擎
+        "userCreator": "johnnwnag-IDE"  // 指定提交用户和提交系统
+    }
+}
+```
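+
+For illustration, the same request can also be issued programmatically. The Scala sketch below uses Java's built-in HTTP client; the gateway address and the omission of authentication are assumptions (a real deployment requires a login cookie or token):
+
+```scala
+import java.net.URI
+import java.net.http.{HttpClient, HttpRequest, HttpResponse}
+
+object SubmitJobExample {
+  def main(args: Array[String]): Unit = {
+    val body =
+      """{"executionContent": {"code": "show tables", "runType": "sql"},
+        |"labels": {"engineType": "spark-2.4.3", "userCreator": "johnnwnag-IDE"}}""".stripMargin
+    val request = HttpRequest.newBuilder()
+      .uri(URI.create("http://linkis-gateway:9001/api/rest_j/v1/entrance/submit")) // assumed address
+      .header("Content-Type", "application/json")
+      .POST(HttpRequest.BodyPublishers.ofString(body))
+      .build()
+    val response = HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString())
+    println(response.body()) // on success, contains the task ID to poll for status
+  }
+}
+```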
+
+2. After Linkis-Gateway receives the request, according to the serviceName in the URI ``/api/rest_j/v1/${serviceName}/.+``, it confirms the microservice name for routing and forwarding. Here Linkis-Gateway parses out the name as entrance, and the Job is forwarded to the Entrance microservice. Note that if the user specifies a routing label, the Entrance microservice instance with the corresponding label will be selected for forwarding according to the routing label instead of being forwarded randomly.
+3. After Entrance receives the Job request, it first briefly verifies the legitimacy of the request, then uses RPC to call JobHistory to persist the job information, encapsulates the Job request as a computing task, puts it in the scheduling queue, and waits for it to be consumed by a consumption thread.
+4. The scheduling queue opens up a consumption queue and a consumption thread for each group. The consumption queue stores the preliminarily encapsulated user computing tasks, and the consumption thread continuously takes computing tasks from the consumption queue for consumption in a FIFO manner. The current default grouping method is Creator + User (that is, submission system + user). Therefore, even for the same user, computing tasks submitted by different systems go to different consumption queues and threads, and their consumption does not affect each other (see the sketch after this list).
+5. After the consumption thread takes out a computing task, it submits the task to Orchestrator, which officially enters the preparation phase.
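+
+The grouping in step 4 can be pictured with this illustrative Scala sketch (not Entrance's actual code): one FIFO queue plus one consumer thread per creator-user group, so tasks from different submission systems never block each other.
+
+```scala
+import java.util.concurrent.LinkedBlockingQueue
+import scala.collection.concurrent.TrieMap
+
+object GroupedScheduler {
+  case class SubmitTask(creator: String, user: String, code: String)
+
+  private val groups = TrieMap.empty[String, LinkedBlockingQueue[SubmitTask]]
+
+  def submit(task: SubmitTask): Unit = {
+    val key = s"${task.creator}-${task.user}"             // e.g. "IDE-hadoop"
+    val queue = groups.getOrElseUpdate(key, startConsumer(key))
+    queue.put(task)                                       // FIFO within the group
+  }
+
+  private def startConsumer(key: String): LinkedBlockingQueue[SubmitTask] = {
+    val queue = new LinkedBlockingQueue[SubmitTask]()
+    val consumer = new Thread(() => while (true) {
+      val task = queue.take()                             // blocks until a task arrives
+      // Here the real consumer would hand the task over to Orchestrator.
+      println(s"[$key] consuming: ${task.code}")
+    })
+    consumer.setDaemon(true); consumer.start()
+    queue
+  }
+}
+```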
+
+## 2. Preparation Stage
+
+There are two main processes in the preparation phase. One is to apply to LinkisManager for an available EngineConn to which the computing task will be submitted for execution. The other is for Orchestrator to orchestrate the computing task submitted by Entrance, converting the user's computing request into a physical execution tree that is handed over to the execution phase, where the computing task is actually executed.
+
+#### 2.1 Apply to LinkisManager for an available EngineConn
+
+If the user has a reusable EngineConn in LinkisManager, the EngineConn is directly locked and returned to Orchestrator, and the entire application process ends.
+
+What counts as a reusable EngineConn? One that matches all the label requirements of the computing task and whose own health status is Healthy (the load is low and the actual status is Idle). All EngineConns that meet these conditions are then sorted and selected according to the rules, and finally the best one is locked.
+
+If the user does not have a reusable EngineConn, a process to request a new EngineConn will be triggered at this time. Regarding the process, please refer to: [How to add an EngineConn](/#/docs/architecture/AddEngineConn).
+
+#### 2.2 Orchestrate a computing task
+
+Orchestrator is mainly responsible for arranging a computing task (JobReq) into a physical execution tree (PhysicalTree) that can be actually executed, and providing the execution capabilities of the Physical tree.
+
+Here we first focus on Orchestrator's computing task orchestration capability. A flow chart is shown below:
+
+![Orchestration flow chart](../../assets/docs/architecture/JobSubmission/orchestrate.png)
+
+The main process is as follows:
+
+- Converter: Converts the JobReq (job request) submitted by the user into Orchestrator's ASTJob. This step performs parameter checking and information supplementation on the submitted computing task, such as variable substitution.
+- Parser: Parses the ASTJob, splitting it into an AST tree composed of ASTJob and ASTStage.
+- Validator: Checks and supplements information on the ASTJob and ASTStage, such as code checks and supplementing required Label information.
+- Planner: Converts the AST tree into a Logical tree. The Logical tree at this point is composed of LogicalTasks and contains all the execution logic of the entire computing task.
+- Optimizer: Converts the Logical tree into a Physical tree and optimizes the Physical tree.
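+
+To make the data flow of these five phases explicit, here is a sketch with hypothetical one-method-per-phase signatures (the real Orchestrator interfaces differ, but the overall shape JobReq -> ASTJob -> AST tree -> Logical tree -> Physical tree is the same):
+
+```java
+// Marker types standing in for the real Orchestrator data structures.
+class JobReq {}
+class AstJob {}
+class AstTree {}
+class LogicalTree {}
+class PhysicalTree {}
+
+interface Converter { AstJob convert(JobReq jobReq); }           // parameter check, info supplement
+interface Parser    { AstTree parse(AstJob astJob); }            // split into ASTJob + ASTStage tree
+interface Validator { AstTree validate(AstTree astTree); }       // code checks, required Labels
+interface Planner   { LogicalTree plan(AstTree astTree); }       // AST tree -> Logical tree
+interface Optimizer { PhysicalTree optimize(LogicalTree tree); } // Logical tree -> Physical tree
+```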
+
+In a Physical tree, most of the nodes embody computing strategy logic; only the ExecTask in the middle truly encapsulates the execution logic that submits the user's computing task to EngineConn for execution. As shown below:
+
+![Physical Tree](../../assets/docs/architecture/JobSubmission/physical_tree.png)
+
+Different computing strategies have different execution logics encapsulated by JobExecTask and StageExecTask in the Physical tree.
+
+For example, under the multi-active computing strategy, for a computing task submitted by a user, the execution logic that submits it to the EngineConns of different clusters is encapsulated in two ExecTasks, and the related strategy logic is reflected in the parent node of the two ExecTasks, StageExecTask(End).
+
+Here, we take the multi-reading scenario under the multi-active computing strategy as an example.
+
+In the multi-reading scenario, only one ExecTask needs to return a result. Once that result is returned, the Physical tree can be marked as successful. However, the Physical tree only has the ability to execute sequentially according to dependencies and cannot terminate the execution of an individual node; moreover, once any node is canceled or fails to execute, the entire Physical tree is marked as failed. At this time, StageExecTask(End) is needed to do some special handling, to ensure that the Physical tree can not only cancel the other ExecTask, but also pass the result set produced by the successful ExecTask upward, so that the Physical tree can continue executing. This is the computing strategy execution logic that StageExecTask represents.
+
+The orchestration process of Linkis Orchestrator is similar to that of many SQL parsing engines (such as the SQL parsers of Spark and Hive). But in fact, the orchestration capability of Linkis Orchestrator is implemented for the computing governance domain, targeting users' different computing governance needs, whereas a SQL parsing engine parses and orchestrates the SQL language. Here is a simple distinction:
+
+1. What Linkis Orchestrator mainly solves are the orchestration requirements that different computing tasks place on computing strategies. For example, if a user wants multi-active capability, Orchestrator will, for a computing task submitted by the user and based on the "multi-active" computing strategy requirement, orchestrate a Physical tree that submits the computing task to multiple clusters for execution. In the process of constructing the entire Physical tree, various possible abnormal scenarios have been fully considered, and all of them are reflected in the Physical tree.
+2. The orchestration capability of Linkis Orchestrator has nothing to do with the programming language. In theory, as long as an engine has been adapted to Linkis, all the programming languages it supports can be orchestrated. A SQL parsing engine, by contrast, only cares about the analysis and execution of SQL: it is only responsible for parsing a piece of SQL into an executable Physical tree and finally computing the result.
+3. Linkis Orchestrator also has the ability to parse SQL, but SQL parsing is just one of Orchestrator Parser's parsing implementations, for the SQL programming language. The Parser of Linkis Orchestrator is also considering introducing Apache Calcite to parse SQL, supporting the splitting of a user SQL that spans multiple computing engines (which must be computing engines Linkis has already integrated) into multiple sub-SQLs, submitting each to its corresponding engine during the execution phase, and finally selecting a suitable computing engine to aggregate the results.
+
+Please refer to [Orchestrator Architecture Design](https://github.com/WeBankFinTech/Linkis-Doc/blob/master/en_US/Architecture_Documents/Orchestrator/Orchestrator_architecture_doc.md) for more details. 
+
+After the parsing and orchestration by Linkis Orchestrator, the computing task has been transformed into an executable Physical tree. Orchestrator will submit the Physical tree to its Execution module, entering the final execution stage.
+
+## 3. Execution Stage
+
+The execution stage is mainly divided into the following two steps, which are the last two capability phases provided by Linkis Orchestrator:
+
+![Flow chart of the execution stage](../../assets/docs/architecture/JobSubmission/execution.png)
+
+The main process is as follows:
+
+- Execution: Analyze the dependencies of the Physical tree and execute the nodes sequentially, starting from the leaf nodes, according to those dependencies.
+- Reheater: Once the execution of a node in the Physical tree is completed, a reheat is triggered. Reheating allows the Physical tree to be dynamically adjusted according to the real-time execution status. For example: if a leaf node is detected to have failed and it supports retry (e.g., the failure was caused by throwing a ReTryExecption), the Physical tree is automatically adjusted and a retry parent node with exactly the same content is added above that leaf node.
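+
+As a toy model of that retry adjustment (illustrative only; the real Reheater operates on the LogicalTask/ExecTask nodes of the Physical tree):
+
+```java
+import java.util.ArrayList;
+import java.util.List;
+
+public class ReheatSketch {
+    static class Node {
+        final String content;
+        final List<Node> children = new ArrayList<>();
+        Node(String content) { this.content = content; }
+    }
+
+    // On a retryable failure, insert a retry parent with exactly the same
+    // content above the failed leaf, so that branch is executed again.
+    static Node addRetryParent(Node failedLeaf) {
+        Node retryParent = new Node(failedLeaf.content);
+        retryParent.children.add(failedLeaf);
+        return retryParent; // the caller re-links this node into the tree
+    }
+}
+```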
+
+Let us go back to the Execution stage, where we focus on the execution logic of the ExecTask node, which encapsulates the user's computing task submitted to EngineConn.
+
+1. As mentioned earlier, the first step in the preparation phase is to obtain a usable EngineConn from LinkisManager. After ExecTask gets this EngineConn, it will submit the user's computing task to EngineConn through an RPC request.
+2. After EngineConn receives the computing task, it will asynchronously submit it to the underlying computing storage engine through the thread pool, and then immediately return an execution ID.
+3. After ExecTask gets this execution ID, it can then use this ID to asynchronously pull the execution information of the computing task (such as status, progress, logs, result sets, etc.). A minimal sketch of this submit-then-poll pattern follows this list.
+4. At the same time, EngineConn will monitor the execution of the underlying computing storage engine in real time through multiple registered Listeners. If the computing storage engine does not support registering Listeners, EngineConn will start a daemon thread for the computing task and periodically pull the execution status from the computing storage engine.
+5. EngineConn will pull the execution information back in real time, through RPC requests, to the microservice where Orchestrator is located.
+6. After the Receiver of the microservice receives the execution status, it will broadcast it through the ListenerBus, and the Orchestrator Execution will consume the event and dynamically update the execution status of the Physical tree.
+7. The result set generated by the computing task is written to storage media such as HDFS on the EngineConn side. EngineConn returns only the result set path through RPC; Execution consumes the event and broadcasts the obtained result set path through ListenerBus, so that the Listener registered by Entrance with Orchestrator can consume the result set path and persist it to JobHistory.
+8. After the execution of the computing task on the EngineConn side is completed, through the same logic, the Execution will be triggered to update the state of the ExecTask node of the Physical tree, so that the Physical tree will continue to execute until the entire tree is completely executed. At this time, Execution will broadcast the completion status of the calculation task through ListenerBus.
+9. After the Listener that Entrance registered with Orchestrator consumes the state event, it updates the job state in JobHistory, and the entire task execution is completed.
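+
+Steps 2 and 3 boil down to a submit-then-poll pattern. A minimal sketch with assumed names, not the real EngineConn API:
+
+```java
+import java.util.Map;
+import java.util.UUID;
+import java.util.concurrent.*;
+
+public class AsyncSubmitSketch {
+    private final ExecutorService pool = Executors.newFixedThreadPool(4);
+    private final Map<String, Future<?>> running = new ConcurrentHashMap<>();
+
+    // Step 2: submit asynchronously and return an execution id immediately.
+    public String submit(Runnable task) {
+        String execId = UUID.randomUUID().toString();
+        running.put(execId, pool.submit(task));
+        return execId;
+    }
+
+    // Step 3: the caller polls with the id instead of blocking on the task.
+    public boolean isDone(String execId) {
+        Future<?> future = running.get(execId);
+        return future != null && future.isDone();
+    }
+}
+```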
+
+----
+
+Finally, let's take a look at how the client side learns the state of the computing task in time and obtains the computing result, as shown in the following figure:
+
+![Results acquisition process](../../assets/docs/architecture/JobSubmission/result_acquisition.png)
+
+The specific process is as follows:
+
+1. The client periodically polls to request Entrance to obtain the status of the computing task.
+2. Once the status flips to success, the client sends a request for job information to JobHistory and gets all the result set paths.
+3. Using the result set paths, initiate file-content query requests to PublicService and obtain the content of the result sets (a hedged sketch of this loop follows).
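+
+A hedged sketch of this polling loop is shown below. The status endpoint follows the same URI pattern as the submit interface but is an assumption here, and the JobHistory and PublicService endpoints are left as explicit placeholders; real clients should rely on the Linkis SDK, which wraps these calls:
+
+```java
+import java.net.URI;
+import java.net.http.HttpClient;
+import java.net.http.HttpRequest;
+import java.net.http.HttpResponse;
+
+public class PollingClientSketch {
+    static final String GATEWAY = "http://127.0.0.1:9001"; // placeholder gateway address
+
+    public static void main(String[] args) throws Exception {
+        HttpClient client = HttpClient.newHttpClient();
+        String execId = args[0]; // id returned by the submit call
+        // 1. Poll Entrance for the task status (endpoint shape and the
+        //    "Succeed" check are simplified assumptions).
+        while (!get(client, GATEWAY + "/api/rest_j/v1/entrance/" + execId + "/status").contains("Succeed")) {
+            Thread.sleep(2000);
+        }
+        // 2. Fetch the job info (result set paths) from JobHistory, then
+        // 3. fetch the file content from PublicService; both endpoints are
+        //    deployment-specific, so they are only placeholders here.
+        String jobInfo = get(client, GATEWAY + "/PLACEHOLDER_JOBHISTORY_ENDPOINT");
+        String result = get(client, GATEWAY + "/PLACEHOLDER_OPEN_FILE_ENDPOINT");
+        System.out.println(jobInfo);
+        System.out.println(result);
+    }
+
+    static String get(HttpClient client, String url) throws Exception {
+        HttpRequest req = HttpRequest.newBuilder().uri(URI.create(url)).GET().build();
+        return client.send(req, HttpResponse.BodyHandlers.ofString()).body();
+    }
+}
+```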
+
+At this point, the entire process of job submission -> preparation -> execution has been completed.
+
diff --git a/src/docs/architecture/JobSubmission_zh.md b/src/docs/architecture/JobSubmission_zh.md
new file mode 100644
index 0000000..28ffaa4
--- /dev/null
+++ b/src/docs/architecture/JobSubmission_zh.md
@@ -0,0 +1,165 @@
+# JobSubmission
+
+计算任务(Job)的提交执行是Linkis提供的核心能力,它几乎串通了Linkis计算治理架构中的所有模块,在Linkis之中占据核心地位。
+
+我们将用户的计算任务从客户端提交开始,到最后的返回结果为止,整个流程分为三个阶段:提交 -> 准备 -> 执行,如下图所示:
+
+![计算任务整体流程图](../../assets/docs/architecture/JobSubmission/overall.png)
+
+其中:
+
+- Entrance作为提交阶段的入口,提供任务的接收、调度和Job信息的转发能力,是所有计算型任务的统一入口,它将把计算任务转发给Orchestrator进行编排和执行;
+
+- Orchestrator作为准备阶段的入口,主要提供了Job的解析、编排和执行能力。
+
+- Linkis Manager:是计算治理能力的管理中枢,主要的职责为:
+  
+  1. ResourceManager:不仅具备对Yarn和Linkis EngineConnManager的资源管理能力,还将提供基于标签的多级资源分配和回收能力,让ResourceManager具备跨集群、跨计算资源类型的全资源管理能力;
+  
+  2. AppManager:统筹管理所有的EngineConnManager和EngineConn,包括EngineConn的申请、复用、创建、切换、销毁等生命周期全交予AppManager进行管理;
+  
+  3. LabelManager:将基于多级组合标签,为跨IDC、跨集群的EngineConn和EngineConnManager路由和管控能力提供标签支持;
+  
+  4. EngineConnPluginServer:对外提供启动一个EngineConn的所需资源生成能力和EngineConn的启动命令生成能力。
+
+- EngineConnManager:是EngineConn的管理器,提供引擎的生命周期管理,同时向RM汇报负载信息和自身的健康状况。
+
+- EngineConn:是Linkis与底层计算存储引擎的实际连接器,用户所有的计算存储任务最终都会交由EngineConn提交给底层计算存储引擎。根据用户的不同使用场景,EngineConn提供了交互式计算、流式计算、离线计算、数据存储任务的全栈计算能力框架支持。
+
+接下来,我们将详细介绍计算任务从 提交 -> 准备 -> 执行 的三个阶段。
+
+## 一、提交阶段
+
+提交阶段主要是Client端 -> Linkis Gateway -> Entrance的交互,其流程如下:
+
+![提交阶段流程图](../../assets/docs/architecture/JobSubmission/submission.png)
+
+1. 首先,Client(如前端或客户端)发起Job请求,Job请求信息精简如下(关于Linkis的具体使用方式,请参考 [如何使用Linkis](/#/docs/manual/HowToUse)):
+
+```
+POST /api/rest_j/v1/entrance/submit
+```
+
+```json
+{
+    "executionContent": {"code": "show tables", "runType": "sql"},
+    "params": {"variable": {}, "configuration": {}},  //非必须
+    "source": {"scriptPath": "file:///1.hql"}, //非必须,仅用于记录代码来源
+    "labels": {
+        "engineType": "spark-2.4.3",  //指定引擎
+        "userCreator": "johnnwnag-IDE"  // 指定提交用户和提交系统
+    }
+}
+```
+
+2. Linkis-Gateway接收到请求后,根据URI ``/api/rest_j/v1/${serviceName}/.+``中的serviceName,确认路由转发的微服务名,这里Linkis-Gateway会解析出微服务名为entrance,将Job请求转发给Entrance微服务。需要说明的是:如果用户指定了路由标签,则在转发时,会根据路由标签选择打了相应标签的Entrance微服务实例进行转发,而不是随机转发。
+
+3. Entrance接收到Job请求后,会先简单校验请求的合法性,然后通过RPC调用JobHistory对Job的信息进行持久化,然后将Job请求封装为一个计算任务,放入到调度队列之中,等待被消费线程消费。
+
+4. 调度队列会为每个组开辟一个消费队列 和 一个消费线程,消费队列用于存放已经初步封装的用户计算任务,消费线程则按照FIFO的方式,不断从消费队列中取出计算任务进行消费。目前默认的分组方式为 Creator + User(即提交系统 + 用户),因此,即便是同一个用户,只要是不同的系统提交的计算任务,其实际的消费队列和消费线程都完全不同,完全隔离互不影响。(温馨提示:用户可以按需修改分组算法)
+
+5. 消费线程取出计算任务后,会将计算任务提交给Orchestrator,由此正式进入准备阶段。
+
+## 二、 准备阶段
+
+准备阶段主要有两个流程,一是向LinkisManager申请一个可用的EngineConn,用于接下来的计算任务提交执行,二是Orchestrator对Entrance提交过来的计算任务进行编排,将一个用户计算请求,通过编排转换成一个物理执行树,然后交给第三阶段的执行阶段去真正提交执行。
+
+#### 2.1 向LinkisManager申请可用EngineConn
+
+如果在LinkisManager中,该用户存在可复用的EngineConn,则直接锁定该EngineConn,并返回给Orchestrator,整个申请流程结束。
+
+如何定义可复用EngineConn?指能匹配计算任务的所有标签要求的,且EngineConn本身健康状态为Healthy(负载低且实际EngineConn状态为Idle)的,然后再按规则对所有满足条件的EngineConn进行排序选择,最终锁定一个最佳的EngineConn。
+
+如果该用户不存在可复用的EngineConn,则此时会触发EngineConn新增流程,关于EngineConn新增流程,请参考:[EngineConn新增流程](#/docs/architecture/AddEngineConn)。
+
+#### 2.2 计算任务编排
+
+Orchestrator主要负责将一个计算任务(JobReq),编排成一棵可以真正执行的物理执行树(PhysicalTree),并提供Physical树的执行能力。
+
+这里先重点介绍Orchestrator的计算任务编排能力,如下图:
+
+![编排流程图](../../assets/docs/architecture/JobSubmission/orchestrate.png)
+
+其主要流程如下:
+
+- Converter(转换):完成对用户提交的JobReq(任务请求)转换为Orchestrator的ASTJob,该步骤会对用户提交的计算任务进行参数检查和信息补充,如变量替换等;
+
+- Parser(解析):完成对ASTJob的解析,将ASTJob拆成由ASTJob和ASTStage组成的一棵AST树。
+
+- Validator(校验): 完成对ASTJob和ASTStage的检验和信息补充,如代码检查、必须的Label信息补充等。
+
+- Planner(计划):将一棵AST树转换为一棵Logical树。此时的Logical树已经由LogicalTask组成,包含了整个计算任务的所有执行逻辑。
+
+- Optimizer(优化阶段):将一棵Logical树转换为Physical树,并对Physical树进行优化。
+
+一棵Physical树,其中的很多节点都是计算策略逻辑,只有中间的ExecTask,才真正封装了将用户计算任务提交给EngineConn进行提交执行的执行逻辑。如下图所示:
+
+![Physical树](../../assets/docs/architecture/JobSubmission/physical_tree.png)
+
+不同的计算策略,其Physical树中的JobExecTask 和 StageExecTask所封装的执行逻辑各不相同。
+
+如多活计算策略下,用户提交的一个计算任务,其提交给不同集群的EngineConn进行执行的执行逻辑封装在了两个ExecTask中,而相关的多活策略逻辑则体现在了两个ExecTask的父节点StageExecTask(End)之中。
+
+这里举多活计算策略下的多读场景。
+
+多读时,实际只要求一个ExecTask返回结果,该Physical树就可以标记为执行成功并返回结果了,但Physical树只具备按依赖关系进行依次执行的能力,无法终止某个节点的执行,且一旦某个节点被取消执行或执行失败,则整个Physical树其实会被标记为执行失败,这时就需要StageExecTask(End)来做一些特殊的处理,来保证既可以取消另一个ExecTask,又能把执行成功的ExecTask所产生的结果集继续往上传,让Physical树继续往上执行。这就是StageExecTask所代表的计算策略执行逻辑。
+
+Linkis Orchestrator的编排流程与很多SQL解析引擎(如Spark、Hive的SQL解析器)存在相似的地方,但实际上,Linkis Orchestrator是面向计算治理领域针对用户不同的计算治理需求,而实现的解析编排能力,而SQL解析引擎是面向SQL语言的解析编排。这里做一下简单区分:
+
+1. Linkis Orchestrator主要想解决的,是不同计算任务对计算策略所引发出的编排需求。如:用户想具备多活的能力,则Orchestrator会为用户提交的一个计算任务,基于“多活”的计算策略需求,编排出一棵Physical树,从而做到往多个集群去提交执行这个计算任务,并且在构建整个Physical树的过程中,已经充分考虑了各种可能存在的异常场景,并都已经体现在了Physical树中。
+
+2. Linkis Orchestrator的编排能力与编程语言无关,理论上只要是Linkis已经对接的引擎,其支持的所有编程语言都支持编排;而SQL解析引擎只关心SQL的解析和执行,只负责将一条SQL解析成一棵可执行的Physical树,最终计算出结果。
+
+3. Linkis Orchestrator也具备对SQL的解析能力,但SQL解析只是Orchestrator Parser针对SQL这种编程语言的其中一种解析实现。Linkis Orchestrator的Parser也考虑引入Apache Calcite对SQL进行解析,支持将一条跨多个计算引擎(必须是Linkis已经对接的计算引擎)的用户SQL,拆分成多条子SQL,在执行阶段时分别提交给对应的计算引擎进行执行,最后选择一个合适的计算引擎进行汇总计算。
+
+关于Orchestrator的编排详细介绍,请参考:[Orchestrator架构设计](https://github.com/WeBankFinTech/Linkis-Doc/blob/master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_architecture_doc.md)
+
+经过了Linkis Orchestrator的解析编排后,用户的计算任务已经转换成了一棵可被执行的Physical树。Orchestrator会将该Physical树提交给Orchestrator的Execution模块,进入最后的执行阶段。
+
+## 三、执行阶段
+
+执行阶段主要分为如下两步,这两步是Linkis Orchestrator提供的最后两阶段的能力:
+
+![执行阶段流程图](../../assets/docs/architecture/JobSubmission/execution.png)
+
+其主要流程如下:
+
+- Execution(执行):解析Physical树的依赖关系,按照依赖从叶子节点开始依次执行。
+
+- Reheater(再热):一旦Physical树有节点执行完成,都会触发一次再热。再热允许依照Physical树的实时执行情况,动态调整Physical树,继续进行执行。如:检测到某个叶子节点执行失败,且该叶子节点支持重试(如失败原因是抛出了ReTryExecption),则自动调整Physical树,在该叶子节点上面添加一个内容完全相同的重试父节点。
+
+我们回到Execution阶段,这里重点介绍封装了将用户计算任务提交给EngineConn的ExecTask节点的执行逻辑。
+
+1. 前面有提到,准备阶段的第一步,就是向LinkisManager获取一个可用的EngineConn,ExecTask拿到这个EngineConn后,会通过RPC请求,将用户的计算任务提交给EngineConn。
+
+2. EngineConn接收到计算任务之后,会通过线程池异步提交给底层的计算存储引擎,然后马上返回一个执行ID。
+
+3. ExecTask拿到这个执行ID后,后续可以通过该执行ID异步去拉取计算任务的执行情况(如:状态、进度、日志、结果集等)。
+
+4. 同时,EngineConn会通过注册的多个Listener,实时监听底层计算存储引擎的执行情况。如果该计算存储引擎不支持注册Listener,则EngineConn会为计算任务启动守护线程,定时向计算存储引擎拉取执行情况。
+
+5. EngineConn将拉取到的执行情况,通过RPC请求,实时传回Orchestrator所在的微服务。
+
+6. 该微服务的Receiver接收到执行情况后,会通过ListenerBus进行广播,Orchestrator的Execution消费该事件并动态更新Physical树的执行情况。
+
+7. 计算任务所产生的结果集,会在EngineConn端就写入到HDFS等存储介质之中。EngineConn通过RPC传回的只是结果集路径,Execution消费事件,并将获取到的结果集路径通过ListenerBus进行广播,使Entrance向Orchestrator注册的Listener能消费到该结果集路径,并将结果集路径写入持久化到JobHistory之中。
+
+8. EngineConn端的计算任务执行完成后,通过同样的逻辑,会触发Execution更新Physical树该ExecTask节点的状态,使得Physical树继续往上执行,直到整棵树全部执行完成。这时Execution会通过ListenerBus广播计算任务执行完成的状态。
+
+9. Entrance向Orchestrator注册的Listener消费到该状态事件后,向JobHistory更新Job的状态,整个任务执行完成。
+
+----
+
+最后,我们再来看下Client端是如何得知计算任务状态变化,并及时获取到计算结果的,具体如下图所示:
+
+![结果获取流程](../../assets/docs/architecture/JobSubmission/result_acquisition.png)
+
+具体流程如下:
+
+1. Client端定时轮询请求Entrance,获取计算任务的状态。
+
+2. 一旦发现状态翻转为成功,则向JobHistory发送获取Job信息的请求,拿到所有的结果集路径。
+
+3. 通过结果集路径向PublicService发起查询文件内容的请求,获取到结果集的内容。
+
+自此,整个Job的提交 -> 准备 -> 执行 三个阶段全部完成。
diff --git a/src/docs/deploy/linkis_en.md b/src/docs/deploy/linkis_en.md
index 7cfe807..dd17e01 100644
--- a/src/docs/deploy/linkis_en.md
+++ b/src/docs/deploy/linkis_en.md
@@ -2,9 +2,9 @@
 
 ## Notes
 
-If you are new to Linkis, you can ignore this chapter, however, if you are already a Linkis user,  we recommend you reading the following article before installing or upgrading: [Brief introduction of the difference between Linkis1.0 and Linkis0.X](https://github.com/WeBankFinTech/Linkis-Doc/blob/master/en_US/Architecture_Documents/DifferenceBetween1.0%260.x.md).
+If you are new to Linkis, you can ignore this chapter; however, if you are already a Linkis user, we recommend you read the following article before installing or upgrading: [Brief introduction of the difference between Linkis1.0 and Linkis0.X](#/docs/architecture/DifferenceBetween1.0&0.x).
 
-Please note: Apart from the four EngineConnPlugins included in the Linkis1.0 installation package by default: Python/Shell/Hive/Spark. You can manually install other types of engines such as JDBC depending on your own needs. For details, please refer to EngineConnPlugin installation documents.
+Please note: apart from the four EngineConnPlugins included in the Linkis1.0 installation package by default (Python/Shell/Hive/Spark), you can manually install other types of engines, such as JDBC, depending on your own needs. For details, please refer to the [EngineConnPlugin installation documents](#/docs/deploy/engins).
 
 Engines that Linkis1.0 has adapted by default are listed below:
 
diff --git a/src/docs/deploy/linkis_zh.md b/src/docs/deploy/linkis_zh.md
index e1c1fb6..02e50bd 100644
--- a/src/docs/deploy/linkis_zh.md
+++ b/src/docs/deploy/linkis_zh.md
@@ -1,8 +1,8 @@
 ## 注意事项
 
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**如果您是首次接触并使用Linkis,您可以忽略该章节;如果您已经是 Linkis 的使用用户,安装或升级前建议先阅读:[Linkis1.0 与 Linkis0.X 的区别简述](https://github.com/WeBankFinTech/Linkis-Doc/blob/master/zh_CN/Architecture_Documents/Linkis1.0%E4%B8%8ELinkis0.X%E7%9A%84%E5%8C%BA%E5%88%AB%E7%AE%80%E8%BF%B0.md)**。
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**如果您是首次接触并使用Linkis,您可以忽略该章节;如果您已经是 Linkis 的使用用户,安装或升级前建议先阅读:[Linkis1.0 与 Linkis0.X 的区别简述](#/docs/architecture/DifferenceBetween1.0&0.x)**。
 
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;请注意:除了 Linkis1.0 安装包默认已经包含的:Python/Shell/Hive/Spark四个EngineConnPlugin以外,如果大家有需要,可以手动安装如 JDBC 引擎等类型的其他引擎,具体请参考 [EngineConnPlugin引擎插件安装文档](https://github.com/WeBankFinTech/Linkis-Doc/blob/master/zh_CN/Deployment_Documents/EngineConnPlugin%E5%BC%95%E6%93%8E%E6%8F%92%E4%BB%B6%E5%AE%89%E8%A3%85%E6%96%87%E6%A1%A3.md)。
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;请注意:除了 Linkis1.0 安装包默认已经包含的:Python/Shell/Hive/Spark四个EngineConnPlugin以外,如果大家有需要,可以手动安装如 JDBC 引擎等类型的其他引擎,具体请参考 [EngineConnPlugin引擎插件安装文档](#/docs/deploy/engins)。
 
 **Linkis Docker镜像**  
 [Linkis 0.10.0 Docker](https://hub.docker.com/repository/docker/wedatasphere/linkis)
diff --git a/src/docs/manual/CliManual_en.md b/src/docs/manual/CliManual_en.md
index 0aa70c7..e6523ce 100644
--- a/src/docs/manual/CliManual_en.md
+++ b/src/docs/manual/CliManual_en.md
@@ -40,7 +40,7 @@ Linkis-cli currently only supports synchronous submission, that is, after submit
 * cli parameters
 
     | Parameter | Description | Data Type | Is Required |
-    | ----------- | -------------------------- | -------- |- --- |
+    | ----------- | -------------------------- | -------- |---- |
     | --gwUrl | Manually specify the linkis gateway address | String | No |
     | --authStg | Specify authentication policy | String | No |
     | --authKey | Specify authentication key | String | No |
@@ -50,7 +50,9 @@ Linkis-cli currently only supports synchronous submission, that is, after submit
 * Parameters
 
     | Parameter | Description | Data Type | Is Required |
-    | ----------- | -------------------------- | -------- |- --- |
+    | ----------- | -------------------------- | -------- |---- |
     | -engType | Engine Type | String | Yes |
     | -runType | Execution Type | String | Yes |
     | -code | Execution code | String | No |
diff --git a/src/docs/manual/HowToUse_en.md b/src/docs/manual/HowToUse_en.md
index f450297..506c533 100644
--- a/src/docs/manual/HowToUse_en.md
+++ b/src/docs/manual/HowToUse_en.md
@@ -5,10 +5,9 @@
 ## 1. Client side usage
 
 &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;If you need to connect to other applications on the basis of Linkis, you need to develop the interface provided by Linkis. Linkis provides a variety of client access interfaces. For detailed usage introduction, please refer to the following:
--[**Restful API Usage**](./../API_Documentations/Linkis task submission and execution RestAPI document.md)
--[**JDBC API Usage**](./../API_Documentations/Task Submit and Execute JDBC_API Document.md)
--[**How ​​to use Java SDK**](./../User_Manual/Linkis1.0 user use document.md)
-
+- [**Restful API Usage**](./../API_Documentations/Linkis任务提交执行RestAPI文档.md)
+- [**JDBC API Usage**](./../API_Documentations/任务提交执行JDBC_API文档.md)
+- [**How to use Java SDK**](#/docs/manual/UserManual)
 ## 2. Scriptis uses Linkis
 
 &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;If you need to use Linkis to complete interactive online analysis and processing, and you do not need data analysis application tools such as workflow development, workflow scheduling, data services, etc., you can Install [**Scriptis**](https://github.com/WeBankFinTech/Scriptis) separately. For detailed installation tutorial, please refer to its corresponding installation and deployment documents.
@@ -16,12 +15,12 @@
 ## 2.1. Use Scriptis to execute scripts
 
 &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Currently Scriptis supports submitting a variety of task types to Linkis, including Spark SQL, Hive SQL, Scala, PythonSpark, etc. To meet data analysis needs, the left side of Scriptis provides views of user workspace information, user database and table information, user-defined functions, and HDFS directories; it also supports uploading and downloading, result set exporting, and other functions. Scriptis is very simple to use: you can easily write scripts in the editor and submit them to Linkis to run.
-![Scriptis uses Linkis](../../assets/docs/manual/sparksql-run.png)
+![Scriptis uses Linkis](../../assets/docs/manual/sparksql_run.png)
 
 ## 2.2. Scriptis Management Console
 
 &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Linkis provides an interface for resource configuration and management. If you want to configure and manage task resources, you can set them on the Scriptis management console interface, including queue settings, resource configuration, the number of engine instances, etc. Through the management console, you can easily configure the resources for submitting tasks to Linkis, making the process more convenient and faster.
-![Scriptis uses Linkis](../../assets/docs/manual/queue-set.png)
+![Scriptis uses Linkis](../../assets/docs/manual/queue_set.png)
 
 ## 3. DataSphere Studio uses Linkis
 
diff --git a/src/docs/manual/HowToUse_zh.md b/src/docs/manual/HowToUse_zh.md
index 9bbc435..f1b233b 100644
--- a/src/docs/manual/HowToUse_zh.md
+++ b/src/docs/manual/HowToUse_zh.md
@@ -5,15 +5,15 @@
 &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;如果需要在Linkis的基础上,接入其它应用,需要针对Linkis提供的接口进行开发,Linkis提供了多种客户端接入接口,更详细的使用介绍可以参考以下内容:  
 - [**Restful API使用方式**](./../API_Documentations/Linkis任务提交执行RestAPI文档.md)
 - [**JDBC API使用方式**](./../API_Documentations/任务提交执行JDBC_API文档.md)
-- [**Java SDK使用方式**](./../User_Manual/Linkis1.0用户使用文档.md)
+- [**Java SDK使用方式**](#/docs/manual/UserManual)
 ## 2. Scriptis使用Linkis
 &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;如果需要使用Linkis完成交互式在线分析处理的工作,并且不需要诸如工作流开发、工作流调度、数据服务等数据分析应用工具,可以单独安装[**Scriptis**](https://github.com/WeBankFinTech/Scriptis),详细安装教程可参考其对应的安装部署文档。  
 ## 2.1. 使用Scriptis执行脚本
 &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;目前Scriptis支持向Linkis提交多种任务类型,包括Spark SQL、Hive SQL、Scala、PythonSpark等,为了满足数据分析的需求,Scriptis左侧,提供查看用户工作空间信息、用户数据库和表信息、用户自定义函数,以及HDFS目录,同时支持上传下载,结果集导出等功能。Scriptis使用Linkis十分简单,可以很方便的在编辑栏书写脚本,提交到Linkis运行。  
-![Scriptis使用Linkis](../../assets/docs/manual/sparksql-run.png)
+![Scriptis使用Linkis](../../assets/docs/manual/sparksql_run.png)
 ## 2.2. Scriptis管理台
 &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Linkis提供资源配置和管理的接口,如果希望对任务资源进行配置管理,可以在Scriptis的管理台界面进行设置,包括队列设置、资源配置、引擎实例个数等。通过管理台,可以很方便的配置向Linkis提交任务的资源,使得更加方便快捷。  
-![Scriptis使用Linkis](../../assets/docs/manual/queue-set.png)
+![Scriptis使用Linkis](../../assets/docs/manual/queue_set.png)
 
 ## 3. DataSphere Studio使用Linkis
 &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[**DataSphere Studio**](https://github.com/WeBankFinTech/DataSphereStudio)简称DSS,是微众银行大数据平台开源的一站式数据分析处理平台,DSS交互式分析模块集成了Scriptis,使用DSS进行交互式分析和Scriptis一样,除了提供Scriptis的基本功能外,DSS提供和集成了更加丰富和强大的数据分析功能,包括用于数据提取的数据服务、开发报表的工作流、可视化分析软件Visualis等。由于原生的支持,目前DSS是与Linkis集成度最高的软件,如果希望使用完整的Linkis功能,建议使用DSS搭配Linkis一起使用。  
diff --git a/src/pages/docs/architecture/AddEngineConn.vue b/src/pages/docs/architecture/AddEngineConn.vue
new file mode 100644
index 0000000..e78fb7d
--- /dev/null
+++ b/src/pages/docs/architecture/AddEngineConn.vue
@@ -0,0 +1,13 @@
+<template>
+  <docEn v-if="lang === 'en'"></docEn>
+  <docZh v-else></docZh>
+</template>
+<script setup>
+  import { ref } from "vue";
+
+  import docEn from '../../../docs/architecture/AddEngineConn_en.md';
+  import docZh from '../../../docs/architecture/AddEngineConn_zh.md';
+
+  // Initialize the language
+  const lang = ref(localStorage.getItem('locale') || 'en');
+</script>
diff --git a/src/pages/docs/architecture/DifferenceBetween1.0&0.x.vue b/src/pages/docs/architecture/DifferenceBetween1.0&0.x.vue
new file mode 100644
index 0000000..8883613
--- /dev/null
+++ b/src/pages/docs/architecture/DifferenceBetween1.0&0.x.vue
@@ -0,0 +1,13 @@
+<template>
+  <docEn v-if="lang === 'en'"></docEn>
+  <docZh v-else></docZh>
+</template>
+<script setup>
+  import { ref } from "vue";
+
+  import docEn from '../../../docs/architecture/DifferenceBetween1.0&0.x_en.md';
+  import docZh from '../../../docs/architecture/DifferenceBetween1.0&0.x_zh.md';
+
+  // Initialize the language
+  const lang = ref(localStorage.getItem('locale') || 'en');
+</script>
diff --git a/src/pages/docs/architecture/JobSubmission.vue b/src/pages/docs/architecture/JobSubmission.vue
new file mode 100644
index 0000000..f12c3a1
--- /dev/null
+++ b/src/pages/docs/architecture/JobSubmission.vue
@@ -0,0 +1,13 @@
+<template>
+  <docEn v-if="lang === 'en'"></docEn>
+  <docZh v-else></docZh>
+</template>
+<script setup>
+  import { ref } from "vue";
+
+  import docEn from '../../../docs/architecture/JobSubmission_en.md';
+  import docZh from '../../../docs/architecture/JobSubmission_zh.md';
+
+  // Initialize the language
+  const lang = ref(localStorage.getItem('locale') || 'en');
+</script>
diff --git a/src/pages/docs/index.vue b/src/pages/docs/index.vue
index fa3d5f0..26b759a 100644
--- a/src/pages/docs/index.vue
+++ b/src/pages/docs/index.vue
@@ -85,6 +85,24 @@
                 }]
 
 
+        },
+        {
+            title: '架构文档',
+            link: '/docs/architecture/DifferenceBetween1.0&0.x',
+            children: [
+                {
+                    title: 'Linkis1.0与Linkis0.X的区别简述',
+                    link: '/docs/architecture/DifferenceBetween1.0&0.x',
+                },
+                {
+                    title: 'Job提交准备执行流程',
+                    link: '/docs/architecture/JobSubmission',
+                }, {
+                    title: 'EngineConn新增流程',
+                    link: '/docs/architecture/AddEngineConn',
+                }]
+
+
         }
     ]
 </script>
diff --git a/src/router.js b/src/router.js
index bf088d4..b6d97a0 100644
--- a/src/router.js
+++ b/src/router.js
@@ -34,7 +34,7 @@ const routes = [{
       component: () => import( /* webpackChunkName: "group-doc_UserManual" */ './pages/docs/manual/UserManual.vue')
     },{
       path: 'manual/HowToUse',
-      name: 'manual/HowToUse',
+      name: 'manualHowToUse',
       component: () => import( /* webpackChunkName: "group-doc_HowToUse" */ './pages/docs/manual/HowToUse.vue')
     },{
       path: 'manual/ConsoleUserManual',
@@ -44,7 +44,22 @@ const routes = [{
         path: 'manual/CliManual',
         name: 'manualCliManual',
         component: () => import( /* webpackChunkName: "group-doc_CliManual" */ './pages/docs/manual/CliManual.vue')
-      }]
+      },
+
+      {
+        path: 'architecture/JobSubmission',
+        name: 'architectureJobSubmission',
+        component: () => import( /* webpackChunkName: "group-doc_JobSubmission" */ './pages/docs/architecture/JobSubmission.vue')
+      },{
+        path: 'architecture/AddEngineConn',
+        name: 'architectureAddEngineConn',
+        component: () => import( /* webpackChunkName: "group-doc_AddEngineConn" */ './pages/docs/architecture/AddEngineConn.vue')
+      },{
+        path: 'architecture/DifferenceBetween1.0&0.x',
+        name: 'architectureDifferenceBetween1.0&0.x',
+        component: () => import( /* webpackChunkName: "group-doc_DifferenceBetween1.0&0.x" */ './pages/docs/architecture/DifferenceBetween1.0&0.x.vue')
+      }
+    ]
   },
   {
     path: '/faq/index',

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@linkis.apache.org
For additional commands, e-mail: commits-help@linkis.apache.org


[incubator-linkis-website] 42/50: add asf.yaml file and LICENSE file

Posted by pe...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

peacewong pushed a commit to branch asf-staging
in repository https://gitbox.apache.org/repos/asf/incubator-linkis-website.git

commit c73b1691e33b734ed4c3ded48bab97bce2544dd6
Author: casionone <ca...@gmail.com>
AuthorDate: Thu Oct 21 17:46:18 2021 +0800

    add asf.yaml file and LICENSE file
---
 .asf.yaml |  28 +++++++++
 LICENSE   | 201 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 229 insertions(+)

diff --git a/.asf.yaml b/.asf.yaml
new file mode 100644
index 0000000..9301abb
--- /dev/null
+++ b/.asf.yaml
@@ -0,0 +1,28 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+github:
+  description: Apache Linkis documents
+  homepage: https://linkis.staged.apache.org/
+  labels:
+    - linkis
+    - website
+
+# If this branch is asf-staging, it will be published to https://linkis.staged.apache.org/
+staging:
+  profile: ~
+  whoami:  asf-staging
\ No newline at end of file
diff --git a/LICENSE b/LICENSE
new file mode 100644
index 0000000..261eeb9
--- /dev/null
+++ b/LICENSE
@@ -0,0 +1,201 @@
+                                 Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+
+      "Legal Entity" shall mean the union of the acting entity and all
+      other entities that control, are controlled by, or are under common
+      control with that entity. For the purposes of this definition,
+      "control" means (i) the power, direct or indirect, to cause the
+      direction or management of such entity, whether by contract or
+      otherwise, or (ii) ownership of fifty percent (50%) or more of the
+      outstanding shares, or (iii) beneficial ownership of such entity.
+
+      "You" (or "Your") shall mean an individual or Legal Entity
+      exercising permissions granted by this License.
+
+      "Source" form shall mean the preferred form for making modifications,
+      including but not limited to software source code, documentation
+      source, and configuration files.
+
+      "Object" form shall mean any form resulting from mechanical
+      transformation or translation of a Source form, including but
+      not limited to compiled object code, generated documentation,
+      and conversions to other media types.
+
+      "Work" shall mean the work of authorship, whether in Source or
+      Object form, made available under the License, as indicated by a
+      copyright notice that is included in or attached to the work
+      (an example is provided in the Appendix below).
+
+      "Derivative Works" shall mean any work, whether in Source or Object
+      form, that is based on (or derived from) the Work and for which the
+      editorial revisions, annotations, elaborations, or other modifications
+      represent, as a whole, an original work of authorship. For the purposes
+      of this License, Derivative Works shall not include works that remain
+      separable from, or merely link (or bind by name) to the interfaces of,
+      the Work and Derivative Works thereof.
+
+      "Contribution" shall mean any work of authorship, including
+      the original version of the Work and any modifications or additions
+      to that Work or Derivative Works thereof, that is intentionally
+      submitted to Licensor for inclusion in the Work by the copyright owner
+      or by an individual or Legal Entity authorized to submit on behalf of
+      the copyright owner. For the purposes of this definition, "submitted"
+      means any form of electronic, verbal, or written communication sent
+      to the Licensor or its representatives, including but not limited to
+      communication on electronic mailing lists, source code control systems,
+      and issue tracking systems that are managed by, or on behalf of, the
+      Licensor for the purpose of discussing and improving the Work, but
+      excluding communication that is conspicuously marked or otherwise
+      designated in writing by the copyright owner as "Not a Contribution."
+
+      "Contributor" shall mean Licensor and any individual or Legal Entity
+      on behalf of whom a Contribution has been received by Licensor and
+      subsequently incorporated within the Work.
+
+   2. Grant of Copyright License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      copyright license to reproduce, prepare Derivative Works of,
+      publicly display, publicly perform, sublicense, and distribute the
+      Work and such Derivative Works in Source or Object form.
+
+   3. Grant of Patent License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      (except as stated in this section) patent license to make, have made,
+      use, offer to sell, sell, import, and otherwise transfer the Work,
+      where such license applies only to those patent claims licensable
+      by such Contributor that are necessarily infringed by their
+      Contribution(s) alone or by combination of their Contribution(s)
+      with the Work to which such Contribution(s) was submitted. If You
+      institute patent litigation against any entity (including a
+      cross-claim or counterclaim in a lawsuit) alleging that the Work
+      or a Contribution incorporated within the Work constitutes direct
+      or contributory patent infringement, then any patent licenses
+      granted to You under this License for that Work shall terminate
+      as of the date such litigation is filed.
+
+   4. Redistribution. You may reproduce and distribute copies of the
+      Work or Derivative Works thereof in any medium, with or without
+      modifications, and in Source or Object form, provided that You
+      meet the following conditions:
+
+      (a) You must give any other recipients of the Work or
+          Derivative Works a copy of this License; and
+
+      (b) You must cause any modified files to carry prominent notices
+          stating that You changed the files; and
+
+      (c) You must retain, in the Source form of any Derivative Works
+          that You distribute, all copyright, patent, trademark, and
+          attribution notices from the Source form of the Work,
+          excluding those notices that do not pertain to any part of
+          the Derivative Works; and
+
+      (d) If the Work includes a "NOTICE" text file as part of its
+          distribution, then any Derivative Works that You distribute must
+          include a readable copy of the attribution notices contained
+          within such NOTICE file, excluding those notices that do not
+          pertain to any part of the Derivative Works, in at least one
+          of the following places: within a NOTICE text file distributed
+          as part of the Derivative Works; within the Source form or
+          documentation, if provided along with the Derivative Works; or,
+          within a display generated by the Derivative Works, if and
+          wherever such third-party notices normally appear. The contents
+          of the NOTICE file are for informational purposes only and
+          do not modify the License. You may add Your own attribution
+          notices within Derivative Works that You distribute, alongside
+          or as an addendum to the NOTICE text from the Work, provided
+          that such additional attribution notices cannot be construed
+          as modifying the License.
+
+      You may add Your own copyright statement to Your modifications and
+      may provide additional or different license terms and conditions
+      for use, reproduction, or distribution of Your modifications, or
+      for any such Derivative Works as a whole, provided Your use,
+      reproduction, and distribution of the Work otherwise complies with
+      the conditions stated in this License.
+
+   5. Submission of Contributions. Unless You explicitly state otherwise,
+      any Contribution intentionally submitted for inclusion in the Work
+      by You to the Licensor shall be under the terms and conditions of
+      this License, without any additional terms or conditions.
+      Notwithstanding the above, nothing herein shall supersede or modify
+      the terms of any separate license agreement you may have executed
+      with Licensor regarding such Contributions.
+
+   6. Trademarks. This License does not grant permission to use the trade
+      names, trademarks, service marks, or product names of the Licensor,
+      except as required for reasonable and customary use in describing the
+      origin of the Work and reproducing the content of the NOTICE file.
+
+   7. Disclaimer of Warranty. Unless required by applicable law or
+      agreed to in writing, Licensor provides the Work (and each
+      Contributor provides its Contributions) on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+      implied, including, without limitation, any warranties or conditions
+      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+      PARTICULAR PURPOSE. You are solely responsible for determining the
+      appropriateness of using or redistributing the Work and assume any
+      risks associated with Your exercise of permissions under this License.
+
+   8. Limitation of Liability. In no event and under no legal theory,
+      whether in tort (including negligence), contract, or otherwise,
+      unless required by applicable law (such as deliberate and grossly
+      negligent acts) or agreed to in writing, shall any Contributor be
+      liable to You for damages, including any direct, indirect, special,
+      incidental, or consequential damages of any character arising as a
+      result of this License or out of the use or inability to use the
+      Work (including but not limited to damages for loss of goodwill,
+      work stoppage, computer failure or malfunction, or any and all
+      other commercial damages or losses), even if such Contributor
+      has been advised of the possibility of such damages.
+
+   9. Accepting Warranty or Additional Liability. While redistributing
+      the Work or Derivative Works thereof, You may choose to offer,
+      and charge a fee for, acceptance of support, warranty, indemnity,
+      or other liability obligations and/or rights consistent with this
+      License. However, in accepting such obligations, You may act only
+      on Your own behalf and on Your sole responsibility, not on behalf
+      of any other Contributor, and only if You agree to indemnify,
+      defend, and hold each Contributor harmless for any liability
+      incurred by, or claims asserted against, such Contributor by reason
+      of your accepting any such warranty or additional liability.
+
+   END OF TERMS AND CONDITIONS
+
+   APPENDIX: How to apply the Apache License to your work.
+
+      To apply the Apache License to your work, attach the following
+      boilerplate notice, with the fields enclosed by brackets "[]"
+      replaced with your own identifying information. (Don't include
+      the brackets!)  The text should be enclosed in the appropriate
+      comment syntax for the file format. We also recommend that a
+      file or class name and description of purpose be included on the
+      same "printed page" as the copyright notice for easier
+      identification within third-party archives.
+
+   Copyright [yyyy] [name of copyright owner]
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@linkis.apache.org
For additional commands, e-mail: commits-help@linkis.apache.org


[incubator-linkis-website] 32/50: user case img

Posted by pe...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

peacewong pushed a commit to branch asf-staging
in repository https://gitbox.apache.org/repos/asf/incubator-linkis-website.git

commit 19f5506ef93960d2ecacca25a10a2de98161ab20
Author: casionone <ca...@gmail.com>
AuthorDate: Mon Oct 18 14:14:43 2021 +0800

    user case img
---
 src/pages/home/img.js | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/src/pages/home/img.js b/src/pages/home/img.js
index 60197d3..469a340 100644
--- a/src/pages/home/img.js
+++ b/src/pages/home/img.js
@@ -1,14 +1,14 @@
 const  img=[
-    {"url":"邮政银行.jpg"},
-    {"url":"中国民生银行.jpg"},
-    {"url":"美团点评.jpg"},
+    // {"url":"邮政银行.jpg"},
+    // {"url":"中国民生银行.jpg"},
+    // {"url":"美团点评.jpg"},
     {"url":"中国电信.png"},
-    {"url":"交通银行.jpg"},
-    {"url":"招商银行.jpg"},
-    {"url":"招联消费金融有限公司.png"},
+    // {"url":"交通银行.jpg"},
+    // {"url":"招商银行.jpg"},
+    // {"url":"招联消费金融有限公司.png"},
     {"url":"平安.png"},
     // {"url":"平安医保科技.png"},
-    {"url":"360.png"},
+    // {"url":"360.png"},
     {"url":"海康威视.png"},
     {"url":"理想汽车.png"},
     {"url":"百信银行.jpg"},

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@linkis.apache.org
For additional commands, e-mail: commits-help@linkis.apache.org


[incubator-linkis-website] 19/50: add docs image

Posted by pe...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

peacewong pushed a commit to branch asf-staging
in repository https://gitbox.apache.org/repos/asf/incubator-linkis-website.git

commit 7ce1935783ff3b3a11391f330e50db6ba3f58371
Author: casionone <ca...@gmail.com>
AuthorDate: Tue Oct 12 13:15:53 2021 +0800

    add docs image
---
 .../add_an_EngineConn_flow_chart.png               | Bin 0 -> 59893 bytes
 .../docs/Architecture/EngineConn/engineconn-01.png | Bin 0 -> 157753 bytes
 .../Gateway/gateway_server_dispatcher.png          | Bin 0 -> 83743 bytes
 .../Architecture/Gateway/gateway_server_global.png | Bin 0 -> 85272 bytes
 .../docs/Architecture/Gateway/gatway_websocket.png | Bin 0 -> 37769 bytes
 .../execution.png                                  | Bin 0 -> 31078 bytes
 .../orchestrate.png                                | Bin 0 -> 31095 bytes
 .../overall.png                                    | Bin 0 -> 231192 bytes
 .../physical_tree.png                              | Bin 0 -> 79471 bytes
 .../result_acquisition.png                         | Bin 0 -> 41007 bytes
 .../submission.png                                 | Bin 0 -> 12946 bytes
 .../LabelManager/label_manager_builder.png         | Bin 0 -> 62978 bytes
 .../LabelManager/label_manager_global.png          | Bin 0 -> 14988 bytes
 .../LabelManager/label_manager_scorer.png          | Bin 0 -> 72977 bytes
 .../Linkis0.X-NewEngine-architecture.png           | Bin 0 -> 244826 bytes
 .../docs/Architecture/Linkis0.X-services-list.png  | Bin 0 -> 66821 bytes
 .../Linkis1.0-EngineConn-architecture.png          | Bin 0 -> 157753 bytes
 .../Linkis1.0-NewEngine-architecture.png           | Bin 0 -> 26523 bytes
 .../docs/Architecture/Linkis1.0-architecture.png   | Bin 0 -> 212362 bytes
 .../Linkis1.0-newEngine-initialization.png         | Bin 0 -> 48313 bytes
 .../docs/Architecture/Linkis1.0-services-list.png  | Bin 0 -> 85890 bytes
 .../Architecture/PublicEnhencementArchitecture.png | Bin 0 -> 47158 bytes
 .../ContextService/linkis-contextservice-01.png    | Bin 0 -> 22692 bytes
 .../ContextService/linkis-contextservice-02.png    | Bin 0 -> 10655 bytes
 .../linkis-contextservice-cache-01.png             | Bin 0 -> 11881 bytes
 .../linkis-contextservice-cache-02.png             | Bin 0 -> 23902 bytes
 .../linkis-contextservice-cache-03.png             | Bin 0 -> 109334 bytes
 .../linkis-contextservice-cache-04.png             | Bin 0 -> 36161 bytes
 .../linkis-contextservice-cache-05.png             | Bin 0 -> 2265 bytes
 .../linkis-contextservice-client-01.png            | Bin 0 -> 54438 bytes
 .../linkis-contextservice-client-02.png            | Bin 0 -> 93036 bytes
 .../linkis-contextservice-client-03.png            | Bin 0 -> 34839 bytes
 .../ContextService/linkis-contextservice-ha-01.png | Bin 0 -> 38439 bytes
 .../ContextService/linkis-contextservice-ha-02.png | Bin 0 -> 21982 bytes
 .../ContextService/linkis-contextservice-ha-03.png | Bin 0 -> 91788 bytes
 .../ContextService/linkis-contextservice-ha-04.png | Bin 0 -> 40733 bytes
 .../linkis-contextservice-listener-01.png          | Bin 0 -> 24414 bytes
 .../linkis-contextservice-listener-02.png          | Bin 0 -> 46152 bytes
 .../linkis-contextservice-listener-03.png          | Bin 0 -> 32597 bytes
 .../linkis-contextservice-persistence-01.png       | Bin 0 -> 198797 bytes
 .../linkis-contextservice-search-01.png            | Bin 0 -> 33731 bytes
 .../linkis-contextservice-search-02.png            | Bin 0 -> 26768 bytes
 .../linkis-contextservice-search-03.png            | Bin 0 -> 33312 bytes
 .../linkis-contextservice-search-04.png            | Bin 0 -> 25192 bytes
 .../linkis-contextservice-search-05.png            | Bin 0 -> 24757 bytes
 .../linkis-contextservice-search-06.png            | Bin 0 -> 29923 bytes
 .../linkis-contextservice-search-07.png            | Bin 0 -> 30013 bytes
 .../linkis-contextservice-service-01.png           | Bin 0 -> 56235 bytes
 .../linkis-contextservice-service-02.png           | Bin 0 -> 73463 bytes
 .../linkis-contextservice-service-03.png           | Bin 0 -> 23477 bytes
 .../linkis-contextservice-service-04.png           | Bin 0 -> 27387 bytes
 src/assets/docs/Architecture/bml-02.png            | Bin 0 -> 55227 bytes
 .../Architecture/linkis-engineConnPlugin-01.png    | Bin 0 -> 21864 bytes
 src/assets/docs/Architecture/linkis-intro-01.png   | Bin 0 -> 413878 bytes
 src/assets/docs/Architecture/linkis-intro-02.png   | Bin 0 -> 355186 bytes
 .../Architecture/linkis-microservice-gov-01.png    | Bin 0 -> 109909 bytes
 .../Architecture/linkis-microservice-gov-03.png    | Bin 0 -> 83457 bytes
 .../docs/Architecture/linkis-publicService-01.png  | Bin 0 -> 62443 bytes
 src/assets/docs/EngineUsage/hive-config.png        | Bin 0 -> 86864 bytes
 src/assets/docs/EngineUsage/hive-run.png           | Bin 0 -> 94294 bytes
 src/assets/docs/EngineUsage/jdbc-conf.png          | Bin 0 -> 91609 bytes
 src/assets/docs/EngineUsage/jdbc-run.png           | Bin 0 -> 56438 bytes
 src/assets/docs/EngineUsage/pyspakr-run.png        | Bin 0 -> 124979 bytes
 src/assets/docs/EngineUsage/python-config.png      | Bin 0 -> 92997 bytes
 src/assets/docs/EngineUsage/python-run.png         | Bin 0 -> 89641 bytes
 src/assets/docs/EngineUsage/queue-set.png          | Bin 0 -> 93935 bytes
 src/assets/docs/EngineUsage/scala-run.png          | Bin 0 -> 125060 bytes
 src/assets/docs/EngineUsage/shell-run.png          | Bin 0 -> 209553 bytes
 src/assets/docs/EngineUsage/spark-conf.png         | Bin 0 -> 99930 bytes
 src/assets/docs/EngineUsage/sparksql-run.png       | Bin 0 -> 121699 bytes
 src/assets/docs/EngineUsage/workflow.png           | Bin 0 -> 151481 bytes
 src/assets/docs/Linkis_1.0_architecture.png        | Bin 0 -> 316746 bytes
 src/assets/docs/Tuning_and_Troubleshooting/Q&A.png | Bin 0 -> 161638 bytes
 .../Tuning_and_Troubleshooting/code-fix-01.png     | Bin 0 -> 199523 bytes
 .../Tuning_and_Troubleshooting/db-config-01.png    | Bin 0 -> 391789 bytes
 .../Tuning_and_Troubleshooting/db-config-02.png    | Bin 0 -> 60334 bytes
 .../docs/Tuning_and_Troubleshooting/debug-01.png   | Bin 0 -> 6168 bytes
 .../docs/Tuning_and_Troubleshooting/debug-02.png   | Bin 0 -> 62496 bytes
 .../docs/Tuning_and_Troubleshooting/debug-03.png   | Bin 0 -> 32875 bytes
 .../docs/Tuning_and_Troubleshooting/debug-04.png   | Bin 0 -> 111758 bytes
 .../docs/Tuning_and_Troubleshooting/debug-05.png   | Bin 0 -> 52040 bytes
 .../docs/Tuning_and_Troubleshooting/debug-06.png   | Bin 0 -> 63668 bytes
 .../docs/Tuning_and_Troubleshooting/debug-07.png   | Bin 0 -> 316176 bytes
 .../docs/Tuning_and_Troubleshooting/debug-08.png   | Bin 0 -> 27722 bytes
 .../Tuning_and_Troubleshooting/hive-config-01.png  | Bin 0 -> 76327 bytes
 .../linkis-exception-01.png                        | Bin 0 -> 1199628 bytes
 .../linkis-exception-02.png                        | Bin 0 -> 1366293 bytes
 .../linkis-exception-03.png                        | Bin 0 -> 646836 bytes
 .../linkis-exception-04.png                        | Bin 0 -> 2965676 bytes
 .../linkis-exception-05.png                        | Bin 0 -> 454949 bytes
 .../linkis-exception-06.png                        | Bin 0 -> 869492 bytes
 .../linkis-exception-07.png                        | Bin 0 -> 2249882 bytes
 .../linkis-exception-08.png                        | Bin 0 -> 1191728 bytes
 .../linkis-exception-09.png                        | Bin 0 -> 1008341 bytes
 .../linkis-exception-10.png                        | Bin 0 -> 322110 bytes
 .../Tuning_and_Troubleshooting/page-show-01.png    | Bin 0 -> 115010 bytes
 .../Tuning_and_Troubleshooting/page-show-02.png    | Bin 0 -> 576911 bytes
 .../Tuning_and_Troubleshooting/page-show-03.png    | Bin 0 -> 654609 bytes
 .../searching_keywords.png                         | Bin 0 -> 102094 bytes
 .../Tuning_and_Troubleshooting/shell-error-01.png  | Bin 0 -> 74682 bytes
 .../Tuning_and_Troubleshooting/shell-error-02.png  | Bin 0 -> 330735 bytes
 .../Tuning_and_Troubleshooting/shell-error-03.png  | Bin 0 -> 1624375 bytes
 .../Tuning_and_Troubleshooting/shell-error-04.png  | Bin 0 -> 803920 bytes
 .../Tuning_and_Troubleshooting/shell-error-05.png  | Bin 0 -> 179543 bytes
 .../docs/Tunning_And_Troubleshooting/debug-01.png  | Bin 0 -> 6168 bytes
 .../docs/Tunning_And_Troubleshooting/debug-02.png  | Bin 0 -> 62496 bytes
 .../docs/Tunning_And_Troubleshooting/debug-03.png  | Bin 0 -> 32875 bytes
 .../docs/Tunning_And_Troubleshooting/debug-04.png  | Bin 0 -> 111758 bytes
 .../docs/Tunning_And_Troubleshooting/debug-05.png  | Bin 0 -> 52040 bytes
 .../docs/Tunning_And_Troubleshooting/debug-06.png  | Bin 0 -> 63668 bytes
 .../docs/Tunning_And_Troubleshooting/debug-07.png  | Bin 0 -> 316176 bytes
 .../docs/Tunning_And_Troubleshooting/debug-08.png  | Bin 0 -> 27722 bytes
 src/assets/docs/deploy/distributed_deployment.png  | Bin 0 -> 130148 bytes
 .../docs/deployment/Linkis1.0_combined_eureka.png  | Bin 0 -> 134418 bytes
 .../docs/manual/ECM_all_engine_information.png     | Bin 0 -> 89529 bytes
 src/assets/docs/manual/ECM_editing_interface.png   | Bin 0 -> 64470 bytes
 .../docs/manual/ECM_management_interface.png       | Bin 0 -> 43765 bytes
 src/assets/docs/manual/administrator_view.png      | Bin 0 -> 80087 bytes
 ...he_instance_name_to_view_engine_information.png | Bin 0 -> 41814 bytes
 src/assets/docs/manual/edit_directory.png          | Bin 0 -> 89919 bytes
 .../docs/manual/eureka_registration_center.png     | Bin 0 -> 327966 bytes
 .../docs/manual/global_history_interface.png       | Bin 0 -> 82340 bytes
 .../docs/manual/global_history_query_button.png    | Bin 0 -> 81788 bytes
 .../docs/manual/global_variable_interface.png      | Bin 0 -> 40073 bytes
 .../manual/microservice_management_interface.png   | Bin 0 -> 39198 bytes
 src/assets/docs/manual/new_application_type.png    | Bin 0 -> 108864 bytes
 .../manual/parameter_configuration_interface.png   | Bin 0 -> 79698 bytes
 src/assets/docs/manual/queue-set.png               | Bin 0 -> 93935 bytes
 .../docs/manual/resource_management_interface.png  | Bin 0 -> 49277 bytes
 src/assets/docs/manual/sparksql-run.png            | Bin 0 -> 121699 bytes
 .../manual/task_execution_log_of_a_single_task.png | Bin 0 -> 114314 bytes
 src/assets/docs/manual/workflow.png                | Bin 0 -> 151481 bytes
 src/assets/docs/wedatasphere_contact_01.png        | Bin 0 -> 217762 bytes
 src/assets/docs/wedatasphere_stack_Linkis.png      | Bin 0 -> 203466 bytes
 src/assets/fqa/Q&A.png                             | Bin 0 -> 161638 bytes
 src/assets/fqa/code-fix-01.png                     | Bin 0 -> 199523 bytes
 src/assets/fqa/db-config-01.png                    | Bin 0 -> 391789 bytes
 src/assets/fqa/db-config-02.png                    | Bin 0 -> 60334 bytes
 src/assets/fqa/debug-01.png                        | Bin 0 -> 6168 bytes
 src/assets/fqa/debug-02.png                        | Bin 0 -> 62496 bytes
 src/assets/fqa/debug-03.png                        | Bin 0 -> 32875 bytes
 src/assets/fqa/debug-04.png                        | Bin 0 -> 111758 bytes
 src/assets/fqa/debug-05.png                        | Bin 0 -> 52040 bytes
 src/assets/fqa/debug-06.png                        | Bin 0 -> 63668 bytes
 src/assets/fqa/debug-07.png                        | Bin 0 -> 316176 bytes
 src/assets/fqa/debug-08.png                        | Bin 0 -> 27722 bytes
 src/assets/fqa/hive-config-01.png                  | Bin 0 -> 76327 bytes
 src/assets/fqa/linkis-exception-01.png             | Bin 0 -> 1199628 bytes
 src/assets/fqa/linkis-exception-02.png             | Bin 0 -> 1366293 bytes
 src/assets/fqa/linkis-exception-03.png             | Bin 0 -> 646836 bytes
 src/assets/fqa/linkis-exception-04.png             | Bin 0 -> 2965676 bytes
 src/assets/fqa/linkis-exception-05.png             | Bin 0 -> 454949 bytes
 src/assets/fqa/linkis-exception-06.png             | Bin 0 -> 869492 bytes
 src/assets/fqa/linkis-exception-07.png             | Bin 0 -> 2249882 bytes
 src/assets/fqa/linkis-exception-08.png             | Bin 0 -> 1191728 bytes
 src/assets/fqa/linkis-exception-09.png             | Bin 0 -> 1008341 bytes
 src/assets/fqa/linkis-exception-10.png             | Bin 0 -> 322110 bytes
 src/assets/fqa/page-show-01.png                    | Bin 0 -> 115010 bytes
 src/assets/fqa/page-show-02.png                    | Bin 0 -> 576911 bytes
 src/assets/fqa/page-show-03.png                    | Bin 0 -> 654609 bytes
 src/assets/fqa/searching_keywords.png              | Bin 0 -> 102094 bytes
 src/assets/fqa/shell-error-01.png                  | Bin 0 -> 74682 bytes
 src/assets/fqa/shell-error-02.png                  | Bin 0 -> 330735 bytes
 src/assets/fqa/shell-error-03.png                  | Bin 0 -> 1624375 bytes
 src/assets/fqa/shell-error-04.png                  | Bin 0 -> 803920 bytes
 src/assets/fqa/shell-error-05.png                  | Bin 0 -> 179543 bytes
 src/docs/deploy/distributed_zh.md                  |   2 +-
 src/docs/deploy/linkis_en.md                       |   2 +-
 src/docs/manual/ConsoleUserManual_en.md            |  30 +++++++-------
 src/docs/manual/ConsoleUserManual_zh.md            |  30 +++++++-------
 src/docs/manual/HowToUse_en.md                     |   6 +--
 src/docs/manual/HowToUse_zh.md                     |   6 +--
 src/pages/faq/faq_en.md                            |  44 ++++++++++-----------
 src/pages/faq/faq_zh.md                            |  44 ++++++++++-----------
 174 files changed, 82 insertions(+), 82 deletions(-)

diff --git a/src/assets/docs/Architecture/Add_an_EngineConn/add_an_EngineConn_flow_chart.png b/src/assets/docs/Architecture/Add_an_EngineConn/add_an_EngineConn_flow_chart.png
new file mode 100644
index 0000000..2e71b42
Binary files /dev/null and b/src/assets/docs/Architecture/Add_an_EngineConn/add_an_EngineConn_flow_chart.png differ
diff --git a/src/assets/docs/Architecture/EngineConn/engineconn-01.png b/src/assets/docs/Architecture/EngineConn/engineconn-01.png
new file mode 100644
index 0000000..d95da89
Binary files /dev/null and b/src/assets/docs/Architecture/EngineConn/engineconn-01.png differ
diff --git a/src/assets/docs/Architecture/Gateway/gateway_server_dispatcher.png b/src/assets/docs/Architecture/Gateway/gateway_server_dispatcher.png
new file mode 100644
index 0000000..9cdc918
Binary files /dev/null and b/src/assets/docs/Architecture/Gateway/gateway_server_dispatcher.png differ
diff --git a/src/assets/docs/Architecture/Gateway/gateway_server_global.png b/src/assets/docs/Architecture/Gateway/gateway_server_global.png
new file mode 100644
index 0000000..584574e
Binary files /dev/null and b/src/assets/docs/Architecture/Gateway/gateway_server_global.png differ
diff --git a/src/assets/docs/Architecture/Gateway/gatway_websocket.png b/src/assets/docs/Architecture/Gateway/gatway_websocket.png
new file mode 100644
index 0000000..fcac318
Binary files /dev/null and b/src/assets/docs/Architecture/Gateway/gatway_websocket.png differ
diff --git a/src/assets/docs/Architecture/Job_submission_preparation_and_execution_process/execution.png b/src/assets/docs/Architecture/Job_submission_preparation_and_execution_process/execution.png
new file mode 100644
index 0000000..1abc43b
Binary files /dev/null and b/src/assets/docs/Architecture/Job_submission_preparation_and_execution_process/execution.png differ
diff --git a/src/assets/docs/Architecture/Job_submission_preparation_and_execution_process/orchestrate.png b/src/assets/docs/Architecture/Job_submission_preparation_and_execution_process/orchestrate.png
new file mode 100644
index 0000000..9de0a5d
Binary files /dev/null and b/src/assets/docs/Architecture/Job_submission_preparation_and_execution_process/orchestrate.png differ
diff --git a/src/assets/docs/Architecture/Job_submission_preparation_and_execution_process/overall.png b/src/assets/docs/Architecture/Job_submission_preparation_and_execution_process/overall.png
new file mode 100644
index 0000000..68b5e19
Binary files /dev/null and b/src/assets/docs/Architecture/Job_submission_preparation_and_execution_process/overall.png differ
diff --git a/src/assets/docs/Architecture/Job_submission_preparation_and_execution_process/physical_tree.png b/src/assets/docs/Architecture/Job_submission_preparation_and_execution_process/physical_tree.png
new file mode 100644
index 0000000..7998704
Binary files /dev/null and b/src/assets/docs/Architecture/Job_submission_preparation_and_execution_process/physical_tree.png differ
diff --git a/src/assets/docs/Architecture/Job_submission_preparation_and_execution_process/result_acquisition.png b/src/assets/docs/Architecture/Job_submission_preparation_and_execution_process/result_acquisition.png
new file mode 100644
index 0000000..c2dd9f3
Binary files /dev/null and b/src/assets/docs/Architecture/Job_submission_preparation_and_execution_process/result_acquisition.png differ
diff --git a/src/assets/docs/Architecture/Job_submission_preparation_and_execution_process/submission.png b/src/assets/docs/Architecture/Job_submission_preparation_and_execution_process/submission.png
new file mode 100644
index 0000000..f6bd9a9
Binary files /dev/null and b/src/assets/docs/Architecture/Job_submission_preparation_and_execution_process/submission.png differ
diff --git a/src/assets/docs/Architecture/LabelManager/label_manager_builder.png b/src/assets/docs/Architecture/LabelManager/label_manager_builder.png
new file mode 100644
index 0000000..4896981
Binary files /dev/null and b/src/assets/docs/Architecture/LabelManager/label_manager_builder.png differ
diff --git a/src/assets/docs/Architecture/LabelManager/label_manager_global.png b/src/assets/docs/Architecture/LabelManager/label_manager_global.png
new file mode 100644
index 0000000..ca4151a
Binary files /dev/null and b/src/assets/docs/Architecture/LabelManager/label_manager_global.png differ
diff --git a/src/assets/docs/Architecture/LabelManager/label_manager_scorer.png b/src/assets/docs/Architecture/LabelManager/label_manager_scorer.png
new file mode 100644
index 0000000..7213b0b
Binary files /dev/null and b/src/assets/docs/Architecture/LabelManager/label_manager_scorer.png differ
diff --git a/src/assets/docs/Architecture/Linkis0.X-NewEngine-architecture.png b/src/assets/docs/Architecture/Linkis0.X-NewEngine-architecture.png
new file mode 100644
index 0000000..57c83b3
Binary files /dev/null and b/src/assets/docs/Architecture/Linkis0.X-NewEngine-architecture.png differ
diff --git a/src/assets/docs/Architecture/Linkis0.X-services-list.png b/src/assets/docs/Architecture/Linkis0.X-services-list.png
new file mode 100644
index 0000000..c669abf
Binary files /dev/null and b/src/assets/docs/Architecture/Linkis0.X-services-list.png differ
diff --git a/src/assets/docs/Architecture/Linkis1.0-EngineConn-architecture.png b/src/assets/docs/Architecture/Linkis1.0-EngineConn-architecture.png
new file mode 100644
index 0000000..d95da89
Binary files /dev/null and b/src/assets/docs/Architecture/Linkis1.0-EngineConn-architecture.png differ
diff --git a/src/assets/docs/Architecture/Linkis1.0-NewEngine-architecture.png b/src/assets/docs/Architecture/Linkis1.0-NewEngine-architecture.png
new file mode 100644
index 0000000..b1d60bf
Binary files /dev/null and b/src/assets/docs/Architecture/Linkis1.0-NewEngine-architecture.png differ
diff --git a/src/assets/docs/Architecture/Linkis1.0-architecture.png b/src/assets/docs/Architecture/Linkis1.0-architecture.png
new file mode 100644
index 0000000..825672b
Binary files /dev/null and b/src/assets/docs/Architecture/Linkis1.0-architecture.png differ
diff --git a/src/assets/docs/Architecture/Linkis1.0-newEngine-initialization.png b/src/assets/docs/Architecture/Linkis1.0-newEngine-initialization.png
new file mode 100644
index 0000000..003b38e
Binary files /dev/null and b/src/assets/docs/Architecture/Linkis1.0-newEngine-initialization.png differ
diff --git a/src/assets/docs/Architecture/Linkis1.0-services-list.png b/src/assets/docs/Architecture/Linkis1.0-services-list.png
new file mode 100644
index 0000000..f768545
Binary files /dev/null and b/src/assets/docs/Architecture/Linkis1.0-services-list.png differ
diff --git a/src/assets/docs/Architecture/PublicEnhencementArchitecture.png b/src/assets/docs/Architecture/PublicEnhencementArchitecture.png
new file mode 100644
index 0000000..bcf72a5
Binary files /dev/null and b/src/assets/docs/Architecture/PublicEnhencementArchitecture.png differ
diff --git a/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-01.png b/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-01.png
new file mode 100644
index 0000000..f61c49a
Binary files /dev/null and b/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-01.png differ
diff --git a/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-02.png b/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-02.png
new file mode 100644
index 0000000..a2e1022
Binary files /dev/null and b/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-02.png differ
diff --git a/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-01.png b/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-01.png
new file mode 100644
index 0000000..5f4272f
Binary files /dev/null and b/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-01.png differ
diff --git a/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-02.png b/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-02.png
new file mode 100644
index 0000000..9bb177a
Binary files /dev/null and b/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-02.png differ
diff --git a/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-03.png b/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-03.png
new file mode 100644
index 0000000..00d1f4a
Binary files /dev/null and b/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-03.png differ
diff --git a/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-04.png b/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-04.png
new file mode 100644
index 0000000..439c8e2
Binary files /dev/null and b/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-04.png differ
diff --git a/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-05.png b/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-05.png
new file mode 100644
index 0000000..081d514
Binary files /dev/null and b/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-05.png differ
diff --git a/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-01.png b/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-01.png
new file mode 100644
index 0000000..e343579
Binary files /dev/null and b/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-01.png differ
diff --git a/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-02.png b/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-02.png
new file mode 100644
index 0000000..012eb65
Binary files /dev/null and b/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-02.png differ
diff --git a/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-03.png b/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-03.png
new file mode 100644
index 0000000..c3a43b9
Binary files /dev/null and b/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-03.png differ
diff --git a/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-01.png b/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-01.png
new file mode 100644
index 0000000..719599a
Binary files /dev/null and b/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-01.png differ
diff --git a/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-02.png b/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-02.png
new file mode 100644
index 0000000..2277a70
Binary files /dev/null and b/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-02.png differ
diff --git a/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-03.png b/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-03.png
new file mode 100644
index 0000000..df58d96
Binary files /dev/null and b/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-03.png differ
diff --git a/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-04.png b/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-04.png
new file mode 100644
index 0000000..1e13445
Binary files /dev/null and b/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-04.png differ
diff --git a/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-01.png b/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-01.png
new file mode 100644
index 0000000..7e410fb
Binary files /dev/null and b/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-01.png differ
diff --git a/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-02.png b/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-02.png
new file mode 100644
index 0000000..097b7f1
Binary files /dev/null and b/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-02.png differ
diff --git a/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-03.png b/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-03.png
new file mode 100644
index 0000000..7a4d462
Binary files /dev/null and b/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-03.png differ
diff --git a/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-persistence-01.png b/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-persistence-01.png
new file mode 100644
index 0000000..fdd6623
Binary files /dev/null and b/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-persistence-01.png differ
diff --git a/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-01.png b/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-01.png
new file mode 100644
index 0000000..b366462
Binary files /dev/null and b/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-01.png differ
diff --git a/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-02.png b/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-02.png
new file mode 100644
index 0000000..2a1e403
Binary files /dev/null and b/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-02.png differ
diff --git a/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-03.png b/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-03.png
new file mode 100644
index 0000000..32336eb
Binary files /dev/null and b/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-03.png differ
diff --git a/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-04.png b/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-04.png
new file mode 100644
index 0000000..fdb60fc
Binary files /dev/null and b/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-04.png differ
diff --git a/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-05.png b/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-05.png
new file mode 100644
index 0000000..45dcc43
Binary files /dev/null and b/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-05.png differ
diff --git a/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-06.png b/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-06.png
new file mode 100644
index 0000000..2175704
Binary files /dev/null and b/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-06.png differ
diff --git a/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-07.png b/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-07.png
new file mode 100644
index 0000000..9d357af
Binary files /dev/null and b/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-07.png differ
diff --git a/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-01.png b/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-01.png
new file mode 100644
index 0000000..b08efd3
Binary files /dev/null and b/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-01.png differ
diff --git a/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-02.png b/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-02.png
new file mode 100644
index 0000000..13ca37e
Binary files /dev/null and b/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-02.png differ
diff --git a/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-03.png b/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-03.png
new file mode 100644
index 0000000..36a4d96
Binary files /dev/null and b/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-03.png differ
diff --git a/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-04.png b/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-04.png
new file mode 100644
index 0000000..0a5ae1d
Binary files /dev/null and b/src/assets/docs/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-04.png differ
diff --git a/src/assets/docs/Architecture/bml-02.png b/src/assets/docs/Architecture/bml-02.png
new file mode 100644
index 0000000..fed79f7
Binary files /dev/null and b/src/assets/docs/Architecture/bml-02.png differ
diff --git a/src/assets/docs/Architecture/linkis-engineConnPlugin-01.png b/src/assets/docs/Architecture/linkis-engineConnPlugin-01.png
new file mode 100644
index 0000000..2d2d134
Binary files /dev/null and b/src/assets/docs/Architecture/linkis-engineConnPlugin-01.png differ
diff --git a/src/assets/docs/Architecture/linkis-intro-01.png b/src/assets/docs/Architecture/linkis-intro-01.png
new file mode 100644
index 0000000..60b575d
Binary files /dev/null and b/src/assets/docs/Architecture/linkis-intro-01.png differ
diff --git a/src/assets/docs/Architecture/linkis-intro-02.png b/src/assets/docs/Architecture/linkis-intro-02.png
new file mode 100644
index 0000000..a31e681
Binary files /dev/null and b/src/assets/docs/Architecture/linkis-intro-02.png differ
diff --git a/src/assets/docs/Architecture/linkis-microservice-gov-01.png b/src/assets/docs/Architecture/linkis-microservice-gov-01.png
new file mode 100644
index 0000000..ac46424
Binary files /dev/null and b/src/assets/docs/Architecture/linkis-microservice-gov-01.png differ
diff --git a/src/assets/docs/Architecture/linkis-microservice-gov-03.png b/src/assets/docs/Architecture/linkis-microservice-gov-03.png
new file mode 100644
index 0000000..b53c8e1
Binary files /dev/null and b/src/assets/docs/Architecture/linkis-microservice-gov-03.png differ
diff --git a/src/assets/docs/Architecture/linkis-publicService-01.png b/src/assets/docs/Architecture/linkis-publicService-01.png
new file mode 100644
index 0000000..d503573
Binary files /dev/null and b/src/assets/docs/Architecture/linkis-publicService-01.png differ
diff --git a/src/assets/docs/EngineUsage/hive-config.png b/src/assets/docs/EngineUsage/hive-config.png
new file mode 100644
index 0000000..9b3df01
Binary files /dev/null and b/src/assets/docs/EngineUsage/hive-config.png differ
diff --git a/src/assets/docs/EngineUsage/hive-run.png b/src/assets/docs/EngineUsage/hive-run.png
new file mode 100644
index 0000000..287b1ab
Binary files /dev/null and b/src/assets/docs/EngineUsage/hive-run.png differ
diff --git a/src/assets/docs/EngineUsage/jdbc-conf.png b/src/assets/docs/EngineUsage/jdbc-conf.png
new file mode 100644
index 0000000..39397d3
Binary files /dev/null and b/src/assets/docs/EngineUsage/jdbc-conf.png differ
diff --git a/src/assets/docs/EngineUsage/jdbc-run.png b/src/assets/docs/EngineUsage/jdbc-run.png
new file mode 100644
index 0000000..fe51598
Binary files /dev/null and b/src/assets/docs/EngineUsage/jdbc-run.png differ
diff --git a/src/assets/docs/EngineUsage/pyspakr-run.png b/src/assets/docs/EngineUsage/pyspakr-run.png
new file mode 100644
index 0000000..c80c85b
Binary files /dev/null and b/src/assets/docs/EngineUsage/pyspakr-run.png differ
diff --git a/src/assets/docs/EngineUsage/python-config.png b/src/assets/docs/EngineUsage/python-config.png
new file mode 100644
index 0000000..2bf1791
Binary files /dev/null and b/src/assets/docs/EngineUsage/python-config.png differ
diff --git a/src/assets/docs/EngineUsage/python-run.png b/src/assets/docs/EngineUsage/python-run.png
new file mode 100644
index 0000000..65467af
Binary files /dev/null and b/src/assets/docs/EngineUsage/python-run.png differ
diff --git a/src/assets/docs/EngineUsage/queue-set.png b/src/assets/docs/EngineUsage/queue-set.png
new file mode 100644
index 0000000..735a670
Binary files /dev/null and b/src/assets/docs/EngineUsage/queue-set.png differ
diff --git a/src/assets/docs/EngineUsage/scala-run.png b/src/assets/docs/EngineUsage/scala-run.png
new file mode 100644
index 0000000..7c01aad
Binary files /dev/null and b/src/assets/docs/EngineUsage/scala-run.png differ
diff --git a/src/assets/docs/EngineUsage/shell-run.png b/src/assets/docs/EngineUsage/shell-run.png
new file mode 100644
index 0000000..734bdb2
Binary files /dev/null and b/src/assets/docs/EngineUsage/shell-run.png differ
diff --git a/src/assets/docs/EngineUsage/spark-conf.png b/src/assets/docs/EngineUsage/spark-conf.png
new file mode 100644
index 0000000..353dbd6
Binary files /dev/null and b/src/assets/docs/EngineUsage/spark-conf.png differ
diff --git a/src/assets/docs/EngineUsage/sparksql-run.png b/src/assets/docs/EngineUsage/sparksql-run.png
new file mode 100644
index 0000000..f0b1d1b
Binary files /dev/null and b/src/assets/docs/EngineUsage/sparksql-run.png differ
diff --git a/src/assets/docs/EngineUsage/workflow.png b/src/assets/docs/EngineUsage/workflow.png
new file mode 100644
index 0000000..3a5919f
Binary files /dev/null and b/src/assets/docs/EngineUsage/workflow.png differ
diff --git a/src/assets/docs/Linkis_1.0_architecture.png b/src/assets/docs/Linkis_1.0_architecture.png
new file mode 100644
index 0000000..9b6cc90
Binary files /dev/null and b/src/assets/docs/Linkis_1.0_architecture.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/Q&A.png b/src/assets/docs/Tuning_and_Troubleshooting/Q&A.png
new file mode 100644
index 0000000..121d7f3
Binary files /dev/null and b/src/assets/docs/Tuning_and_Troubleshooting/Q&A.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/code-fix-01.png b/src/assets/docs/Tuning_and_Troubleshooting/code-fix-01.png
new file mode 100644
index 0000000..27bdddb
Binary files /dev/null and b/src/assets/docs/Tuning_and_Troubleshooting/code-fix-01.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/db-config-01.png b/src/assets/docs/Tuning_and_Troubleshooting/db-config-01.png
new file mode 100644
index 0000000..fa1f1c8
Binary files /dev/null and b/src/assets/docs/Tuning_and_Troubleshooting/db-config-01.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/db-config-02.png b/src/assets/docs/Tuning_and_Troubleshooting/db-config-02.png
new file mode 100644
index 0000000..c2f8443
Binary files /dev/null and b/src/assets/docs/Tuning_and_Troubleshooting/db-config-02.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/debug-01.png b/src/assets/docs/Tuning_and_Troubleshooting/debug-01.png
new file mode 100644
index 0000000..9834b3d
Binary files /dev/null and b/src/assets/docs/Tuning_and_Troubleshooting/debug-01.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/debug-02.png b/src/assets/docs/Tuning_and_Troubleshooting/debug-02.png
new file mode 100644
index 0000000..c7621b5
Binary files /dev/null and b/src/assets/docs/Tuning_and_Troubleshooting/debug-02.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/debug-03.png b/src/assets/docs/Tuning_and_Troubleshooting/debug-03.png
new file mode 100644
index 0000000..16788c3
Binary files /dev/null and b/src/assets/docs/Tuning_and_Troubleshooting/debug-03.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/debug-04.png b/src/assets/docs/Tuning_and_Troubleshooting/debug-04.png
new file mode 100644
index 0000000..cb944ee
Binary files /dev/null and b/src/assets/docs/Tuning_and_Troubleshooting/debug-04.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/debug-05.png b/src/assets/docs/Tuning_and_Troubleshooting/debug-05.png
new file mode 100644
index 0000000..2c5972c
Binary files /dev/null and b/src/assets/docs/Tuning_and_Troubleshooting/debug-05.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/debug-06.png b/src/assets/docs/Tuning_and_Troubleshooting/debug-06.png
new file mode 100644
index 0000000..a64cec6
Binary files /dev/null and b/src/assets/docs/Tuning_and_Troubleshooting/debug-06.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/debug-07.png b/src/assets/docs/Tuning_and_Troubleshooting/debug-07.png
new file mode 100644
index 0000000..935d5bc
Binary files /dev/null and b/src/assets/docs/Tuning_and_Troubleshooting/debug-07.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/debug-08.png b/src/assets/docs/Tuning_and_Troubleshooting/debug-08.png
new file mode 100644
index 0000000..d2a3328
Binary files /dev/null and b/src/assets/docs/Tuning_and_Troubleshooting/debug-08.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/hive-config-01.png b/src/assets/docs/Tuning_and_Troubleshooting/hive-config-01.png
new file mode 100644
index 0000000..6bd0edb
Binary files /dev/null and b/src/assets/docs/Tuning_and_Troubleshooting/hive-config-01.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-01.png b/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-01.png
new file mode 100644
index 0000000..01090d1
Binary files /dev/null and b/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-01.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-02.png b/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-02.png
new file mode 100644
index 0000000..0f68f12
Binary files /dev/null and b/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-02.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-03.png b/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-03.png
new file mode 100644
index 0000000..8fb4464
Binary files /dev/null and b/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-03.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-04.png b/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-04.png
new file mode 100644
index 0000000..5635a20
Binary files /dev/null and b/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-04.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-05.png b/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-05.png
new file mode 100644
index 0000000..c341a9d
Binary files /dev/null and b/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-05.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-06.png b/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-06.png
new file mode 100644
index 0000000..b0624ef
Binary files /dev/null and b/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-06.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-07.png b/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-07.png
new file mode 100644
index 0000000..402f0c9
Binary files /dev/null and b/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-07.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-08.png b/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-08.png
new file mode 100644
index 0000000..27c1824
Binary files /dev/null and b/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-08.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-09.png b/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-09.png
new file mode 100644
index 0000000..5b27b4b
Binary files /dev/null and b/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-09.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-10.png b/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-10.png
new file mode 100644
index 0000000..7c361e7
Binary files /dev/null and b/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-10.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/page-show-01.png b/src/assets/docs/Tuning_and_Troubleshooting/page-show-01.png
new file mode 100644
index 0000000..d953cb6
Binary files /dev/null and b/src/assets/docs/Tuning_and_Troubleshooting/page-show-01.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/page-show-02.png b/src/assets/docs/Tuning_and_Troubleshooting/page-show-02.png
new file mode 100644
index 0000000..af273bb
Binary files /dev/null and b/src/assets/docs/Tuning_and_Troubleshooting/page-show-02.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/page-show-03.png b/src/assets/docs/Tuning_and_Troubleshooting/page-show-03.png
new file mode 100644
index 0000000..c36bb30
Binary files /dev/null and b/src/assets/docs/Tuning_and_Troubleshooting/page-show-03.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/searching_keywords.png b/src/assets/docs/Tuning_and_Troubleshooting/searching_keywords.png
new file mode 100644
index 0000000..cada716
Binary files /dev/null and b/src/assets/docs/Tuning_and_Troubleshooting/searching_keywords.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/shell-error-01.png b/src/assets/docs/Tuning_and_Troubleshooting/shell-error-01.png
new file mode 100644
index 0000000..910150e
Binary files /dev/null and b/src/assets/docs/Tuning_and_Troubleshooting/shell-error-01.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/shell-error-02.png b/src/assets/docs/Tuning_and_Troubleshooting/shell-error-02.png
new file mode 100644
index 0000000..71d5e7e
Binary files /dev/null and b/src/assets/docs/Tuning_and_Troubleshooting/shell-error-02.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/shell-error-03.png b/src/assets/docs/Tuning_and_Troubleshooting/shell-error-03.png
new file mode 100644
index 0000000..4bb9cfe
Binary files /dev/null and b/src/assets/docs/Tuning_and_Troubleshooting/shell-error-03.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/shell-error-04.png b/src/assets/docs/Tuning_and_Troubleshooting/shell-error-04.png
new file mode 100644
index 0000000..c2df857
Binary files /dev/null and b/src/assets/docs/Tuning_and_Troubleshooting/shell-error-04.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/shell-error-05.png b/src/assets/docs/Tuning_and_Troubleshooting/shell-error-05.png
new file mode 100644
index 0000000..3635584
Binary files /dev/null and b/src/assets/docs/Tuning_and_Troubleshooting/shell-error-05.png differ
diff --git a/src/assets/docs/Tunning_And_Troubleshooting/debug-01.png b/src/assets/docs/Tunning_And_Troubleshooting/debug-01.png
new file mode 100644
index 0000000..9834b3d
Binary files /dev/null and b/src/assets/docs/Tunning_And_Troubleshooting/debug-01.png differ
diff --git a/src/assets/docs/Tunning_And_Troubleshooting/debug-02.png b/src/assets/docs/Tunning_And_Troubleshooting/debug-02.png
new file mode 100644
index 0000000..c7621b5
Binary files /dev/null and b/src/assets/docs/Tunning_And_Troubleshooting/debug-02.png differ
diff --git a/src/assets/docs/Tunning_And_Troubleshooting/debug-03.png b/src/assets/docs/Tunning_And_Troubleshooting/debug-03.png
new file mode 100644
index 0000000..16788c3
Binary files /dev/null and b/src/assets/docs/Tunning_And_Troubleshooting/debug-03.png differ
diff --git a/src/assets/docs/Tunning_And_Troubleshooting/debug-04.png b/src/assets/docs/Tunning_And_Troubleshooting/debug-04.png
new file mode 100644
index 0000000..cb944ee
Binary files /dev/null and b/src/assets/docs/Tunning_And_Troubleshooting/debug-04.png differ
diff --git a/src/assets/docs/Tunning_And_Troubleshooting/debug-05.png b/src/assets/docs/Tunning_And_Troubleshooting/debug-05.png
new file mode 100644
index 0000000..2c5972c
Binary files /dev/null and b/src/assets/docs/Tunning_And_Troubleshooting/debug-05.png differ
diff --git a/src/assets/docs/Tunning_And_Troubleshooting/debug-06.png b/src/assets/docs/Tunning_And_Troubleshooting/debug-06.png
new file mode 100644
index 0000000..a64cec6
Binary files /dev/null and b/src/assets/docs/Tunning_And_Troubleshooting/debug-06.png differ
diff --git a/src/assets/docs/Tunning_And_Troubleshooting/debug-07.png b/src/assets/docs/Tunning_And_Troubleshooting/debug-07.png
new file mode 100644
index 0000000..935d5bc
Binary files /dev/null and b/src/assets/docs/Tunning_And_Troubleshooting/debug-07.png differ
diff --git a/src/assets/docs/Tunning_And_Troubleshooting/debug-08.png b/src/assets/docs/Tunning_And_Troubleshooting/debug-08.png
new file mode 100644
index 0000000..d2a3328
Binary files /dev/null and b/src/assets/docs/Tunning_And_Troubleshooting/debug-08.png differ
diff --git a/src/assets/docs/deploy/distributed_deployment.png b/src/assets/docs/deploy/distributed_deployment.png
new file mode 100644
index 0000000..8cd86c5
Binary files /dev/null and b/src/assets/docs/deploy/distributed_deployment.png differ
diff --git a/src/assets/docs/deployment/Linkis1.0_combined_eureka.png b/src/assets/docs/deployment/Linkis1.0_combined_eureka.png
new file mode 100644
index 0000000..809dbee
Binary files /dev/null and b/src/assets/docs/deployment/Linkis1.0_combined_eureka.png differ
diff --git a/src/assets/docs/manual/ECM_all_engine_information.png b/src/assets/docs/manual/ECM_all_engine_information.png
new file mode 100644
index 0000000..a182e84
Binary files /dev/null and b/src/assets/docs/manual/ECM_all_engine_information.png differ
diff --git a/src/assets/docs/manual/ECM_editing_interface.png b/src/assets/docs/manual/ECM_editing_interface.png
new file mode 100644
index 0000000..e611e3e
Binary files /dev/null and b/src/assets/docs/manual/ECM_editing_interface.png differ
diff --git a/src/assets/docs/manual/ECM_management_interface.png b/src/assets/docs/manual/ECM_management_interface.png
new file mode 100644
index 0000000..4764732
Binary files /dev/null and b/src/assets/docs/manual/ECM_management_interface.png differ
diff --git a/src/assets/docs/manual/administrator_view.png b/src/assets/docs/manual/administrator_view.png
new file mode 100644
index 0000000..f5b7041
Binary files /dev/null and b/src/assets/docs/manual/administrator_view.png differ
diff --git a/src/assets/docs/manual/click_the_instance_name_to_view_engine_information.png b/src/assets/docs/manual/click_the_instance_name_to_view_engine_information.png
new file mode 100644
index 0000000..2ecd27c
Binary files /dev/null and b/src/assets/docs/manual/click_the_instance_name_to_view_engine_information.png differ
diff --git a/src/assets/docs/manual/edit_directory.png b/src/assets/docs/manual/edit_directory.png
new file mode 100644
index 0000000..7a30e3e
Binary files /dev/null and b/src/assets/docs/manual/edit_directory.png differ
diff --git a/src/assets/docs/manual/eureka_registration_center.png b/src/assets/docs/manual/eureka_registration_center.png
new file mode 100644
index 0000000..9585c20
Binary files /dev/null and b/src/assets/docs/manual/eureka_registration_center.png differ
diff --git a/src/assets/docs/manual/global_history_interface.png b/src/assets/docs/manual/global_history_interface.png
new file mode 100644
index 0000000..59eee9b
Binary files /dev/null and b/src/assets/docs/manual/global_history_interface.png differ
diff --git a/src/assets/docs/manual/global_history_query_button.png b/src/assets/docs/manual/global_history_query_button.png
new file mode 100644
index 0000000..eec31de
Binary files /dev/null and b/src/assets/docs/manual/global_history_query_button.png differ
diff --git a/src/assets/docs/manual/global_variable_interface.png b/src/assets/docs/manual/global_variable_interface.png
new file mode 100644
index 0000000..89b1cf2
Binary files /dev/null and b/src/assets/docs/manual/global_variable_interface.png differ
diff --git a/src/assets/docs/manual/microservice_management_interface.png b/src/assets/docs/manual/microservice_management_interface.png
new file mode 100644
index 0000000..593edb4
Binary files /dev/null and b/src/assets/docs/manual/microservice_management_interface.png differ
diff --git a/src/assets/docs/manual/new_application_type.png b/src/assets/docs/manual/new_application_type.png
new file mode 100644
index 0000000..f260c3d
Binary files /dev/null and b/src/assets/docs/manual/new_application_type.png differ
diff --git a/src/assets/docs/manual/parameter_configuration_interface.png b/src/assets/docs/manual/parameter_configuration_interface.png
new file mode 100644
index 0000000..deadf64
Binary files /dev/null and b/src/assets/docs/manual/parameter_configuration_interface.png differ
diff --git a/src/assets/docs/manual/queue-set.png b/src/assets/docs/manual/queue-set.png
new file mode 100644
index 0000000..735a670
Binary files /dev/null and b/src/assets/docs/manual/queue-set.png differ
diff --git a/src/assets/docs/manual/resource_management_interface.png b/src/assets/docs/manual/resource_management_interface.png
new file mode 100644
index 0000000..918bd08
Binary files /dev/null and b/src/assets/docs/manual/resource_management_interface.png differ
diff --git a/src/assets/docs/manual/sparksql-run.png b/src/assets/docs/manual/sparksql-run.png
new file mode 100644
index 0000000..f0b1d1b
Binary files /dev/null and b/src/assets/docs/manual/sparksql-run.png differ
diff --git a/src/assets/docs/manual/task_execution_log_of_a_single_task.png b/src/assets/docs/manual/task_execution_log_of_a_single_task.png
new file mode 100644
index 0000000..ff0ed86
Binary files /dev/null and b/src/assets/docs/manual/task_execution_log_of_a_single_task.png differ
diff --git a/src/assets/docs/manual/workflow.png b/src/assets/docs/manual/workflow.png
new file mode 100644
index 0000000..3a5919f
Binary files /dev/null and b/src/assets/docs/manual/workflow.png differ
diff --git a/src/assets/docs/wedatasphere_contact_01.png b/src/assets/docs/wedatasphere_contact_01.png
new file mode 100644
index 0000000..5a3d80e
Binary files /dev/null and b/src/assets/docs/wedatasphere_contact_01.png differ
diff --git a/src/assets/docs/wedatasphere_stack_Linkis.png b/src/assets/docs/wedatasphere_stack_Linkis.png
new file mode 100644
index 0000000..36060b9
Binary files /dev/null and b/src/assets/docs/wedatasphere_stack_Linkis.png differ
diff --git a/src/assets/fqa/Q&A.png b/src/assets/fqa/Q&A.png
new file mode 100644
index 0000000..121d7f3
Binary files /dev/null and b/src/assets/fqa/Q&A.png differ
diff --git a/src/assets/fqa/code-fix-01.png b/src/assets/fqa/code-fix-01.png
new file mode 100644
index 0000000..27bdddb
Binary files /dev/null and b/src/assets/fqa/code-fix-01.png differ
diff --git a/src/assets/fqa/db-config-01.png b/src/assets/fqa/db-config-01.png
new file mode 100644
index 0000000..fa1f1c8
Binary files /dev/null and b/src/assets/fqa/db-config-01.png differ
diff --git a/src/assets/fqa/db-config-02.png b/src/assets/fqa/db-config-02.png
new file mode 100644
index 0000000..c2f8443
Binary files /dev/null and b/src/assets/fqa/db-config-02.png differ
diff --git a/src/assets/fqa/debug-01.png b/src/assets/fqa/debug-01.png
new file mode 100644
index 0000000..9834b3d
Binary files /dev/null and b/src/assets/fqa/debug-01.png differ
diff --git a/src/assets/fqa/debug-02.png b/src/assets/fqa/debug-02.png
new file mode 100644
index 0000000..c7621b5
Binary files /dev/null and b/src/assets/fqa/debug-02.png differ
diff --git a/src/assets/fqa/debug-03.png b/src/assets/fqa/debug-03.png
new file mode 100644
index 0000000..16788c3
Binary files /dev/null and b/src/assets/fqa/debug-03.png differ
diff --git a/src/assets/fqa/debug-04.png b/src/assets/fqa/debug-04.png
new file mode 100644
index 0000000..cb944ee
Binary files /dev/null and b/src/assets/fqa/debug-04.png differ
diff --git a/src/assets/fqa/debug-05.png b/src/assets/fqa/debug-05.png
new file mode 100644
index 0000000..2c5972c
Binary files /dev/null and b/src/assets/fqa/debug-05.png differ
diff --git a/src/assets/fqa/debug-06.png b/src/assets/fqa/debug-06.png
new file mode 100644
index 0000000..a64cec6
Binary files /dev/null and b/src/assets/fqa/debug-06.png differ
diff --git a/src/assets/fqa/debug-07.png b/src/assets/fqa/debug-07.png
new file mode 100644
index 0000000..935d5bc
Binary files /dev/null and b/src/assets/fqa/debug-07.png differ
diff --git a/src/assets/fqa/debug-08.png b/src/assets/fqa/debug-08.png
new file mode 100644
index 0000000..d2a3328
Binary files /dev/null and b/src/assets/fqa/debug-08.png differ
diff --git a/src/assets/fqa/hive-config-01.png b/src/assets/fqa/hive-config-01.png
new file mode 100644
index 0000000..6bd0edb
Binary files /dev/null and b/src/assets/fqa/hive-config-01.png differ
diff --git a/src/assets/fqa/linkis-exception-01.png b/src/assets/fqa/linkis-exception-01.png
new file mode 100644
index 0000000..01090d1
Binary files /dev/null and b/src/assets/fqa/linkis-exception-01.png differ
diff --git a/src/assets/fqa/linkis-exception-02.png b/src/assets/fqa/linkis-exception-02.png
new file mode 100644
index 0000000..0f68f12
Binary files /dev/null and b/src/assets/fqa/linkis-exception-02.png differ
diff --git a/src/assets/fqa/linkis-exception-03.png b/src/assets/fqa/linkis-exception-03.png
new file mode 100644
index 0000000..8fb4464
Binary files /dev/null and b/src/assets/fqa/linkis-exception-03.png differ
diff --git a/src/assets/fqa/linkis-exception-04.png b/src/assets/fqa/linkis-exception-04.png
new file mode 100644
index 0000000..5635a20
Binary files /dev/null and b/src/assets/fqa/linkis-exception-04.png differ
diff --git a/src/assets/fqa/linkis-exception-05.png b/src/assets/fqa/linkis-exception-05.png
new file mode 100644
index 0000000..c341a9d
Binary files /dev/null and b/src/assets/fqa/linkis-exception-05.png differ
diff --git a/src/assets/fqa/linkis-exception-06.png b/src/assets/fqa/linkis-exception-06.png
new file mode 100644
index 0000000..b0624ef
Binary files /dev/null and b/src/assets/fqa/linkis-exception-06.png differ
diff --git a/src/assets/fqa/linkis-exception-07.png b/src/assets/fqa/linkis-exception-07.png
new file mode 100644
index 0000000..402f0c9
Binary files /dev/null and b/src/assets/fqa/linkis-exception-07.png differ
diff --git a/src/assets/fqa/linkis-exception-08.png b/src/assets/fqa/linkis-exception-08.png
new file mode 100644
index 0000000..27c1824
Binary files /dev/null and b/src/assets/fqa/linkis-exception-08.png differ
diff --git a/src/assets/fqa/linkis-exception-09.png b/src/assets/fqa/linkis-exception-09.png
new file mode 100644
index 0000000..5b27b4b
Binary files /dev/null and b/src/assets/fqa/linkis-exception-09.png differ
diff --git a/src/assets/fqa/linkis-exception-10.png b/src/assets/fqa/linkis-exception-10.png
new file mode 100644
index 0000000..7c361e7
Binary files /dev/null and b/src/assets/fqa/linkis-exception-10.png differ
diff --git a/src/assets/fqa/page-show-01.png b/src/assets/fqa/page-show-01.png
new file mode 100644
index 0000000..d953cb6
Binary files /dev/null and b/src/assets/fqa/page-show-01.png differ
diff --git a/src/assets/fqa/page-show-02.png b/src/assets/fqa/page-show-02.png
new file mode 100644
index 0000000..af273bb
Binary files /dev/null and b/src/assets/fqa/page-show-02.png differ
diff --git a/src/assets/fqa/page-show-03.png b/src/assets/fqa/page-show-03.png
new file mode 100644
index 0000000..c36bb30
Binary files /dev/null and b/src/assets/fqa/page-show-03.png differ
diff --git a/src/assets/fqa/searching_keywords.png b/src/assets/fqa/searching_keywords.png
new file mode 100644
index 0000000..cada716
Binary files /dev/null and b/src/assets/fqa/searching_keywords.png differ
diff --git a/src/assets/fqa/shell-error-01.png b/src/assets/fqa/shell-error-01.png
new file mode 100644
index 0000000..910150e
Binary files /dev/null and b/src/assets/fqa/shell-error-01.png differ
diff --git a/src/assets/fqa/shell-error-02.png b/src/assets/fqa/shell-error-02.png
new file mode 100644
index 0000000..71d5e7e
Binary files /dev/null and b/src/assets/fqa/shell-error-02.png differ
diff --git a/src/assets/fqa/shell-error-03.png b/src/assets/fqa/shell-error-03.png
new file mode 100644
index 0000000..4bb9cfe
Binary files /dev/null and b/src/assets/fqa/shell-error-03.png differ
diff --git a/src/assets/fqa/shell-error-04.png b/src/assets/fqa/shell-error-04.png
new file mode 100644
index 0000000..c2df857
Binary files /dev/null and b/src/assets/fqa/shell-error-04.png differ
diff --git a/src/assets/fqa/shell-error-05.png b/src/assets/fqa/shell-error-05.png
new file mode 100644
index 0000000..3635584
Binary files /dev/null and b/src/assets/fqa/shell-error-05.png differ
diff --git a/src/docs/deploy/distributed_zh.md b/src/docs/deploy/distributed_zh.md
index c863777..67e82d6 100644
--- a/src/docs/deploy/distributed_zh.md
+++ b/src/docs/deploy/distributed_zh.md
@@ -97,4 +97,4 @@ EUREKA_URL=http://$EUREKA_INSTALL_IP:$EUREKA_PORT/eureka/,http:/server1:port/eur
 After the modification, start the microservices and open the Eureka registration page from the web; the microservices that have
 successfully registered with Eureka are shown there, and DS Replicas will also display the neighboring replica nodes of the cluster.
 
-![](Images/分布式部署微服务.png)
+![](../../assets/docs/deploy/distributed_deployment.png)
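For context, a minimal sketch of the multi-replica EUREKA_URL setting this page documents, using hypothetical host names and values; only the EUREKA_URL format itself comes from the hunk context above:

```
# Hypothetical example; substitute the real cluster nodes from your config.sh.
EUREKA_INSTALL_IP=192.168.1.10   # first Eureka node
EUREKA_PORT=20303                # Eureka port (assumed default)
# Comma-separate one /eureka/ endpoint per replica so each microservice
# registers with the whole cluster rather than a single node.
EUREKA_URL=http://$EUREKA_INSTALL_IP:$EUREKA_PORT/eureka/,http://server1:$EUREKA_PORT/eureka/
```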
diff --git a/src/docs/deploy/linkis_en.md b/src/docs/deploy/linkis_en.md
index b74dbd9..7cfe807 100644
--- a/src/docs/deploy/linkis_en.md
+++ b/src/docs/deploy/linkis_en.md
@@ -242,5 +242,5 @@ If you have not specified EUREKA_INSTALL_IP and EUREKA_PORT in config.sh,
 
 As shown in the figure below, if all of the following microservices are registered on the Eureka page, it means that they've started successfully and are able to work.
 
-![Linkis1.0_Eureka](../Images/deployment/Linkis1.0_combined_eureka.png)
+![Linkis1.0_Eureka](../../assets/docs/deploy/Linkis1.0_combined_eureka.png)
 
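To spot-check the registrations this page describes, one possible command-line probe, assuming a standard Spring Cloud Eureka REST endpoint at the address configured above (host, port, and the grep filter are illustrative only):

```
# List the application names currently registered with Eureka.
# Adjust host/port to the EUREKA_INSTALL_IP/EUREKA_PORT values from config.sh.
curl -s -H "Accept: application/json" \
  "http://$EUREKA_INSTALL_IP:$EUREKA_PORT/eureka/apps" \
  | grep -o '"name":"[^"]*"'
```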
diff --git a/src/docs/manual/ConsoleUserManual_en.md b/src/docs/manual/ConsoleUserManual_en.md
index 1d6704e..a78ee77 100644
--- a/src/docs/manual/ConsoleUserManual_en.md
+++ b/src/docs/manual/ConsoleUserManual_en.md
@@ -34,20 +34,20 @@ Introduction to the functions and use of Computation Governance Console
 Global history
 --------
 
-> ![](Images/Global History Interface.png)
+> ![](../../assets/docs/manual/global_history_interface.png)
 
 
 > The global history interface lists the user's own Linkis task submission records. The execution status of each task is displayed here, and the reason a task failed can be checked by clicking the view button on the left side of the task
 
-> ![./media/image2.png](Images/Global History Query Button.png)
+> ![./media/image2.png](../../assets/docs/manual/global_history_query_button.png)
 
 
-> ![./media/image3.png](Images/task execution log of a single task.png)
+> ![./media/image3.png](../../assets/docs/manual/task_execution_log_of_a_single_task.png)
 
 
 > For Linkis Computation Governance Console administrators, the historical tasks of all users can be viewed by clicking "Switch to administrator view" on the page.
 
-> ![./media/image4.png](Images/Administrator View.png)
+> ![./media/image4.png](../../assets/docs/manual/administrator_view.png)
 
 
 Resource management
@@ -55,7 +55,7 @@ Resource management
 
 > In the resource management interface, users can see the status of the engines they have started and their resource usage, and can also stop an engine from the page.
 
-> ![./media/image5.png](Images/Resource Management Interface.png)
+> ![./media/image5.png](../../assets/docs/manual/resource_management_interface.png)
 
 
 Parameter configuration
@@ -63,17 +63,17 @@ Parameter configuration
 
 > The parameter configuration interface provides user-defined parameter management. Users can manage engine-related configuration in this interface, and administrators can add application types and engines here.
 
-> ![./media/image6.png](Images/parameter configuration interface.png)
+> ![./media/image6.png](../../assets/docs/manual/parameter_configuration_interface.png)
 
 
 > Users can expand all configuration information in a directory by clicking an application type at the top and then selecting an engine type within that application; after modifying the configuration, click "Save" for the changes to take effect.

 > Editing the catalog and adding application types are visible only to administrators. Click the edit button to delete an existing application or engine configuration (note: deleting an application directly removes all engine configurations under it and cannot be restored) or to add an engine; click "New Application" to add a new application type.
 
-> ![./media/image7.png](Images/edit directory.png)
+> ![./media/image7.png](../../assets/docs/manual/edit_directory.png)
 
 
-> ![./media/image8.png](Images/New application type.png)
+> ![./media/image8.png](../../assets/docs/manual/new_application_type.png)
 
 
 Global variable
@@ -81,7 +81,7 @@ Global variable
 
 > In the global variable interface, users can define custom variables for code writing; just click the edit button to add parameters.
 
-> ![./media/image9.png](Images/Global Variable Interface.png)
+> ![./media/image9.png](../../assets/docs/manual/global_variable_interface.png)
 
 
 ECM management
@@ -89,19 +89,19 @@ ECM management
 
 > The ECM management interface is used by administrators to manage ECMs and all engines. On this interface you can view ECM status information, modify ECM labels and status, and query all engine information under each ECM. It is visible only to administrators; how to configure an administrator is described in the second chapter of this article.
 
-> ![./media/image10.png](Images/ECM management interface.png)
+> ![./media/image10.png](../../assets/docs/manual/ECM_management_interface.png)
 
 
 > Click the edit button to edit the ECM's label information (only some labels can be edited) and to modify the ECM's status.
 
-> ![./media/image11.png](Images/ECM editing interface.png)
+> ![./media/image11.png](../../assets/docs/manual/ECM_editing_interface.png)
 
 
 > Click the instance name of an ECM to view all engine information under that ECM.
 
-> ![](Images/Click the instance name to view engine information.png)
+> ![](../../assets/docs/manual/click_the_instance_name_to_view_engine_information.png)
 
-> ![](All engine information under Images/ECM.png)
+> ![](../../assets/docs/manual/ECM_all_engine_information.png)
 
 > Similarly, you can stop engines on this interface and edit their label information.
 
@@ -110,9 +110,9 @@ Microservice management
 
 > The microservice management interface shows all microservice information under Linkis and is visible only to administrators. Linkis's own microservices can be viewed by clicking through to the Eureka registration center, while the microservices associated with Linkis are listed directly on this interface.
 
-> ![](Images/microservice management interface.png)
+> ![](../../assets/docs/manual/microservice_management_interface.png)
 
-> ![](Images/Eureka registration center.png)
+> ![](../../assets/docs/manual/eureka_registration_center.png)
 
 Common problems
 --------
diff --git a/src/docs/manual/ConsoleUserManual_zh.md b/src/docs/manual/ConsoleUserManual_zh.md
index 5f5b764..b4671f2 100644
--- a/src/docs/manual/ConsoleUserManual_zh.md
+++ b/src/docs/manual/ConsoleUserManual_zh.md
@@ -34,20 +34,20 @@
 Global history
 --------
 
->   ![](Images/全局历史界面.png)
+>   ![](../../assets/docs/manual/global_history_interface.png)
 
 
 >   The global history interface lists the user's own Linkis task submission records; the execution status of each task is displayed here, and the reason a task failed can be checked by clicking the view button on the left side of the task
 
->   ![./media/image2.png](Images/全局历史查询按钮.png)
+>   ![./media/image2.png](../../assets/docs/manual/global_history_query_button.png)
 
 
->   ![./media/image3.png](Images/单个任务的任务执行日志.png)
+>   ![./media/image3.png](../../assets/docs/manual/task_execution_log_of_a_single_task.png)
 
 
 >   For Linkis Computation Governance Console administrators, the historical tasks of all users can be viewed by clicking "Switch to administrator view" on the page.
 
->   ![./media/image4.png](Images/管理员视图.png)
+>   ![./media/image4.png](../../assets/docs/manual/administrator_view.png)
 
 
 Resource management
@@ -55,7 +55,7 @@
 
 >   In the resource management interface, users can see the status of the engines they have started and their resource usage, and can also stop engines from the page.
 
->   ![./media/image5.png](Images/资源管理界面.png)
+>   ![./media/image5.png](../../assets/docs/manual/resource_management_interface.png)
 
 
 Parameter configuration
@@ -63,17 +63,17 @@
 
 >   The parameter configuration interface provides user-defined parameter management; users can manage engine-related configuration here, and administrators can additionally add application types and engines.
 
->   ![./media/image6.png](Images/参数配置界面.png)
+>   ![./media/image6.png](../../assets/docs/manual/parameter_configuration_interface.png)
 
 
 >   By clicking an application type at the top and then selecting an engine type within that application, users can expand all configuration information in that directory; after modifying the configuration, click "Save" for the changes to take effect.
 
 >   Editing the catalog and adding application types are visible only to administrators. Clicking the edit button deletes an existing application or engine configuration (note: deleting an application directly removes all engine configurations under it and cannot be restored) or adds an engine; clicking "New Application" adds a new application type.
 
->   ![./media/image7.png](Images/编辑目录.png)
+>   ![./media/image7.png](../../assets/docs/manual/edit_directory.png)
 
 
->   ![./media/image8.png](Images/新增应用类型.png)
+>   ![./media/image8.png](../../assets/docs/manual/new_application_type.png)
 
 
 Global variable
@@ -81,7 +81,7 @@
 
 >   In the global variable interface, users can define custom variables for code writing; just click the edit button to add parameters.
 
->   ![./media/image9.png](Images/全局变量界面.png)
+>   ![./media/image9.png](../../assets/docs/manual/global_variable_interface.png)
 
 
 ECM管理
@@ -89,19 +89,19 @@ ECM管理
 
 >   ECM管理界面是用于管理员管理ECM和所有引擎的地方,该界面可以查看到ECM的状态信息、修改ECM标签信息、修改ECM状态信息以及查询各个ECM下的所有引擎信息。且仅管理员可见,管理员的配置方式可以在本文章第二大章节查看。
 
->   ![./media/image10.png](Images/ECM管理界面.png)
+>   ![./media/image10.png](../../assets/docs/manual/ECM_management_interface.png)
 
 
 >   点击编辑按钮,可以编辑ECM的标签信息(仅允许编辑部分标签),以及修改ECM的状态。
 
->   ![./media/image11.png](Images/ECM编辑界面.png)
+>   ![./media/image11.png](../../assets/docs/manual/ECM_editing_interface.png)
 
 
 >   点击ECM的实例名称,可以查看该ECM下所有的引擎信息。
 
->   ![](Images/点击实例名称查看引擎信息.png)
+>   ![](../../assets/docs/manual/click_the_instance_name_to_view_engine_information.png)
 
->   ![](Images/ECM下所有的引擎信息.png)
+>   ![](../../assets/docs/manual/ECM_all_engine_information.png)
 
 >   同样地,可以在该界面停止引擎,并且可以编辑引擎的标签信息。
 
@@ -110,9 +110,9 @@ ECM管理
 
 >   微服务管理界面可以查看Linkis下的所有微服务信息,该界面也仅允许管理员可见。linkis自身的微服务可以点击Eureka注册中心查看,与linkis关联的微服务会直接在该界面列出。
 
->   ![](Images/微服务管理界面.png)
+>   ![](../../assets/docs/manual/microservice_management_interface.png)
 
->   ![](Images/Eureka注册中心.png)
+>   ![](../../assets/docs/manual/eureka_registration_center.png)
 
 常见问题
 --------
diff --git a/src/docs/manual/HowToUse_en.md b/src/docs/manual/HowToUse_en.md
index 2b1172e..f450297 100644
--- a/src/docs/manual/HowToUse_en.md
+++ b/src/docs/manual/HowToUse_en.md
@@ -16,14 +16,14 @@
 ## 2.1. Use Scriptis to execute scripts
 
 &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Currently Scriptis supports submitting a variety of task types to Linkis, including Spark SQL, Hive SQL, Scala, PythonSpark, etc. To meet data analysis needs, the left side of Scriptis provides views of user workspace information, user database and table information, user-defined functions, and HDFS directories, and also supports uploading and downloading, result-set exporting, and other functions. Scriptis is very simple to use: you can conveniently write scripts in the editor and submit them to Linkis to run.
-![Scriptis uses Linkis](../Images/EngineUsage/sparksql-run.png)
+![Scriptis uses Linkis](../../assets/docs/manual/sparksql-run.png)
 
 ## 2.2. Scriptis Management Console
 
 &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Linkis provides an interface for resource configuration and management. If you want to configure and manage task resources, you can set them on the Scriptis management console interface, including queue settings, resource configuration, the number of engine instances, etc. Through the management console, you can easily configure the resources for submitting tasks to Linkis, making it more convenient and faster.
-![Scriptis uses Linkis](../Images/EngineUsage/queue-set.png)
+![Scriptis uses Linkis](../../assets/docs/manual/queue-set.png)
 
 ## 3. DataSphere Studio uses Linkis
 
 &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[**DataSphere Studio**](https://github.com/WeBankFinTech/DataSphereStudio), referred to as DSS, is WeBank's open-source one-stop data analysis and processing platform. The DSS interactive analysis module integrates Scriptis, so using DSS for interactive analysis is the same as using Scriptis. In addition to the basic functions of Scriptis, DSS provides and integrates richer and more powerful data analysis features, including data services for data extraction, workflows for report development, and the visualization analysis software Visualis. Thanks to this native support, DSS is currently the software most highly integrated with Linkis; if you want the full Linkis functionality, it is recommended to use DSS together with Linkis.
-![DSS Run Workflow](../Images/EngineUsage/workflow.png)
+![DSS Run Workflow](../../assets/docs/manual/workflow.png)
diff --git a/src/docs/manual/HowToUse_zh.md b/src/docs/manual/HowToUse_zh.md
index dcdb96b..9bbc435 100644
--- a/src/docs/manual/HowToUse_zh.md
+++ b/src/docs/manual/HowToUse_zh.md
@@ -10,11 +10,11 @@
 &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;如果需要使用Linkis完成交互式在线分析处理的工作,并且不需要诸如工作流开发、工作流调度、数据服务等数据分析应用工具,可以单独安装[**Scriptis**](https://github.com/WeBankFinTech/Scriptis),详细安装教程可参考其对应的安装部署文档。  
 ## 2.1. 使用Scriptis执行脚本
 &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;目前Scriptis支持向Linkis提交多种任务类型,包括Spark SQL、Hive SQL、Scala、PythonSpark等,为了满足数据分析的需求,Scriptis左侧,提供查看用户工作空间信息、用户数据库和表信息、用户自定义函数,以及HDFS目录,同时支持上传下载,结果集导出等功能。Scriptis使用Linkis十分简单,可以很方便的在编辑栏书写脚本,提交到Linkis运行。  
-![Scriptis使用Linkis](../Images/EngineUsage/sparksql-run.png)
+![Scriptis使用Linkis](../../assets/docs/manual/sparksql-run.png)
 ## 2.2. Scriptis管理台
 &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Linkis提供资源配置和管理的接口,如果希望对任务资源进行配置管理,可以在Scriptis的管理台界面进行设置,包括队列设置、资源配置、引擎实例个数等。通过管理台,可以很方便的配置向Linkis提交任务的资源,使得更加方便快捷。  
-![Scriptis使用Linkis](../Images/EngineUsage/queue-set.png)
+![Scriptis使用Linkis](../../assets/docs/manual/queue-set.png)
 
 ## 3. DataSphere Studio使用Linkis
 &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[**DataSphere Studio**](https://github.com/WeBankFinTech/DataSphereStudio)简称DSS,是微众银行大数据平台开源的一站式数据分析处理平台,DSS交互式分析模块集成了Scriptis,使用DSS进行交互式分析和Scriptis一样,除了提供Scriptis的基本功能外,DSS提供和集成了更加丰富和强大的数据分析功能,包括用于数据提取的数据服务、开发报表的工作流、可视化分析软件Visualis等。由于原生的支持,目前DSS是与Linkis集成度最高的软件,如果希望使用完整的Linkis功能,建议使用DSS搭配Linkis一起使用。  
-![DSS运行工作流](../Images/EngineUsage/workflow.png)
+![DSS运行工作流](../../assets/docs/manual/workflow.png)
diff --git a/src/pages/faq/faq_en.md b/src/pages/faq/faq_en.md
index d2616c2..8beb0d7 100644
--- a/src/pages/faq/faq_en.md
+++ b/src/pages/faq/faq_en.md
@@ -12,7 +12,7 @@ Solution: jetty-servlet and jetty-security versions need to be upgraded from 9.3
 
 Specific exception stack:
 
-![linkis-exception-01.png](../Images/Tuning_and_Troubleshooting/linkis-exception-01.png)
+![linkis-exception-01.png](../../assets/fqa/linkis-exception-01.png)
 
 Solution: jar package conflict; delete asm-5.0.4.jar.
 
@@ -20,64 +20,64 @@ Solution: jar package conflict, delete asm-5.0.4.jar;
 
 Specific exception stack:
 
-![linkis-exception-02.png](../Images/Tuning_and_Troubleshooting/linkis-exception-02.png)
+![linkis-exception-02.png](../../assets/fqa/linkis-exception-02.png)
 
 
 Solution: caused by a Linkis-datasource configuration problem; modify the three parameters beginning with hive.meta in linkis.properties:
 
-![hive-config-01.png](../Images/Tuning_and_Troubleshooting/hive-config-01.png)
+![hive-config-01.png](../../assets/fqa/hive-config-01.png)
 
 
 #### Q4. When starting the microservice linkis-ps-datasource, the following exception ClassNotFoundException HttpClient is reported:
 
 Specific exception stack:
 
-![linkis-exception-03.png](../Images/Tuning_and_Troubleshooting/linkis-exception-03.png)
+![linkis-exception-03.png](../../assets/fqa/linkis-exception-03.png)
 
 Solution: There is a problem with linkis-metadata-dev-1.0.0.jar compiled in 1.0, and it needs to be recompiled and packaged.
 
 #### Q5. Clicking scriptis-database returns no data; the phenomenon is as follows:
 
-![page-show-01.png](../Images/Tuning_and_Troubleshooting/page-show-01.png)
+![page-show-01.png](../../assets/fqa/page-show-01.png)
 
 Solution: The reason is that hive is not authorized to the hadoop user. The authorization data is as follows:
 
-![db-config-01.png](../Images/Tuning_and_Troubleshooting/db-config-01.png)
+![db-config-01.png](../../assets/fqa/db-config-01.png)
 
 #### Q6. When the shell engine is scheduled for execution, the page reports Insufficient resource, requesting available engine timeout, and the engineconnmanager's linkis.out reports the following error:
 
-![linkis-exception-04.png](../Images/Tuning_and_Troubleshooting/linkis-exception-04.png)
+![linkis-exception-04.png](../../assets/fqa/linkis-exception-04.png)
 
 Solution: The reason is that /appcom/tmp/hadoop/workDir had not been created for hadoop. Create it in advance as the root user, then grant the hadoop user access.
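
A rough sketch of the fix, assuming the deployment user is named hadoop (adjust user and group to your environment):

```
# run as root: create the work directory the shell engine expects,
# then hand it over to the hadoop user
mkdir -p /appcom/tmp/hadoop/workDir
chown -R hadoop:hadoop /appcom/tmp/hadoop/workDir
```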
 
 #### Q7. When the shell engine is scheduled for execution, the engine execution directory reports the following error /bin/java: No such file or directory:
 
-![shell-error-01.png](../Images/Tuning_and_Troubleshooting/shell-error-01.png)
+![shell-error-01.png](../../assets/fqa/shell-error-01.png)
 
 Solution: There is a problem with the local java environment variables, and you need to make a symbolic link to the java command.
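A minimal sketch, assuming the JDK lives under $JAVA_HOME (adjust to your installation):

```
# make /bin/java resolve to the real JDK binary
ln -s "$JAVA_HOME/bin/java" /bin/java
```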
 
 #### Q8. When the hive engine is scheduled, the following error is reported: EngineConnPluginNotFoundException: errorCode:70063
 
-![linkis-exception-05.png](../Images/Tuning_and_Troubleshooting/linkis-exception-05.png)
+![linkis-exception-05.png](../../assets/fqa/linkis-exception-05.png)
 
 Solution: This is caused by not modifying the corresponding engine's version during installation, so the engine type inserted into the db is the default version, while the compiled version is not the default version. Specific modification steps: cd /appcom/Install/dss-linkis/linkis/lib/linkis-engineconn-plugins/, rename the v2.1.1 directory under the dist directory to v1.2.1, and rename the 2.1.1 subdirectory under the plugin directory to the default version 1.2.1. For Spark, modify dist/v2.4.3 and plugin/2.4.3 accordingly. Finally, restart the engineplugin service.
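
A sketch of those steps for the hive engine (the per-engine directory layout is an assumption; use the engine and version numbers from your own build):

```
cd /appcom/Install/dss-linkis/linkis/lib/linkis-engineconn-plugins/
# rename the compiled version directories back to the default version
mv hive/dist/v2.1.1 hive/dist/v1.2.1
mv hive/plugin/2.1.1 hive/plugin/1.2.1
# for Spark, rename dist/v2.4.3 and plugin/2.4.3 the same way,
# then restart the engineplugin service
```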
 
 #### Q9. After the linkis microservice is started, the following error is reported: Load balancer does not have available server for client:
 
-![page-show-02.png](../Images/Tuning_and_Troubleshooting/page-show-02.png)
+![page-show-02.png](../../assets/fqa/page-show-02.png)
 
 Solution: This is because the linkis microservice has just started and the registration has not been completed. Wait for 1~2 minutes and try again.
 
 #### Q10. When the hive engine is scheduled for execution, the following error is reported: operation failed NullPointerException:
 
-![linkis-exception-06.png](../Images/Tuning_and_Troubleshooting/linkis-exception-06.png)
+![linkis-exception-06.png](../../assets/fqa/linkis-exception-06.png)
 
 
 Solution: The server lacks an environment variable; add export HIVE_CONF_DIR=/etc/hive/conf to /etc/profile.
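
For example, as root (then re-login or reload the profile):

```
echo 'export HIVE_CONF_DIR=/etc/hive/conf' >> /etc/profile
source /etc/profile
```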
 
 #### Q11. When the hive engine is scheduled, the engineConnManager error log shows method did not exist: SessionHandler:
 
-![linkis-exception-07.png](../Images/Tuning_and_Troubleshooting/linkis-exception-07.png)
+![linkis-exception-07.png](../../assets/fqa/linkis-exception-07.png)
 
 Solution: Under the hive engine lib, the jetty jar packages conflict; replace jetty-security and jetty-server with 9.4.20.
 
@@ -156,11 +156,11 @@ Solution: The reason is that there is a corresponding relationship between the v
 
 #### Q15. When the python engine is scheduled, the following error is reported: Python proces is not alive:
 
-![linkis-exception-08.png](../Images/Tuning_and_Troubleshooting/linkis-exception-08.png)
+![linkis-exception-08.png](../../assets/fqa/linkis-exception-08.png)
 
 Solution: The server has the anaconda3 package manager installed. After debugging python, two problems were found: (1) the pandas and matplotlib modules are missing and need to be installed manually; (2) the new version of the python engine depends on a higher python version when executing: first install python3, then make a symbolic link (as shown in the figure below) and restart the engineplugin service.
 
-![shell-error-02.png](../Images/Tuning_and_Troubleshooting/shell-error-02.png)
+![shell-error-02.png](../../assets/fqa/shell-error-02.png)
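
A hedged sketch of both fixes; the python3 path below is an assumption, so substitute the one shown in the figure above:

```
# (1) install the missing modules
pip3 install pandas matplotlib
# (2) point the engine's python at the newer interpreter,
# then restart the engineplugin service
ln -sf /usr/local/bin/python3 /usr/bin/python
```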
 
 #### Q16. When the spark engine is executed, the following error NoClassDefFoundError: org/apache/hadoop/hive/ql/io/orc/OrcFile is reported:
 
@@ -212,7 +212,7 @@ Solution: cdh6.3.2 cluster spark engine classpath only has /opt/cloudera/parcels
 
 #### Q17. When the spark engine starts, it reports queue default is not exists in YARN, the specific information is as follows:
 
-![linkis-exception-09.png](../Images/Tuning_and_Troubleshooting/linkis-exception-09.png)
+![linkis-exception-09.png](../../assets/fqa/linkis-exception-09.png)
 
 Solution: When linkis-resource-manager-dev-1.0.0.jar in 1.0 pulls queue information, there is a compatibility problem in parsing the json. After the Linkis developers optimized it, a new package was provided. The jar package path: /appcom/Install/dss-linkis/linkis/lib/linkis-computation-governance/linkis-cg-linkismanager/.
 
@@ -220,35 +220,35 @@ Solution: When the 1.0 linkis-resource-manager-dev-1.0.0.jar pulls queue informa
 
 Solution: To migrate the yarn address configuration to the DB configuration, the following configuration needs to be added:
  
-![db-config-02.png](../Images/Tuning_and_Troubleshooting/db-config-02.png)
+![db-config-02.png](../../assets/fqa/db-config-02.png)
 
 #### Q19. When the spark engine is scheduled, it executes successfully the first time; if executed again, it reports Spark application sc has already stopped, please restart it. The specific errors are as follows:
 
-![page-show-03.png](../Images/Tuning_and_Troubleshooting/page-show-03.png)
+![page-show-03.png](../../assets/fqa/page-show-03.png)
 
 Solution: The background is that the linkis1.0 engine architecture has been adjusted: after the spark session is created, the session is reused to avoid overhead and improve execution efficiency. When we execute spark.scala for the first time, the script contains spark.stop(); this command closes the newly created session, so when it is executed again, it prompts that the session is closed, please restart it. Solution: first remove stop() from all scripts; then follow the execution order: execute default.sql first, then scalaspark and pythonspark.
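
To locate the scripts that still call stop() before cleaning them up, something like the following helps (the scripts directory is a placeholder):

```
# list every script that closes the shared spark session
grep -rn "stop()" /path/to/your/scripts
```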
 
 #### Q20. When pythonspark is scheduled for execution, the error initialize python executor failed ClassNotFoundException org.slf4j.impl.StaticLoggerBinder is reported, as follows:
 
-![linkis-exception-10.png](../Images/Tuning_and_Troubleshooting/linkis-exception-10.png)
+![linkis-exception-10.png](../../assets/fqa/linkis-exception-10.png)
 
 Solution: The reason is that the spark server lacks slf4j-log4j12-1.7.25.jar; copy the above jar package to /opt/cloudera/parcels/CDH-6.3.2-1.cdh6.3.2.p0.1605554/lib/spark/jars.
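
For example (where the jar comes from is an assumption; take it from your local maven repository or another spark distribution):

```
cp slf4j-log4j12-1.7.25.jar \
   /opt/cloudera/parcels/CDH-6.3.2-1.cdh6.3.2.p0.1605554/lib/spark/jars/
```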
 
 #### Q21. When pythonspark is scheduled for execution, the error initialize python executor failed, submit-version error is reported, as follows:
 
-![shell-error-03.png](../Images/Tuning_and_Troubleshooting/shell-error-03.png)
+![shell-error-03.png](../../assets/fqa/shell-error-03.png)
 
 Solution: The reason is that the linkis1.0 pythonSpark engine has a bug in the code that obtains the spark version. The fix is as follows:
 
-![code-fix-01.png](../Images/Tuning_and_Troubleshooting/code-fix-01.png)
+![code-fix-01.png](../../assets/fqa/code-fix-01.png)
 
 #### Q22. When pythonspark is scheduled to execute, it reports TypeError: an integer is required (got type bytes) (reproduced by separately executing the command that pulls up the engine); the details are as follows:
 
-![shell-error-04.png](../Images/Tuning_and_Troubleshooting/shell-error-04.png)
+![shell-error-04.png](../../assets/fqa/shell-error-04.png)
 
 Solution: The reason is that the system spark and python versions are incompatible: python is 3.8, spark is 2.4.0-cdh6.3.2, and spark requires python version <= 3.6. Downgrade python to 3.6 and comment out the following lines of /opt/cloudera/parcels/CDH/lib/spark/python/lib/pyspark.zip/pyspark/context.py:
 
-![shell-error-05.png](../Images/Tuning_and_Troubleshooting/shell-error-05.png)
+![shell-error-05.png](../../assets/fqa/shell-error-05.png)
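
Since the server already has the anaconda3 package manager (see Q15), one hedged way to obtain a 3.6 interpreter is a dedicated environment:

```
# create and activate a python 3.6 environment for the engine
conda create -n py36 python=3.6
conda activate py36
```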
 
 #### Q23. The spark engine is 2.4.0+cdh6.3.2; because the python engine previously lacked pandas and matplotlib, the local python was upgraded to 3.8, but spark does not support python 3.8 and only supports versions below 3.6;
 
diff --git a/src/pages/faq/faq_zh.md b/src/pages/faq/faq_zh.md
index d09f5b3..7d65012 100644
--- a/src/pages/faq/faq_zh.md
+++ b/src/pages/faq/faq_zh.md
@@ -14,7 +14,7 @@ at org.eclipse.jetty.servlet.ServletContextHandler\$Context.getSessionCookieConf
 
 具体异常栈:
 
-![linkis-exception-01.png](../Images/Tuning_and_Troubleshooting/linkis-exception-01.png)
+![linkis-exception-01.png](../../assets/fqa/linkis-exception-01.png)
 
 解法:jar包冲突,删除asm-5.0.4.jar;
 
@@ -22,64 +22,64 @@ at org.eclipse.jetty.servlet.ServletContextHandler\$Context.getSessionCookieConf
 
 具体异常栈:
 
-![linkis-exception-02.png](../Images/Tuning_and_Troubleshooting/linkis-exception-02.png)
+![linkis-exception-02.png](../../assets/fqa/linkis-exception-02.png)
 
 
 解法:Linkis-datasource 配置问题导致的,修改linkis.properties  hive.meta开头的三个参数:
 
-![hive-config-01.png](../Images/Tuning_and_Troubleshooting/hive-config-01.png)
+![hive-config-01.png](../../assets/fqa/hive-config-01.png)
 
 
 #### Q4、启动微服务linkis-ps-datasource时,报如下异常ClassNotFoundException HttpClient:
 
 具体异常栈:
 
-![linkis-exception-03.png](../Images/Tuning_and_Troubleshooting/linkis-exception-03.png)
+![linkis-exception-03.png](../../assets/fqa/linkis-exception-03.png)
 
 解法:1.0编译的linkis-metadata-dev-1.0.0.jar存在问题,需要重新编译打包。
 
 #### Q5、点击scriptis-数据库,不返回数据,现象如下:
 
-![page-show-01.png](../Images/Tuning_and_Troubleshooting/page-show-01.png)
+![page-show-01.png](../../assets/fqa/page-show-01.png)
 
 解法:原因hive未授权给hadoop用户,授权数据如下:
 
-![db-config-01.png](../Images/Tuning_and_Troubleshooting/db-config-01.png)
+![db-config-01.png](../../assets/fqa/db-config-01.png)
 
 #### Q6、shell引擎调度执行,页面报 Insufficient resource , requesting available engine timeout,eningeconnmanager的linkis.out,报如下错误:
 
-![linkis-exception-04.png](../Images/Tuning_and_Troubleshooting/linkis-exception-04.png)
+![linkis-exception-04.png](../../assets/fqa/linkis-exception-04.png)
 
 解法:原因hadoop没有创建/appcom/tmp/hadoop/workDir,通过root用户提前创建,然后给hadoop用户授权即可。
 
 #### Q7、shell引擎调度执行时,引擎执行目录报如下错误/bin/java:No such file or directory:
 
-![shell-error-01.png](../Images/Tuning_and_Troubleshooting/shell-error-01.png)
+![shell-error-01.png](../../assets/fqa/shell-error-01.png)
 
 解法:本地java的环境变量有问题,需要对java命令做下符号链接。
 
 #### Q8、hive引擎调度时,报如下错误EngineConnPluginNotFoundException:errorCode:70063
 
-![linkis-exception-05.png](../Images/Tuning_and_Troubleshooting/linkis-exception-05.png)
+![linkis-exception-05.png](../../assets/fqa/linkis-exception-05.png)
 
 解法:安装的时候没有修改对应引擎的Version导致,所以默认插入到db里面的引擎类型为默认版本,而编译出来的版本不是默认版本导致。具体修改步骤:cd /appcom/Install/dss-linkis/linkis/lib/linkis-engineconn-plugins/,修改dist目录下的v2.1.1 目录名 修改为v1.2.1  修改plugin目录下的子目录名2.1.1 为默认版本的1.2.1。如果是Spark需要相应修改dist/v2.4.3 和plugin/2.4.3。最后重启engineplugin服务。
 
 #### Q9、linkis微服务启动后,报如下错误Load balancer does not have available server for client:
 
-![page-show-02.png](../Images/Tuning_and_Troubleshooting/page-show-02.png)
+![page-show-02.png](../../assets/fqa/page-show-02.png)
 
 解法:这个是因为linkis微服务刚启动,还未完成注册,等待1~2分钟,重试即可。
 
 #### Q10、hive引擎调度执行时,报错如下opertion failed NullPointerException:
 
-![linkis-exception-06.png](../Images/Tuning_and_Troubleshooting/linkis-exception-06.png)
+![linkis-exception-06.png](../../assets/fqa/linkis-exception-06.png)
 
 
 解法:服务器缺少环境变量,/etc/profile增加export HIVE_CONF_DIR=/etc/hive/conf;
 
 #### Q11、hive引擎调度时,engineConnManager的错误日志如下method did not exist:SessionHandler:
 
-![linkis-exception-07.png](../Images/Tuning_and_Troubleshooting/linkis-exception-07.png)
+![linkis-exception-07.png](../../assets/fqa/linkis-exception-07.png)
 
 解法:hive引擎lib下,jetty jar包冲突,jetty-security、 jetty-server替换为9.4.20;
 
@@ -158,11 +158,11 @@ at java.lang.ClassLoader.loadClass(ClassLoader.java:357) ~[?:1.8.0_181]
 
 #### Q15、python引擎调度时,报如下错误Python proces is not alive:
 
-![linkis-exception-08.png](../Images/Tuning_and_Troubleshooting/linkis-exception-08.png)
+![linkis-exception-08.png](../../assets/fqa/linkis-exception-08.png)
 
 解法:服务器安装anaconda3 包管理器,经过对python调试,发现两个问题:(1)缺乏pandas、matplotlib模块,需要手动安装;(2)新版python引擎执行时,依赖python高版本,首先安装python3,其次做下符号链接(如下图),重启engineplugin服务。
 
-![shell-error-02.png](../Images/Tuning_and_Troubleshooting/shell-error-02.png)
+![shell-error-02.png](../../assets/fqa/shell-error-02.png)
 
 #### Q16、spark引擎执行时,报如下错误NoClassDefFoundError: org/apache/hadoop/hive/ql/io/orc/OrcFile:
 
@@ -214,7 +214,7 @@ at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
 
 #### Q17、spark引擎启动时,报queue default is not exists in YARN,具体信息如下:
 
-![linkis-exception-09.png](../Images/Tuning_and_Troubleshooting/linkis-exception-09.png)
+![linkis-exception-09.png](../../assets/fqa/linkis-exception-09.png)
 
 解法:1.0的linkis-resource-manager-dev-1.0.0.jar拉取队列信息时,解析json有兼容问题,官方同学优化后,重新提供新包,jar包路径:/appcom/Install/dss-linkis/linkis/lib/linkis-computation-governance/linkis-cg-linkismanager/。
 
@@ -222,35 +222,35 @@ at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
 
 解法:yarn的地址配置迁移DB配置,需要增加如下配置:
  
-![db-config-02.png](../Images/Tuning_and_Troubleshooting/db-config-02.png)
+![db-config-02.png](../../assets/fqa/db-config-02.png)
 
 #### Q19、spark引擎调度时,首次可以执行成功,再次执行报Spark application sc has already stopped, please restart it,具体错误如下:
 
-![page-show-03.png](../Images/Tuning_and_Troubleshooting/page-show-03.png)
+![page-show-03.png](../../assets/fqa/page-show-03.png)
 
 解法:背景是linkis1.0引擎的架构体系有调整,spark session 创建后,为了避免开销、提升执行效率,session是复用的。当我们第一次执行spark.scala时,我们的脚本存在spark.stop(),这个命令会导致新创建的会话被关闭,当再次执行时,会提示会话已关闭,请重启。解决办法:首先所有脚本去掉stop(),其次是执行顺序:先执行default.sql,再执行scalaspark、pythonspark即可。
 
 #### Q20、pythonspark调度执行,报错:initialize python executor failed ClassNotFoundException org.slf4j.impl.StaticLoggerBinder,具体如下:
 
-![linkis-exception-10.png](../Images/Tuning_and_Troubleshooting/linkis-exception-10.png)
+![linkis-exception-10.png](../../assets/fqa/linkis-exception-10.png)
 
 解法:原因是spark服务端缺少 slf4j-log4j12-1.7.25.jar,copy上述jar报到/opt/cloudera/parcels/CDH-6.3.2-1.cdh6.3.2.p0.1605554/lib/spark/jars。
 
 #### Q21、pythonspark调度执行,报错:initialize python executor failed,submit-version error,具体如下:
 
-![shell-error-03.png](../Images/Tuning_and_Troubleshooting/shell-error-03.png)
+![shell-error-03.png](../../assets/fqa/shell-error-03.png)
 
 解法:原因是linkis1.0 pythonSpark引擎获取spark版本代码有bug,修复如下:
 
-![code-fix-01.png](../Images/Tuning_and_Troubleshooting/code-fix-01.png)
+![code-fix-01.png](../../assets/fqa/code-fix-01.png)
 
 #### Q22、pythonspark调度执行时,报TypeError:an integer is required(got type bytes)(单独执行拉起引擎的命令跑出的),具体如下:
 
-![shell-error-04.png](../Images/Tuning_and_Troubleshooting/shell-error-04.png)
+![shell-error-04.png](../../assets/fqa/shell-error-04.png)
 
 解法:原因是系统spark和python版本不兼容,python是3.8,spark是2.4.0-cdh6.3.2,spark要求python version<=3.6,降低python至3.6,注释文件/opt/cloudera/parcels/CDH/lib/spark/python/lib/pyspark.zip/pyspark/context.py如下几行:
 
-![shell-error-05.png](../Images/Tuning_and_Troubleshooting/shell-error-05.png)
+![shell-error-05.png](../../assets/fqa/shell-error-05.png)
 
 #### Q23、spark引擎是2.4.0+cdh6.3.2,python引擎之前因为缺少pandas、matplotlib升级的本地python到3.8,但是spark还不支持python3.8,仅支持3.6以下;
 

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@linkis.apache.org
For additional commands, e-mail: commits-help@linkis.apache.org


[incubator-linkis-website] 29/50: user case

Posted by pe...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

peacewong pushed a commit to branch asf-staging
in repository https://gitbox.apache.org/repos/asf/incubator-linkis-website.git

commit 18cc1d3d9d4bd21855a10da8292ca39cb3f88677
Author: casionone <ca...@gmail.com>
AuthorDate: Mon Oct 18 10:08:11 2021 +0800

    user case
---
 src/assets/user/97wulian.png                       | Bin 0 -> 28819 bytes
 "src/assets/user/T3\345\207\272\350\241\214.png"   | Bin 0 -> 7258 bytes
 src/assets/user/aisino.png                         | Bin 0 -> 46944 bytes
 src/assets/user/boss.png                           | Bin 0 -> 8386 bytes
 src/assets/user/huazhong.jpg                       | Bin 0 -> 12673 bytes
 src/assets/user/lianchuang.png                     | Bin 0 -> 11438 bytes
 src/assets/user/mobtech..png                       | Bin 0 -> 1829 bytes
 src/assets/user/others/360.png                     | Bin 0 -> 14323 bytes
 ...60\221\347\224\237\351\223\266\350\241\214.jpg" | Bin 0 -> 16640 bytes
 ...70\255\345\233\275\347\224\265\347\247\221.jpg" | Bin 0 -> 5955 bytes
 ...72\221\345\233\276\347\247\221\346\212\200.png" | Bin 0 -> 35242 bytes
 ...72\244\351\200\232\351\223\266\350\241\214.jpg" | Bin 0 -> 8099 bytes
 ...72\254\344\270\234\346\225\260\347\247\221.jpg" | Bin 0 -> 7895 bytes
 .../\345\244\251\347\277\274\344\272\221.png"      | Bin 0 -> 39592 bytes
 ...13\233\345\225\206\351\223\266\350\241\214.jpg" | Bin 0 -> 10462 bytes
 ...31\276\344\277\241\351\223\266\350\241\214.jpg" | Bin 0 -> 6739 bytes
 ...76\216\345\233\242\347\202\271\350\257\204.jpg" | Bin 0 -> 10596 bytes
 ...05\276\350\256\257\350\264\242\347\273\217.jpg" | Bin 0 -> 14500 bytes
 ...24\232\346\235\245\346\261\275\350\275\246.jpg" | Bin 0 -> 7034 bytes
 ...02\256\346\224\277\351\223\266\350\241\214.jpg" | Bin 0 -> 14657 bytes
 src/assets/user/xidian.jpg                         | Bin 0 -> 12475 bytes
 src/assets/user/yitu.png                           | Bin 0 -> 41437 bytes
 src/assets/user/zhongticaipng.png                  | Bin 0 -> 31958 bytes
 ...70\207\347\247\221\351\207\207\347\255\221.png" | Bin 0 -> 2468 bytes
 .../user/\344\270\234\346\226\271\351\200\232.png" | Bin 0 -> 33873 bytes
 ...70\255\345\233\275\347\224\265\344\277\241.png" | Bin 0 -> 6468 bytes
 ...70\255\351\200\232\344\272\221\344\273\223.png" | Bin 0 -> 20138 bytes
 ...34\211\351\231\220\345\205\254\345\217\270.png" | Bin 0 -> 10006 bytes
 ...61\237\345\256\236\351\252\214\345\256\244.png" | Bin 0 -> 13145 bytes
 ...77\241\347\224\250\347\224\237\346\264\273.png" | Bin 0 -> 3978 bytes
 .../user/\345\223\227\345\225\246\345\225\246.jpg" | Bin 0 -> 5990 bytes
 ...34\210\345\244\226\345\220\214\345\255\246.png" | Bin 0 -> 8081 bytes
 "src/assets/user/\345\271\263\345\256\211.png"     | Bin 0 -> 20795 bytes
 ...14\273\344\277\235\347\247\221\346\212\200.png" | Bin 0 -> 2083 bytes
 ...72\221\345\276\231\347\247\221\346\212\200.png" | Bin 0 -> 15448 bytes
 ...03\275\345\244\247\346\225\260\346\215\256.png" | Bin 0 -> 13462 bytes
 ...34\211\351\231\220\345\205\254\345\217\270.png" | Bin 0 -> 29500 bytes
 ...24\265\351\255\202\347\275\221\347\273\234.png" | Bin 0 -> 5553 bytes
 ...41\224\345\255\220\345\210\206\346\234\237.png" | Bin 0 -> 6968 bytes
 ...65\267\345\272\267\345\250\201\350\247\206.png" | Bin 0 -> 22412 bytes
 ...20\206\346\203\263\346\261\275\350\275\246.png" | Bin 0 -> 27672 bytes
 .../user/\347\231\276\346\234\233\344\272\221.png" | Bin 0 -> 24473 bytes
 ...53\213\345\210\233\345\225\206\345\237\216.png" | Bin 0 -> 24213 bytes
 ...72\242\350\261\241\344\272\221\350\205\276.png" | Bin 0 -> 4596 bytes
 ...11\276\344\275\263\347\224\237\346\264\273.jpg" | Bin 0 -> 5444 bytes
 ...20\250\346\221\251\350\200\266\344\272\221.png" | Bin 0 -> 5501 bytes
 ...41\266\347\202\271\350\275\257\344\273\266.png" | Bin 0 -> 8796 bytes
 src/pages/docs/docsdata_en.js                      |  62 +++
 src/pages/docs/{index.vue => docsdata_zh.js}       |  29 +-
 src/pages/docs/index.vue                           | 144 ++---
 src/pages/home/data.js                             | 585 +++++++++++++++++++++
 src/pages/{home.vue => home/index.vue}             |  15 +-
 src/router.js                                      |   2 +-
 53 files changed, 748 insertions(+), 89 deletions(-)

diff --git a/src/assets/user/97wulian.png b/src/assets/user/97wulian.png
new file mode 100644
index 0000000..5b828b1
Binary files /dev/null and b/src/assets/user/97wulian.png differ
diff --git "a/src/assets/user/T3\345\207\272\350\241\214.png" "b/src/assets/user/T3\345\207\272\350\241\214.png"
new file mode 100644
index 0000000..1491def
Binary files /dev/null and "b/src/assets/user/T3\345\207\272\350\241\214.png" differ
diff --git a/src/assets/user/aisino.png b/src/assets/user/aisino.png
new file mode 100644
index 0000000..73b7589
Binary files /dev/null and b/src/assets/user/aisino.png differ
diff --git a/src/assets/user/boss.png b/src/assets/user/boss.png
new file mode 100644
index 0000000..17bb2b2
Binary files /dev/null and b/src/assets/user/boss.png differ
diff --git a/src/assets/user/huazhong.jpg b/src/assets/user/huazhong.jpg
new file mode 100644
index 0000000..70e557f
Binary files /dev/null and b/src/assets/user/huazhong.jpg differ
diff --git a/src/assets/user/lianchuang.png b/src/assets/user/lianchuang.png
new file mode 100644
index 0000000..1320cbe
Binary files /dev/null and b/src/assets/user/lianchuang.png differ
diff --git a/src/assets/user/mobtech..png b/src/assets/user/mobtech..png
new file mode 100644
index 0000000..0ba017e
Binary files /dev/null and b/src/assets/user/mobtech..png differ
diff --git a/src/assets/user/others/360.png b/src/assets/user/others/360.png
new file mode 100644
index 0000000..74b5d13
Binary files /dev/null and b/src/assets/user/others/360.png differ
diff --git "a/src/assets/user/others/\344\270\255\345\233\275\346\260\221\347\224\237\351\223\266\350\241\214.jpg" "b/src/assets/user/others/\344\270\255\345\233\275\346\260\221\347\224\237\351\223\266\350\241\214.jpg"
new file mode 100644
index 0000000..e5fb3b5
Binary files /dev/null and "b/src/assets/user/others/\344\270\255\345\233\275\346\260\221\347\224\237\351\223\266\350\241\214.jpg" differ
diff --git "a/src/assets/user/others/\344\270\255\345\233\275\347\224\265\347\247\221.jpg" "b/src/assets/user/others/\344\270\255\345\233\275\347\224\265\347\247\221.jpg"
new file mode 100644
index 0000000..589617f
Binary files /dev/null and "b/src/assets/user/others/\344\270\255\345\233\275\347\224\265\347\247\221.jpg" differ
diff --git "a/src/assets/user/others/\344\272\221\345\233\276\347\247\221\346\212\200.png" "b/src/assets/user/others/\344\272\221\345\233\276\347\247\221\346\212\200.png"
new file mode 100644
index 0000000..249aaaa
Binary files /dev/null and "b/src/assets/user/others/\344\272\221\345\233\276\347\247\221\346\212\200.png" differ
diff --git "a/src/assets/user/others/\344\272\244\351\200\232\351\223\266\350\241\214.jpg" "b/src/assets/user/others/\344\272\244\351\200\232\351\223\266\350\241\214.jpg"
new file mode 100644
index 0000000..c2232c7
Binary files /dev/null and "b/src/assets/user/others/\344\272\244\351\200\232\351\223\266\350\241\214.jpg" differ
diff --git "a/src/assets/user/others/\344\272\254\344\270\234\346\225\260\347\247\221.jpg" "b/src/assets/user/others/\344\272\254\344\270\234\346\225\260\347\247\221.jpg"
new file mode 100644
index 0000000..7a98336
Binary files /dev/null and "b/src/assets/user/others/\344\272\254\344\270\234\346\225\260\347\247\221.jpg" differ
diff --git "a/src/assets/user/others/\345\244\251\347\277\274\344\272\221.png" "b/src/assets/user/others/\345\244\251\347\277\274\344\272\221.png"
new file mode 100644
index 0000000..8973744
Binary files /dev/null and "b/src/assets/user/others/\345\244\251\347\277\274\344\272\221.png" differ
diff --git "a/src/assets/user/others/\346\213\233\345\225\206\351\223\266\350\241\214.jpg" "b/src/assets/user/others/\346\213\233\345\225\206\351\223\266\350\241\214.jpg"
new file mode 100644
index 0000000..8f3d41a
Binary files /dev/null and "b/src/assets/user/others/\346\213\233\345\225\206\351\223\266\350\241\214.jpg" differ
diff --git "a/src/assets/user/others/\347\231\276\344\277\241\351\223\266\350\241\214.jpg" "b/src/assets/user/others/\347\231\276\344\277\241\351\223\266\350\241\214.jpg"
new file mode 100644
index 0000000..e338788
Binary files /dev/null and "b/src/assets/user/others/\347\231\276\344\277\241\351\223\266\350\241\214.jpg" differ
diff --git "a/src/assets/user/others/\347\276\216\345\233\242\347\202\271\350\257\204.jpg" "b/src/assets/user/others/\347\276\216\345\233\242\347\202\271\350\257\204.jpg"
new file mode 100644
index 0000000..33fda33
Binary files /dev/null and "b/src/assets/user/others/\347\276\216\345\233\242\347\202\271\350\257\204.jpg" differ
diff --git "a/src/assets/user/others/\350\205\276\350\256\257\350\264\242\347\273\217.jpg" "b/src/assets/user/others/\350\205\276\350\256\257\350\264\242\347\273\217.jpg"
new file mode 100644
index 0000000..d409f43
Binary files /dev/null and "b/src/assets/user/others/\350\205\276\350\256\257\350\264\242\347\273\217.jpg" differ
diff --git "a/src/assets/user/others/\350\224\232\346\235\245\346\261\275\350\275\246.jpg" "b/src/assets/user/others/\350\224\232\346\235\245\346\261\275\350\275\246.jpg"
new file mode 100644
index 0000000..c1df2ac
Binary files /dev/null and "b/src/assets/user/others/\350\224\232\346\235\245\346\261\275\350\275\246.jpg" differ
diff --git "a/src/assets/user/others/\351\202\256\346\224\277\351\223\266\350\241\214.jpg" "b/src/assets/user/others/\351\202\256\346\224\277\351\223\266\350\241\214.jpg"
new file mode 100644
index 0000000..02356c9
Binary files /dev/null and "b/src/assets/user/others/\351\202\256\346\224\277\351\223\266\350\241\214.jpg" differ
diff --git a/src/assets/user/xidian.jpg b/src/assets/user/xidian.jpg
new file mode 100644
index 0000000..dc37326
Binary files /dev/null and b/src/assets/user/xidian.jpg differ
diff --git a/src/assets/user/yitu.png b/src/assets/user/yitu.png
new file mode 100644
index 0000000..58aaa3f
Binary files /dev/null and b/src/assets/user/yitu.png differ
diff --git a/src/assets/user/zhongticaipng.png b/src/assets/user/zhongticaipng.png
new file mode 100644
index 0000000..c343ba5
Binary files /dev/null and b/src/assets/user/zhongticaipng.png differ
diff --git "a/src/assets/user/\344\270\207\347\247\221\351\207\207\347\255\221.png" "b/src/assets/user/\344\270\207\347\247\221\351\207\207\347\255\221.png"
new file mode 100644
index 0000000..35f056c
Binary files /dev/null and "b/src/assets/user/\344\270\207\347\247\221\351\207\207\347\255\221.png" differ
diff --git "a/src/assets/user/\344\270\234\346\226\271\351\200\232.png" "b/src/assets/user/\344\270\234\346\226\271\351\200\232.png"
new file mode 100644
index 0000000..72fde94
Binary files /dev/null and "b/src/assets/user/\344\270\234\346\226\271\351\200\232.png" differ
diff --git "a/src/assets/user/\344\270\255\345\233\275\347\224\265\344\277\241.png" "b/src/assets/user/\344\270\255\345\233\275\347\224\265\344\277\241.png"
new file mode 100644
index 0000000..f34cc37
Binary files /dev/null and "b/src/assets/user/\344\270\255\345\233\275\347\224\265\344\277\241.png" differ
diff --git "a/src/assets/user/\344\270\255\351\200\232\344\272\221\344\273\223.png" "b/src/assets/user/\344\270\255\351\200\232\344\272\221\344\273\223.png"
new file mode 100644
index 0000000..7a27229
Binary files /dev/null and "b/src/assets/user/\344\270\255\351\200\232\344\272\221\344\273\223.png" differ
diff --git "a/src/assets/user/\344\270\255\351\200\232\346\234\215\345\205\254\344\274\227\344\277\241\346\201\257\350\202\241\344\273\275\346\234\211\351\231\220\345\205\254\345\217\270.png" "b/src/assets/user/\344\270\255\351\200\232\346\234\215\345\205\254\344\274\227\344\277\241\346\201\257\350\202\241\344\273\275\346\234\211\351\231\220\345\205\254\345\217\270.png"
new file mode 100644
index 0000000..8946372
Binary files /dev/null and "b/src/assets/user/\344\270\255\351\200\232\346\234\215\345\205\254\344\274\227\344\277\241\346\201\257\350\202\241\344\273\275\346\234\211\351\231\220\345\205\254\345\217\270.png" differ
diff --git "a/src/assets/user/\344\271\213\346\261\237\345\256\236\351\252\214\345\256\244.png" "b/src/assets/user/\344\271\213\346\261\237\345\256\236\351\252\214\345\256\244.png"
new file mode 100644
index 0000000..1fbe9ce
Binary files /dev/null and "b/src/assets/user/\344\271\213\346\261\237\345\256\236\351\252\214\345\256\244.png" differ
diff --git "a/src/assets/user/\344\277\241\347\224\250\347\224\237\346\264\273.png" "b/src/assets/user/\344\277\241\347\224\250\347\224\237\346\264\273.png"
new file mode 100644
index 0000000..8a767b1
Binary files /dev/null and "b/src/assets/user/\344\277\241\347\224\250\347\224\237\346\264\273.png" differ
diff --git "a/src/assets/user/\345\223\227\345\225\246\345\225\246.jpg" "b/src/assets/user/\345\223\227\345\225\246\345\225\246.jpg"
new file mode 100644
index 0000000..3d94cd0
Binary files /dev/null and "b/src/assets/user/\345\223\227\345\225\246\345\225\246.jpg" differ
diff --git "a/src/assets/user/\345\234\210\345\244\226\345\220\214\345\255\246.png" "b/src/assets/user/\345\234\210\345\244\226\345\220\214\345\255\246.png"
new file mode 100644
index 0000000..fc623d4
Binary files /dev/null and "b/src/assets/user/\345\234\210\345\244\226\345\220\214\345\255\246.png" differ
diff --git "a/src/assets/user/\345\271\263\345\256\211.png" "b/src/assets/user/\345\271\263\345\256\211.png"
new file mode 100644
index 0000000..4895178
Binary files /dev/null and "b/src/assets/user/\345\271\263\345\256\211.png" differ
diff --git "a/src/assets/user/\345\271\263\345\256\211\345\214\273\344\277\235\347\247\221\346\212\200.png" "b/src/assets/user/\345\271\263\345\256\211\345\214\273\344\277\235\347\247\221\346\212\200.png"
new file mode 100644
index 0000000..156be44
Binary files /dev/null and "b/src/assets/user/\345\271\263\345\256\211\345\214\273\344\277\235\347\247\221\346\212\200.png" differ
diff --git "a/src/assets/user/\345\271\277\345\267\236\344\272\221\345\276\231\347\247\221\346\212\200.png" "b/src/assets/user/\345\271\277\345\267\236\344\272\221\345\276\231\347\247\221\346\212\200.png"
new file mode 100644
index 0000000..6783b0f
Binary files /dev/null and "b/src/assets/user/\345\271\277\345\267\236\344\272\221\345\276\231\347\247\221\346\212\200.png" differ
diff --git "a/src/assets/user/\346\210\220\351\203\275\345\244\247\346\225\260\346\215\256.png" "b/src/assets/user/\346\210\220\351\203\275\345\244\247\346\225\260\346\215\256.png"
new file mode 100644
index 0000000..f6a7e4e
Binary files /dev/null and "b/src/assets/user/\346\210\220\351\203\275\345\244\247\346\225\260\346\215\256.png" differ
diff --git "a/src/assets/user/\346\213\233\350\201\224\346\266\210\350\264\271\351\207\221\350\236\215\346\234\211\351\231\220\345\205\254\345\217\270.png" "b/src/assets/user/\346\213\233\350\201\224\346\266\210\350\264\271\351\207\221\350\236\215\346\234\211\351\231\220\345\205\254\345\217\270.png"
new file mode 100644
index 0000000..7a39d07
Binary files /dev/null and "b/src/assets/user/\346\213\233\350\201\224\346\266\210\350\264\271\351\207\221\350\236\215\346\234\211\351\231\220\345\205\254\345\217\270.png" differ
diff --git "a/src/assets/user/\346\235\255\345\267\236\347\224\265\351\255\202\347\275\221\347\273\234.png" "b/src/assets/user/\346\235\255\345\267\236\347\224\265\351\255\202\347\275\221\347\273\234.png"
new file mode 100644
index 0000000..bc61646
Binary files /dev/null and "b/src/assets/user/\346\235\255\345\267\236\347\224\265\351\255\202\347\275\221\347\273\234.png" differ
diff --git "a/src/assets/user/\346\241\224\345\255\220\345\210\206\346\234\237.png" "b/src/assets/user/\346\241\224\345\255\220\345\210\206\346\234\237.png"
new file mode 100644
index 0000000..3ff45b8
Binary files /dev/null and "b/src/assets/user/\346\241\224\345\255\220\345\210\206\346\234\237.png" differ
diff --git "a/src/assets/user/\346\265\267\345\272\267\345\250\201\350\247\206.png" "b/src/assets/user/\346\265\267\345\272\267\345\250\201\350\247\206.png"
new file mode 100644
index 0000000..a961cc4
Binary files /dev/null and "b/src/assets/user/\346\265\267\345\272\267\345\250\201\350\247\206.png" differ
diff --git "a/src/assets/user/\347\220\206\346\203\263\346\261\275\350\275\246.png" "b/src/assets/user/\347\220\206\346\203\263\346\261\275\350\275\246.png"
new file mode 100644
index 0000000..3c0c20f
Binary files /dev/null and "b/src/assets/user/\347\220\206\346\203\263\346\261\275\350\275\246.png" differ
diff --git "a/src/assets/user/\347\231\276\346\234\233\344\272\221.png" "b/src/assets/user/\347\231\276\346\234\233\344\272\221.png"
new file mode 100644
index 0000000..90395c6
Binary files /dev/null and "b/src/assets/user/\347\231\276\346\234\233\344\272\221.png" differ
diff --git "a/src/assets/user/\347\253\213\345\210\233\345\225\206\345\237\216.png" "b/src/assets/user/\347\253\213\345\210\233\345\225\206\345\237\216.png"
new file mode 100644
index 0000000..ca71850
Binary files /dev/null and "b/src/assets/user/\347\253\213\345\210\233\345\225\206\345\237\216.png" differ
diff --git "a/src/assets/user/\347\272\242\350\261\241\344\272\221\350\205\276.png" "b/src/assets/user/\347\272\242\350\261\241\344\272\221\350\205\276.png"
new file mode 100644
index 0000000..bd54887
Binary files /dev/null and "b/src/assets/user/\347\272\242\350\261\241\344\272\221\350\205\276.png" differ
diff --git "a/src/assets/user/\350\211\276\344\275\263\347\224\237\346\264\273.jpg" "b/src/assets/user/\350\211\276\344\275\263\347\224\237\346\264\273.jpg"
new file mode 100644
index 0000000..ab32413
Binary files /dev/null and "b/src/assets/user/\350\211\276\344\275\263\347\224\237\346\264\273.jpg" differ
diff --git "a/src/assets/user/\350\220\250\346\221\251\350\200\266\344\272\221.png" "b/src/assets/user/\350\220\250\346\221\251\350\200\266\344\272\221.png"
new file mode 100644
index 0000000..5a39dff
Binary files /dev/null and "b/src/assets/user/\350\220\250\346\221\251\350\200\266\344\272\221.png" differ
diff --git "a/src/assets/user/\351\241\266\347\202\271\350\275\257\344\273\266.png" "b/src/assets/user/\351\241\266\347\202\271\350\275\257\344\273\266.png"
new file mode 100644
index 0000000..8e80dd0
Binary files /dev/null and "b/src/assets/user/\351\241\266\347\202\271\350\275\257\344\273\266.png" differ
diff --git a/src/pages/docs/docsdata_en.js b/src/pages/docs/docsdata_en.js
new file mode 100644
index 0000000..b07cd7a
--- /dev/null
+++ b/src/pages/docs/docsdata_en.js
@@ -0,0 +1,62 @@
+const data = {
+    info: {},
+    list: [
+        {
+            title: 'Deployment',
+            link: '/docs/deploy/linkis',
+            children: [{
+                title: 'Quick Deploy',
+                link: '/docs/deploy/linkis',
+            }, {
+                title: 'EngineConnPlugin installation',
+                link: '/docs/deploy/engins',
+            }, {
+                title: 'Cluster Deployment',
+                link: '/docs/deploy/distributed',
+            }, {
+                title: 'Installation Hierarchical Structure',
+                link: '/docs/deploy/structure',
+            }]
+        },
+        {
+            title: 'User Manual',
+            link: '/docs/manual/UserManual',
+            children: [
+                {
+                    title: 'User Manual',
+                    link: '/docs/manual/UserManual',
+                }, {
+                    title: 'How To Use',
+                    link: '/docs/manual/HowToUse',
+                }, {
+                    title: 'Console User Manual',
+                    link: '/docs/manual/ConsoleUserManual',
+                }, {
+                    title: 'Linkis-Cli Usage',
+                    link: '/docs/manual/CliManual',
+                }]
+
+
+        },
+        {
+            title: 'Architecture',
+            link: '/docs/architecture/DifferenceBetween1.0&0.x',
+            children: [
+                {
+                    title: 'Difference Between1.0 And 0.x',
+                    link: '/docs/architecture/DifferenceBetween1.0&0.x',
+                },
+                {
+                    title: 'Job Submission Preparation',
+                    link: '/docs/architecture/JobSubmission',
+                }, {
+                    title: 'How To Add An EngineConn',
+                    link: '/docs/architecture/AddEngineConn',
+                }]
+
+
+        }
+    ]
+}
+
+export default data
diff --git a/src/pages/docs/index.vue b/src/pages/docs/docsdata_zh.js
similarity index 67%
copy from src/pages/docs/index.vue
copy to src/pages/docs/docsdata_zh.js
index d40fce9..eb43a41 100644
--- a/src/pages/docs/index.vue
+++ b/src/pages/docs/docsdata_zh.js
@@ -1,19 +1,6 @@
-<template>
-    <div class="ctn-block reading-area">
-        <main class="main-content">
-            <router-view></router-view>
-        </main>
-        <div class="side-bar">
-            <router-link :to="doc.link" class="bar-item" v-for="(doc,index) in docs" :key="index">{{doc.title}}
-                <router-link :to="children.link" class="bar-item" v-for="(children,cindex) in doc.children"
-                             :key="cindex">{{children.title}}
-                </router-link>
-            </router-link>
-        </div>
-    </div>
-</template>
-<script setup>
-    const docs = [
+const data = {
+    info: {},
+    list: [
         {
             title: '部署文档',
             link: '/docs/deploy/linkis',
@@ -31,12 +18,6 @@
                 link: '/docs/deploy/structure',
             }]
         },
-
-        //   - [用户手册](User_Manual/README.md)
-        //   - [Linkis1.0 使用的几种方式](User_Manual/How_To_Use_Linkis.md)
-        //   - [Linkis1.0 用户使用文档](User_Manual/Linkis1.0用户使用文档.md)
-        // - [Linkis1.0 管理台使用文档](User_Manual/Linkis_Console_User_Manual.md)
-
         {
             title: '用户手册',
             link: '/docs/manual/UserManual',
@@ -76,4 +57,6 @@
 
         }
     ]
-</script>
+}
+
+export default data
diff --git a/src/pages/docs/index.vue b/src/pages/docs/index.vue
index d40fce9..f27ac5a 100644
--- a/src/pages/docs/index.vue
+++ b/src/pages/docs/index.vue
@@ -4,7 +4,7 @@
             <router-view></router-view>
         </main>
         <div class="side-bar">
-            <router-link :to="doc.link" class="bar-item" v-for="(doc,index) in docs" :key="index">{{doc.title}}
+            <router-link :to="doc.link" class="bar-item" v-for="(doc,index) in jsonData.list" :key="index">{{doc.title}}
                 <router-link :to="children.link" class="bar-item" v-for="(children,cindex) in doc.children"
                              :key="cindex">{{children.title}}
                 </router-link>
@@ -12,68 +12,94 @@
         </div>
     </div>
 </template>
-<script setup>
-    const docs = [
-        {
-            title: '部署文档',
-            link: '/docs/deploy/linkis',
-            children: [{
-                title: '快速部署 Linkis1.0',
-                link: '/docs/deploy/linkis',
-            }, {
-                title: '快速安装 EngineConnPlugin 引擎插件',
-                link: '/docs/deploy/engins',
-            }, {
-                title: 'Linkis1.0 分布式部署手册',
-                link: '/docs/deploy/distributed',
-            }, {
-                title: 'Linkis1.0 安装包目录层级结构详解',
-                link: '/docs/deploy/structure',
-            }]
+<script >
+    import utils  from "../../js/utils";
+    import  list_en from "./docsdata_en.js";
+    import  list_zh from "./docsdata_zh.js";
+
+    export default {
+        data() {
+            return {
+                utils,
+                "jsonData": null
+            }
         },
+        created() {
+            const lang = localStorage.getItem('locale');
+            if (lang === "en") {
+                this.jsonData = list_en;
+            } else {
+                this.jsonData = list_zh;
+            }
+        }
 
-        //   - [用户手册](User_Manual/README.md)
-        //   - [Linkis1.0 使用的几种方式](User_Manual/How_To_Use_Linkis.md)
-        //   - [Linkis1.0 用户使用文档](User_Manual/Linkis1.0用户使用文档.md)
-        // - [Linkis1.0 管理台使用文档](User_Manual/Linkis_Console_User_Manual.md)
+    }
+</script>
 
-        {
-            title: '用户手册',
-            link: '/docs/manual/UserManual',
-            children: [
-                {
-                    title: '用户使用文档',
-                    link: '/docs/manual/UserManual',
-                }, {
-                    title: '使用的几种方式',
-                    link: '/docs/manual/HowToUse',
-                }, {
-                    title: '管理台使用文档',
-                    link: '/docs/manual/ConsoleUserManual',
-                }, {
-                    title: 'Linkis-Cli使用文档',
-                    link: '/docs/manual/CliManual',
-                }]
 
 
-        },
-        {
-            title: '架构文档',
-            link: '/docs/architecture/DifferenceBetween1.0&0.x',
-            children: [
-                {
-                    title: 'Linkis1.0与Linkis0.X的区别简述',
-                    link: '/docs/architecture/DifferenceBetween1.0&0.x',
-                },
-                {
-                    title: 'Job提交准备执行流程',
-                    link: '/docs/architecture/JobSubmission',
-                }, {
-                    title: 'EngineConn新增流程',
-                    link: '/docs/architecture/AddEngineConn',
-                }]
+<!--<script setup>-->
+<!--    const docs = [-->
+<!--        {-->
+<!--            title: '部署文档',-->
+<!--            link: '/docs/deploy/linkis',-->
+<!--            children: [{-->
+<!--                title: '快速部署 Linkis1.0',-->
+<!--                link: '/docs/deploy/linkis',-->
+<!--            }, {-->
+<!--                title: '快速安装 EngineConnPlugin 引擎插件',-->
+<!--                link: '/docs/deploy/engins',-->
+<!--            }, {-->
+<!--                title: 'Linkis1.0 分布式部署手册',-->
+<!--                link: '/docs/deploy/distributed',-->
+<!--            }, {-->
+<!--                title: 'Linkis1.0 安装包目录层级结构详解',-->
+<!--                link: '/docs/deploy/structure',-->
+<!--            }]-->
+<!--        },-->
 
+<!--        //   - [用户手册](User_Manual/README.md)-->
+<!--        //   - [Linkis1.0 使用的几种方式](User_Manual/How_To_Use_Linkis.md)-->
+<!--        //   - [Linkis1.0 用户使用文档](User_Manual/Linkis1.0用户使用文档.md)-->
+<!--        // - [Linkis1.0 管理台使用文档](User_Manual/Linkis_Console_User_Manual.md)-->
 
-        }
-    ]
-</script>
+<!--        {-->
+<!--            title: '用户手册',-->
+<!--            link: '/docs/manual/UserManual',-->
+<!--            children: [-->
+<!--                {-->
+<!--                    title: '用户使用文档',-->
+<!--                    link: '/docs/manual/UserManual',-->
+<!--                }, {-->
+<!--                    title: '使用的几种方式',-->
+<!--                    link: '/docs/manual/HowToUse',-->
+<!--                }, {-->
+<!--                    title: '管理台使用文档',-->
+<!--                    link: '/docs/manual/ConsoleUserManual',-->
+<!--                }, {-->
+<!--                    title: 'Linkis-Cli使用文档',-->
+<!--                    link: '/docs/manual/CliManual',-->
+<!--                }]-->
+
+
+<!--        },-->
+<!--        {-->
+<!--            title: '架构文档',-->
+<!--            link: '/docs/architecture/DifferenceBetween1.0&0.x',-->
+<!--            children: [-->
+<!--                {-->
+<!--                    title: 'Linkis1.0与Linkis0.X的区别简述',-->
+<!--                    link: '/docs/architecture/DifferenceBetween1.0&0.x',-->
+<!--                },-->
+<!--                {-->
+<!--                    title: 'Job提交准备执行流程',-->
+<!--                    link: '/docs/architecture/JobSubmission',-->
+<!--                }, {-->
+<!--                    title: 'EngineConn新增流程',-->
+<!--                    link: '/docs/architecture/AddEngineConn',-->
+<!--                }]-->
+
+
+<!--        }-->
+<!--    ]-->
+<!--</script>-->
diff --git a/src/pages/home/data.js b/src/pages/home/data.js
new file mode 100644
index 0000000..8e70ccd
--- /dev/null
+++ b/src/pages/home/data.js
@@ -0,0 +1,585 @@
+const data = [
+    {
+        "auther": "liangqilang ",
+        "company": "Directly hired by Boss",
+        "location": "Beijing, China",
+        "contact": "zhuhui@kanzhun.com",
+        "business scenario": "As a data middleware standard version service platform",
+        "公司": "Boss直聘",
+        "地点": "中国北京",
+        "联系方式": "zhuhui@kanzhun.com",
+        "业务场景": "作为数据中间件标准版服务平台",
+    },
+    {
+        "auther": "zhanghaicheng1 ",
+        "company": "Aisino",
+        "location": "Hefei,China",
+        "contact": "ah.zhanghaicheng@aisino.com",
+        "business scenario": "Replace IQL products",
+        "公司": "Aisino",
+        "地点": "中国合肥",
+        "联系方式": "ah.zhanghaicheng@aisino.com",
+        "业务场景": "替代IQL产品"
+    },
+    {
+        "auther": "JavaMrYang ",
+        "company": "Beijing Lianchuang Zhirong",
+        "location": "Changsha Hunan",
+        "contact": "liu_yang@uisftech.com",
+        "business scenario": "Provide support for big data service platform",
+        "公司": "北京联创智融",
+        "地点": "湖南长沙",
+        "联系方式": "liu_yang@uisftech.com",
+        "业务场景": "为大数据服务平台提供支持",
+
+    },
+    {
+        "auther": "1245053895 ",
+        "company": "Intelligent Manufacturing and Industrial Big Data Research Center of Xidian University",
+        "location": "Xi'an, Shaanxi",
+        "contact": "109738503@qq.com",
+        "business scenario": "As a standard version of data middleware.",
+        "公司": "西安电子科技大学智能制造与工业大数据研究中心",
+        "地点": "陕西西安",
+        "联系方式": "109738503@qq.com",
+        "业务场景": "作为数据中间件标准版。",
+    },
+    {
+        "auther": "RustRw ",
+        "company": "Yitu Technology",
+        "location": "Shanghai",
+        "contact": "fei.xu2@yitu-inc.com",
+        "business scenario": "unified data development platform,under investigation.",
+        "公司": "依图科技",
+        "地点": "上海",
+        "联系方式": "fei.xu2@yitu-inc.com",
+        "业务场景": " 统一数据开发平台, 调研中."
+    },
+    {
+        "auther": "thinkborm ",
+        "company": "g7 Internet of Things",
+        "location": "Chengdu",
+        "contact": "bigdata@g7.com.cn",
+        "business scenario": "As a workbench for data developers and analysts.",
+        "公司": "g7物联网",
+        "地点": " 成都",
+        "联系方式": " bigdata@g7.com.cn",
+        "业务场景": " 作为数据开发人员、分析人员的工作台。",
+
+    },
+    {
+        "auther": "sMallFAt6 ",
+        "company": "National High Performance Computing Center of Huazhong University of Science and Technology",
+        "location": "Wuhan",
+        "contact": "2374007549@qq.com",
+        "business scenario": "used to build a big data analysis platform",
+        "公司": "华中科技大学国家高性能计算中心",
+        "地点": " 武汉",
+        "联系方式": " 2374007549@qq.com",
+        "业务场景": " 用于搭建大数据分析平台",
+
+
+    },
+    {
+        "auther": "ZengZeHua ",
+        "company": "National High Performance Computing Center of Huazhong University of Science and Technology",
+        "location": "Wuhan",
+        "contact": "1979057506@qq.com",
+        "business scenario": "used to build a big data analysis platform",
+        "公司": "华中科技大学国家高性能计算中心",
+        "地点": " 武汉",
+        "联系方式": "1979057506@qq.com",
+        "业务场景": " 用于搭建大数据分析平台",
+    },
+    {
+        "auther": "liuzhimindluter ",
+        "company": "China Sports Lottery Technology",
+        "location": "Beijing",
+        "contact": "252513499@qq.com",
+        "business scenario": "Hope to achieve a more friendly multi-tenant development scenario",
+        "公司": "中体彩科技",
+        "地点": "北京",
+        "联系方式": "252513499@qq.com",
+        "业务场景": "希望实现更友好的多租户开发场景",
+    },
+    {
+        "auther": "lkhuang ",
+        "company": "Vanke Acquisition and Construction",
+        "location": "Shenzhen",
+        "contact": "kithlk@163.com",
+        "business scenario": "unified data analysis platform testing and use;",
+        "公司": "万科采筑",
+        "地点": "深圳",
+        "联系方式": "kithlk@163.com",
+        "业务场景": " 统一数据分析平台,测试使用中",
+    },
+    {
+        "auther": "yunhao2wei ",
+        "company": "iquanwai",
+        "location": "Shanghai",
+        "contact": "yunhao2wei@163.com",
+        "business scenario": "as a workbench for data analysts and BI personnel",
+        "公司": "圈外同学",
+        "地点": "上海",
+        "联系方式": "yunhao2wei@163.com",
+        "业务场景": "作为数据分析人员、BI人员的工作台",
+
+
+    },
+    {
+        "auther": "felixlin47 ",
+        "company": "Fujian Apex Software Co.Ltd.",
+        "location": "Fuzhou",
+        "contact": "76603008@qq.com",
+        "business scenario": "Big data application scheduling platform under investigation...",
+        "公司": "顶点软件",
+        "地点": "福州",
+        "联系方式": "76603008@qq.com",
+        "业务场景": "大数据应用调度平台,调研中...",
+    },
+    {
+        "auther": "cyofeiyue ",
+        "company": "Nanjing Leading Technology /T3 Travel",
+        "location": "Nanjing",
+        "contact": "372003348@qq.com",
+        "business scenario": "big data analysis query middleware research",
+        "公司": "南京领行科技/T3出行",
+        "地点": "南京",
+        "联系方式": "372003348@qq.com",
+        "业务场景": "大数据分析查询中间件调研",
+
+
+    },
+    {
+        "auther": "feiyizhang ",
+        "company": "Beijing Red Elephant Yunteng",
+        "location": "Beijing",
+        "contact": "95320534@qq.com",
+        "business scenario": "Banking business solution research",
+        "公司": "北京红象云腾",
+        "地点": "北京",
+        "联系方式": "953201534@qq.com",
+        "业务场景": "银行业务解决方案调研",
+
+
+    },
+    {
+        "auther": "Just-do-it-Fan ",
+        "company": "Wah Lala",
+        "location": "Beijing",
+        "contact": "876103537@qq.com",
+        "business scenario": "middleware of big data unified query platform",
+        "公司": "哗啦啦",
+        "地点": "北京",
+        "联系方式": "876103537@qq.com",
+        "业务场景": "大数据统一查询平台中间件",
+
+
+    },
+    {
+        "auther": "weipengffk ",
+        "company": "Lichuang Mall",
+        "location": "Shenzhen China",
+        "contact": "1310503090@qq.com",
+        "business Scenario": "Research on Data Center Program",
+        "公司": "立创商城",
+        "地点": "中国深圳",
+        "联系方式": "1310503090@qq.com",
+        "业务场景": "数据中台方案调研",
+
+
+    },
+    {
+        "auther": "wangshu-zhouyunfan ",
+        "company": "Shu Select Cloud",
+        "location": "Hangzhou",
+        "contact": "18045173406@163.com",
+        "business scenario": "external technical output of the data center",
+        "公司": "数择云",
+        "地点": "杭州",
+        "联系方式": "18045173406@163.com",
+        "业务场景": "数据中台对外技术输出",
+    },
+    {
+        "auther": "robinzyx ",
+        "company": "CLP Wanwei",
+        "location": "Lanzhou",
+        "contact": "18152063386@189.com",
+        "business scenario": "data-centered government affairs  medical and other industry solutions",
+        "公司": "中电万维",
+        "地点": "兰州",
+        "联系方式": "18152063386@189.com",
+        "业务场景": "数据中台政务、医疗等行业解决方案",
+    },
+    {
+        "auther": "77954309 ",
+        "company": "Knowing Wisdom",
+        "location": "Beijing",
+        "contact": "77954309@qq.com",
+        "business scenario": "data center construction of computing engine, various middle station services.",
+        "公司": "知因智慧",
+        "地点": "北京",
+        "联系方式": "77954309@qq.com",
+        "业务场景": "数据中台,构建计算引擎,各种中台服务。"
+    },
+    {
+        "auther": "tgh-621 ",
+        "company": "Chengdu Big Data Industry Research Institute",
+        "location": "Chengdu",
+        "contact": "93403464@qq.com",
+        "business scenario": "data center",
+        "公司": "成都大数据产业研究院",
+        "地点": "成都",
+        "联系方式": "93403464@qq.com",
+        "业务场景": "数据中台",
+    },
+    {
+        "auther": "zhihui-ge ",
+        "company": "Hangzhou Electric Soul Network",
+        "location": "Hangzhou",
+        "contact": "2972333955@qq.com",
+        "business scenario": "used to build a big data analysis platform",
+        "公司": "杭州电魂网络",
+        "地点": "杭州",
+        "联系方式": "2972333955@qq.com",
+        "业务场景": "用于搭建大数据分析平台",
+    },
+    {
+        "auther": "chagsheg ",
+        "company": "Beiming Digital",
+        "location": "Guangzhou",
+        "contact": "1420952288@qq.com",
+        "business scenario": "data center",
+        "公司": "北明数科",
+        "地点": "广州",
+        "联系方式": "1420952288@qq.com",
+        "业务场景": "数据中台",
+    },
+    {
+        "auther": "zhugekaoyu ",
+        "company": "Zhongtong Yuncang",
+        "location": "Hangzhou",
+        "contact": "1756801194@qq.com",
+        "business scenario": "research is used to replace CDH",
+        "公司": "中通云仓",
+        "地点": "杭州",
+        "联系方式": "1756801194@qq.com",
+        "业务场景": "调研用于替代CDH",
+    },
+    {
+        "auther": "YxiangJ ",
+        "company": "Dico",
+        "location": "Zhengzhou",
+        "contact": "284953505@qq.com",
+        "business scenario": "data center",
+        "公司": "迪科",
+        "地点": "郑州",
+        "联系方式": "284953505@qq.com",
+        "业务场景": "数据中台",
+    },
+    {
+        "auther": "zhangchao930216 ",
+        "company": "Ping An Medical Insurance Technology",
+        "location": "Shanghai",
+        "contact": "790076723@qq.com",
+        "business scenario": "data center",
+        "公司": "平安医保科技",
+        "地点": "上海",
+        "联系方式": "790076723@qq.com",
+        "业务场景": "数据中台",
+    },
+    {
+        "auther": "xccoder ",
+        "company": "Hikvision",
+        "location": "Hangzhou China",
+        "contact": "1843107737@qq.com",
+        "business scenario": "Based on powerful data middleware to build a data development scenario for the Internet of Things industry.",
+        "公司": "海康威视",
+        "地点": "中国杭州",
+        "联系方式": "1843107737@qq.com",
+        "业务场景": "基于强大的数据中间件构建物联网行业的数据开发场景。",
+    },
+    {
+        "auther": "ynlxc ",
+        "company": "Orange Staging",
+        "location": "Beijing China",
+        "contact": "yndxx@qq.com",
+        "business scenario": "build a company data center",
+        "公司": "桔子分期",
+        "地点": "中国北京",
+        "联系方式": "yndxx@qq.com",
+        "业务场景": "搭建公司数据中台",
+    },
+    {
+        "auther": "chenxi0599 ",
+        "company": "Guangzhou Yunxun Technology",
+        "location": "Guangzhou  China",
+        "contact": "179695222@qq.com",
+        "business scenario": "Data Marketing Data Middle Office-Data R&D Platform",
+        "公司": "广州云徙科技",
+        "地点": "中国广州",
+        "联系方式": "179695222@qq.com",
+        "业务场景": "数据营销数据中台-数据研发平台",
+    },
+    {
+        "auther": "liumingning ",
+        "company": "In-laws Network",
+        "location": "Beijing China",
+        "contact": "547761853@qq.com",
+        "business scenario": "the connector between the data base component and the user interaction component.",
+        "公司": "亲家网络",
+        "地点": "中国北京",
+        "联系方式": "547761853@qq.com",
+        "业务场景": "数据基础组件与用户交互组件之间的连接器。",
+    },
+    {
+        "auther": "caoerbiao ",
+        "company": "CLP Big Data Research Institute",
+        "location": "Guiyang",
+        "contact": "540765772@qq.com",
+        "business scenario": "big data service platform data center",
+        "公司": "中电科大数据研究院",
+        "地点": "贵阳",
+        "联系方式": "540765772@qq.com",
+        "业务场景": "大数据服务平台数据中台"
+    },
+    {
+        "auther": "Wzhipeng ",
+        "company": "fordeal",
+        "location": "Guangzhou China",
+        "contact": "476855740@qq.com",
+        "business scenario": "Connectors between data foundation components and user interaction components",
+        "公司": "fordeal",
+        "地点": "中国广州",
+        "联系方式": "476855740@qq.com",
+        "业务场景": "数据基础组件与用户交互组件之间的连接器",
+    },
+    {
+        "auther": "swt88 ",
+        "company": "Red Elephant Yunteng",
+        "location": "Beijing China",
+        "contact": "634995025@qq.com",
+        "business scenario": "In order to achieve a good multi-tenancy",
+        "公司": "红象云腾",
+        "地点": "中国北京",
+        "联系方式": "634995025@qq.com",
+        "业务场景": "为了能很好的实现多租户",
+    },
+    {
+        "auther": "itwhat126 ",
+        "company": "Nanjing Austrian Information Industry Co.Ltd.",
+        "location": "Nanjing",
+        "contact": "819398357@qq.com",
+        "business scenario": "research on company data middle-office solutions",
+        "公司": "南京奥派信息产业股份公司",
+        "地点": "南京",
+        "联系方式": "819398357@qq.com",
+        "业务场景": "公司数据中台解决方案调研",
+    },
+    {
+        "auther": "Jesseszhang ",
+        "company": "Ping An Property& Casualty",
+        "location": "Shenzhen China",
+        "contact": "1355077450@qq.com",
+        "business scenario": "one-stop data platform solution research",
+        "公司": "平安产险",
+        "地点": "中国深圳",
+        "联系方式": "1355077450@qq.com",
+        "业务场景": "一站式数据平台解决方案调研",
+    },
+    {
+        "auther": "Wf675721680 ",
+        "company": "Baiwangyun",
+        "location": "Beijing",
+        "contact": "wufei@baiwang.com",
+        "business scenario": "data governance",
+        "公司": "百望云",
+        "地点": "北京",
+        "联系方式": "wufei@baiwang.com",
+        "业务场景": "数据治理",
+    },
+    {
+        "auther": "Adamyuanyuan ",
+        "company": "China Telecom",
+        "location": "Shanghai /Beijing, etc.",
+        "contact": "913546481@qq.com",
+        "business scenario": "one-stop data platform solution research",
+        "公司": "中国电信",
+        "地点": "上海/北京等",
+        "联系方式": "913546481@qq.com",
+        "业务场景": "一站式数据平台解决方案调研"
+    },
+    {
+        "auther": "brianzhangrong ",
+        "company": "Aijia Life",
+        "location": "Jiangsu/Nanjing",
+        "contact": "693404752@qq.com",
+        "business scenario": "data center",
+        "公司": "艾佳生活",
+        "地点": "江苏/南京",
+        "联系方式": "693404752@qq.com",
+        "业务场景": "数据中台",
+    },
+    {
+        "auther": "NicholasHua ",
+        "company": "Nanjing Aokan Technology Co.Ltd.",
+        "location": "Jiangsu/Nanjing",
+        "contact": "774664386@qq.com",
+        "business Scenario": "Research on Data Center Program",
+        "公司": "南京奥看科技有限公司",
+        "地点": "江苏/南京",
+        "联系方式": "774664386@qq.com",
+        "业务场景": "数据中台方案调研",
+    },
+    {
+        "auther": "Tandoy ",
+        "company": "Credit Life (Guangzhou) Intelligent Technology Co., Ltd.",
+        "location": "Guangdong Guangzhou",
+        "contact": "tangzhi@wecreditlife.com",
+        "business scenario": "Data-centered financial industry solutions are in daily development and use.",
+        "公司": "信用生活(广州)智能科技有限公司",
+        "地点": "广东/广州",
+        "联系方式": "tangzhi@wecreditlife.com",
+        "业务场景": "数据中台金融行业解决方案,日常开发使用中。",
+    },
+    {
+        "auther": "jacktao007 ",
+        "company": "Zhongtong Service Public Information Co.Ltd.",
+        "location": "Xinjiang/Urumqi",
+        "contact": "7956214@qq.com",
+        "business Scenario": "Research on Data Center Program",
+        "公司": "中通服公众信息股份有限公司",
+        "地点": "新疆/乌鲁木齐",
+        "联系方式": "7956214@qq.com",
+        "业务场景": "数据中台方案调研",
+    },
+    {
+        "auther": "dddyszy ",
+        "company": "Mobtech",
+        "location": "Shanghai China",
+        "contact": "shizhouyi123@gmail.com",
+        "business scenario": "One-stop data development platform",
+        "公司": "Mobtech",
+        "地点": "中国上海",
+        "联系方式": "shizhouyi123@gmail.com",
+        "业务场景": "一站式数据开发平台",
+    },
+    {
+        "auther": "zhangxhmuye ",
+        "company": "Zhijiang Laboratory",
+        "location": "Hangzhou China",
+        "contact": "905896929@qq.com",
+        "business scenario": "build a company data center and provide a one-stop development platform for data development",
+        "公司": "之江实验室",
+        "地点": "中国杭州",
+        "联系方式": "905896929@qq.com",
+        "业务场景": "搭建公司数据中台,向数据开发提供一站式开发平台",
+    },
+    {
+        "auther": "lichuang22 ",
+        "company": "Dongfangtong",
+        "location": "Wuhan",
+        "method": "1429883071@qq.com",
+        "business Scenario": "Research on Data Center Program",
+        "公司": "东方通",
+        "地点": "武汉",
+        "方式": "1429883071@qq.com",
+        "业务场景": "数据中台方案调研",
+    },
+    {
+        "auther": "nlallen ",
+        "company": "Ideal Car",
+        "location": "Beijing",
+        "contact": "244495101@qq.com",
+        "business scenario": "Build the company's big data platform as the underlying computing governance engine",
+        "公司": "理想汽车",
+        "地点": "北京",
+        "联系方式": "244495101@qq.com",
+        "业务场景": "搭建公司大数据平台,作为底层计算治理引擎",
+    },
+    {
+        "auther": "linweijiang ",
+        "company": "Twenty-six Degrees Digital Technology (Guangzhou) Co., Ltd.",
+        "location": "Guangdong/Guangzhou",
+        "contact": "linweijiang@26dudt.com",
+        "business scenario": "One-stop data development platform (financial industry)",
+        "公司": "二十六度数字科技(广州)有限公司",
+        "地点": "广东/广州",
+        "联系方式": "linweijiang@26dudt.com",
+        "业务场景": "一站式数据开发平台(金融行业)",
+    },
+    {
+        "auther": "emerkfu ",
+        "company": "Shanghai Hemudu Industrial Development Co.Ltd.",
+        "location": "Shanghai",
+        "contact": "fushuai@homedo.com",
+        "business scenario": "One-stop big data development platform",
+        "公司": "上海河姆渡实业发展有限公司",
+        "地点": "上海",
+        "联系方式": "fushuai@homedo.com",
+        "业务场景": "一站式大数据开发平台",
+    },
+    {
+        "auther": "wyx94 ",
+        "company": "Zhaolian Consumer Finance Co. Ltd.",
+        "location": "Shenzhen",
+        "contact": "wangyuxing@mucfc.com",
+        "business scenario": "build the company's big data platform as the underlying computing engine",
+        "公司": "招联消费金融有限公司",
+        "地点": "深圳",
+        "联系方式": "wangyuxing@mucfc.com",
+        "业务场景": "搭建公司大数据平台,作为底层计算引擎"
+    },
+    {
+        "auther": "saLeox ",
+        "company": "Sea Limited",
+        "location": "Singapore",
+        "contact": "sunshun18@126.com",
+        "business scenario": "Investigate the middleware for big data platform.",
+        "公司": "冬海集团",
+        "地点": "新加坡",
+        "联系方式": "sunshun18@126.com",
+        "业务场景": "调用大数据平台的中间件"
+    }
+]
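
A note on consuming this list: each entry carries English and Chinese fields side by side, so a page can pick one set by locale. Below is a minimal sketch of that lookup, assuming the array above is the default export of a module like src/js/config and that the locale follows the same 'zh-CN'/'en' convention used by the i18n setup later in this thread; the helper name localizeCase is illustrative, not part of the repository.

import userCases from '../../js/config';

// Pick the display fields of one entry for the active locale ('zh-CN' or 'en').
function localizeCase(entry, locale) {
  const zh = locale === 'zh-CN';
  return {
    author: entry.author,
    company: zh ? entry['公司'] : entry.company,
    location: zh ? entry['地点'] : entry.location,
    contact: zh ? entry['联系方式'] : entry.contact,
    scenario: zh ? entry['业务场景'] : entry['business scenario'],
  };
}

const locale = localStorage.getItem('locale') || 'en';
const localized = userCases.map((entry) => localizeCase(entry, locale));
console.log(localized[0]);
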
diff --git a/src/pages/home.vue b/src/pages/home/index.vue
similarity index 90%
rename from src/pages/home.vue
rename to src/pages/home/index.vue
index 8876db3..07205ba 100644
--- a/src/pages/home.vue
+++ b/src/pages/home/index.vue
@@ -15,12 +15,12 @@
         <p class="home-paragraph">{{$t('message.home.introduce.before_text')}}
 
         </p>
-        <img src="../assets/home/before_linkis_en.png" alt="before" class="concept-image">
+        <img src="../../assets/home/before_linkis_en.png" alt="before" class="concept-image">
       </div>
       <div class="concept-item">
         <h3 class="concept-title">{{$t('message.home.introduce.after')}}</h3>
         <p class="home-paragraph">{{$t('message.home.introduce.after_text')}}</p>
-        <img src="../assets/home/after_linkis_en.png" alt="after" class="concept-image">
+        <img src="../../assets/home/after_linkis_en.png" alt="after" class="concept-image">
       </div>
     </div>
     <div class="description home-block">
@@ -34,7 +34,7 @@
           <a href="/#/docs/architecture/DifferenceBetween1.0&0.x" class="corner-botton blue">{{$t('message.common.learn_more')}}</a>
         </div>
       </div>
-      <img src="../assets/home/description.png" alt="description" class="description-image">
+      <img src="../../assets/home/description.png" alt="description" class="description-image">
     </div>
     <h1 class="home-block-title text-center">{{$t('message.common.core_features')}}</h1>
     <div class="features home-block">
@@ -76,8 +76,10 @@
     </div>
     <h1 class="home-block-title text-center">{{$t('message.common.our_users')}}</h1>
     <div class="show-case home-block">
-      <div class="case-item"></div>
-      <div class="case-item"></div>
+      <div class="case-item"><img src="../../assets/user/97wulian.png" alt="xx"/></div>
+      <div class="case-item"><img src="../../assets/user/aisino.png" alt="xx"/></div>
+      <div class="case-item"><img src="../../assets/user/boss.png" alt="xx"/></div>
+      <div class="case-item"><img src="../../assets/user/huazhong.jpg" alt="xx"/></div>
       <div class="case-item"></div>
       <div class="case-item"></div>
       <div class="case-item"></div>
@@ -144,6 +146,7 @@
       grid-column-gap: 20px;
       .case-item{
         height: 88px;
+        width: 167px;
         background: #FFFFFF;
         box-shadow: 0 1px 20px 0 rgba(15,18,34,0.10);
         border-radius: 8px;
@@ -221,7 +224,7 @@
 </style>
 <script setup>
   import { ref } from "vue"
-  import  systemConfiguration from  "../js/config"
+  import systemConfiguration from "../../js/config"
   // Initialize the language
   const lang = ref(localStorage.getItem('locale') || 'en');
 </script>
diff --git a/src/router.js b/src/router.js
index 50891a8..0a0a278 100644
--- a/src/router.js
+++ b/src/router.js
@@ -1,7 +1,7 @@
 const routes = [{
     path: '/',
     name: 'home',
-    component: () => import( /* webpackChunkName: "group-home" */ './pages/home.vue')
+    component: () => import( /* webpackChunkName: "group-home" */ './pages/home/index.vue')
   },
   {
     path: '/docs',

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@linkis.apache.org
For additional commands, e-mail: commits-help@linkis.apache.org


[incubator-linkis-website] 47/50: Merge pull request #8 from lucaszhu2zgf/asf-staging

Posted by pe...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

peacewong pushed a commit to branch asf-staging
in repository https://gitbox.apache.org/repos/asf/incubator-linkis-website.git

commit 76ffb1f8d97934eaf141c4ffc795ffd20df31be5
Merge: c59e3c1 9d11c8e
Author: johnnywang <wp...@gmail.com>
AuthorDate: Thu Oct 28 19:44:09 2021 +0800

    Merge pull request #8 from lucaszhu2zgf/asf-staging
    
    web visual optimization

 .asf.yaml                                              |   5 ++---
 assets/360.bc39c47a.png                                | Bin 14323 -> 0 bytes
 assets/360.cd40bc4b.png                                | Bin 0 -> 5121 bytes
 assets/404.f24f37c0.js                                 |   1 +
 "assets/97\347\211\251\350\201\224.159781fb.png"       | Bin 0 -> 5949 bytes
 "assets/97\347\211\251\350\201\224.2447251c.png"       | Bin 28819 -> 0 bytes
 assets/AddEngineConn.467c2210.js                       |   1 -
 assets/ECM-01.bb056ebe.png                             | Bin 0 -> 34340 bytes
 assets/ECM-02.a90e3890.png                             | Bin 0 -> 25340 bytes
 assets/Linkis1.0-architecture.be03428f.png             | Bin 0 -> 72168 bytes
 assets/Linkis_1.0_architecture.ba18dcdc.png            | Bin 0 -> 316746 bytes
 "assets/T3\345\207\272\350\241\214.1738b528.png"       | Bin 6413 -> 0 bytes
 "assets/T3\345\207\272\350\241\214.9d8b64de.png"       | Bin 0 -> 15872 bytes
 assets/add_an_engineConn_flow_chart.5a1c06c5.js        |   1 +
 assets/add_engine.b12c7e06.js                          |   1 +
 assets/after_linkis_bg.31ad71dc.png                    | Bin 0 -> 7029 bytes
 assets/after_linkis_cn.f311973b.png                    | Bin 0 -> 645519 bytes
 assets/after_linkis_en.c3ed71bf.png                    | Bin 111924 -> 0 bytes
 assets/after_linkis_en.eafe79c9.png                    | Bin 0 -> 33986 bytes
 assets/after_linkis_zh.bf948a76.png                    | Bin 0 -> 31918 bytes
 assets/app-manager-02.2aff8a98.png                     | Bin 0 -> 701283 bytes
 assets/app-manager-03.5aaff6ed.png                     | Bin 0 -> 69489 bytes
 assets/app_manager.bed25273.js                         |   1 +
 assets/banner_bg.b3665793.png                          | Bin 0 -> 136546 bytes
 assets/before_linkis_cn.6c6e76e4.png                   | Bin 0 -> 332201 bytes
 assets/before_linkis_en.076cf10c.png                   | Bin 142195 -> 0 bytes
 assets/before_linkis_en.58065890.png                   | Bin 0 -> 46019 bytes
 assets/before_linkis_zh.2ec86cff.png                   | Bin 0 -> 43458 bytes
 assets/bml-02.0eb3b26a.png                             | Bin 0 -> 55227 bytes
 assets/bml.59ba7d32.js                                 |   1 +
 "assets/boss\347\233\264\350\201\230.5353720c.png"     | Bin 8386 -> 0 bytes
 assets/computation_governance.3a8ad59d.js              |   1 +
 assets/configuration.a2fe2e50.js                       |   1 +
 assets/connectivity.7ada0256.png                       | Bin 0 -> 5136 bytes
 ...nsoleUserManual.d2af8060.js => console.ec03cad4.js} |   2 +-
 assets/context_service.13b75bb1.js                     |   1 +
 assets/contributing.e1c72372.js                        |   1 +
 assets/controllability.c2cb45d7.png                    | Bin 0 -> 4808 bytes
 assets/datasource.d410aafc.js                          |   1 +
 assets/description.95f7a296.png                        | Bin 28065 -> 0 bytes
 assets/description.bee4d876.png                        | Bin 0 -> 44834 bytes
 ...tween1.0&0.x.7e9c261e.js => difference.546832ac.js} |   2 +-
 ...distributed.6a61f64e.js => distributed.89154171.js} |   2 +-
 assets/download.0330f828.css                           |   1 +
 assets/download.4f121175.js                            |   1 +
 assets/download.8c6e40f3.css                           |   1 -
 assets/download.c3e47cb5.js                            |   1 -
 assets/engine_start_process.f86c8e8a.js                |   1 +
 assets/engineconn-01.b4d20b76.png                      | Bin 0 -> 157753 bytes
 assets/engineconn.efe3f534.js                          |   1 +
 assets/engineconn_manager.563abdf4.js                  |   1 +
 assets/engineconn_plugin.0c1c8f49.js                   |   1 +
 assets/{engins.2a41b1a0.js => engins.a82546f2.js}      |   2 +-
 assets/event.29571be3.js                               |   1 -
 assets/event.b677bf34.js                               |   1 +
 assets/features_bg.2b28bb9d.png                        | Bin 0 -> 120511 bytes
 assets/gateway.b29c03a6.js                             |   1 +
 assets/gateway_server_dispatcher.d2241ca2.png          | Bin 0 -> 47910 bytes
 assets/gateway_server_global.9fae8e50.png              | Bin 0 -> 36652 bytes
 assets/gatway_websocket.3d3c7dfa.png                   | Bin 0 -> 16292 bytes
 assets/hive-config.b2dec89f.png                        | Bin 0 -> 44717 bytes
 assets/hive-run.6aa39a3f.png                           | Bin 0 -> 31403 bytes
 assets/hive.c59e195d.js                                |   1 +
 .../{HowToUse.212b1469.js => how_to_use.24a56e5f.js}   |   2 +-
 assets/index.11bb1268.js                               |   1 +
 assets/index.187b32e3.js                               |   1 +
 assets/index.2b54ad83.css                              |   1 +
 assets/index.2da1dc18.js                               |   1 -
 assets/{index.c93f08c9.js => index.491f620b.js}        |   2 +-
 assets/index.5a6d4e60.js                               |   1 -
 assets/index.6baed6d3.css                              |   1 +
 assets/index.77f4f836.css                              |   1 -
 assets/index.82f016e4.css                              |   1 -
 assets/index.8d1f9740.js                               |   1 -
 assets/index.97098d19.js                               |   1 +
 assets/index.9c41b9ea.js                               |   1 +
 assets/index.9fb4d9d9.js                               |   1 +
 assets/index.b0fb8393.js                               |   1 +
 assets/index.ba4cbe23.js                               |   1 +
 assets/index.c319b82e.js                               |   1 +
 assets/index.c51fb506.js                               |   1 -
 assets/index.c935709d.js                               |   1 +
 assets/{main.3104c8a7.js => index.cd1b8a2e.js}         |   2 +-
 assets/jdbc-conf.7cf06ba9.js                           |   1 +
 assets/jdbc-conf.9520dcb1.png                          | Bin 0 -> 46113 bytes
 assets/jdbc-run.b39db252.png                           | Bin 0 -> 21937 bytes
 assets/jdbc.4fc1629f.js                                |   1 +
 ...bmission.cf4b12e7.js => job_submission.5703dc56.js} |   2 +-
 assets/label-manager-01.530390e5.png                   | Bin 0 -> 39221 bytes
 assets/label_manager.6b95dcc1.js                       |   1 +
 assets/label_manager_builder.caf90f90.png              | Bin 0 -> 62978 bytes
 assets/label_manager_global.91aa80e7.png               | Bin 0 -> 14988 bytes
 assets/label_manager_scorer.fd531e4a.png               | Bin 0 -> 72977 bytes
 assets/linkis-computation-gov-01.6035615d.png          | Bin 0 -> 89527 bytes
 assets/linkis-computation-gov-02.43fad13f.png          | Bin 0 -> 179368 bytes
 assets/linkis-contextservice-01.3cb67fd1.png           | Bin 0 -> 9188 bytes
 assets/linkis-contextservice-02.321a8427.png           | Bin 0 -> 4953 bytes
 assets/linkis-engineconn-plugin-01.ca85467f.png        | Bin 0 -> 21864 bytes
 assets/linkis-intro-01.71fb2144.png                    | Bin 0 -> 413878 bytes
 assets/linkis-intro-03.65d1a7b1.png                    | Bin 0 -> 738141 bytes
 assets/linkis-manager-01.fb5e443a.png                  | Bin 0 -> 183082 bytes
 assets/linkis-microservice-gov-01.2e1292b0.png         | Bin 0 -> 46380 bytes
 assets/linkis-microservice-gov-03.9ece64b6.png         | Bin 0 -> 30388 bytes
 assets/linkis-publicservice-01.bc9338bf.png            | Bin 0 -> 25269 bytes
 assets/linkis.cdbb993f.js                              |   1 +
 assets/linkis.d0790396.js                              |   1 -
 .../{CliManual.8440dc3f.js => linkis_cli.56d856c4.js}  |   2 +-
 assets/logo.fb11029b.png                               | Bin 9114 -> 0 bytes
 assets/manager.6973d707.js                             |   1 +
 assets/microservice_governance.e72bfd46.js             |   1 +
 assets/mobtech.b333dc91.png                            | Bin 11676 -> 0 bytes
 assets/mobtech.e2567e09.png                            | Bin 0 -> 18229 bytes
 assets/orchestration.e1c8bd97.png                      | Bin 0 -> 4545 bytes
 assets/public-enhencement-architecture.6597436f.png    | Bin 0 -> 24844 bytes
 assets/public_enhancement.626e701e.js                  |   1 +
 assets/public_service.8f4dd101.js                      |   1 +
 assets/pyspakr-run.9c36d9ef.png                        | Bin 0 -> 43552 bytes
 assets/python-run.25fd075c.png                         | Bin 0 -> 61451 bytes
 assets/python.17efbf15.js                              |   1 +
 assets/queue-set.3007a0ca.png                          | Bin 0 -> 41298 bytes
 assets/resource-manager-01.86e09124.png                | Bin 0 -> 71086 bytes
 assets/resource_manager.ce0e10f4.js                    |   1 +
 assets/rm-03.8382829b.png                              | Bin 0 -> 52466 bytes
 assets/rm-04.2385c2db.png                              | Bin 0 -> 36324 bytes
 assets/rm-05.347294cd.png                              | Bin 0 -> 34066 bytes
 assets/rm-06.dde9d64d.png                              | Bin 0 -> 44105 bytes
 assets/scala-run.62f19952.png                          | Bin 0 -> 43959 bytes
 assets/searching_keywords.41a60149.png                 | Bin 0 -> 53652 bytes
 assets/shell-run.6a5566b5.png                          | Bin 0 -> 100312 bytes
 assets/shell.06015d78.js                               |   1 +
 assets/spark-conf.9e59a279.png                         | Bin 0 -> 53397 bytes
 assets/spark.e086b785.js                               |   1 +
 .../{structure.1bc4dbfc.js => structure.2309b7ab.js}   |   2 +-
 assets/{team.13ce5e55.css => team.04f1ab61.css}        |   2 +-
 assets/team.c0178c87.js                                |   1 -
 assets/team.e10d896f.js                                |   1 +
 assets/tuning.45470047.js                              |   1 +
 assets/{UserManual.905b8e9a.js => user.4c9df01e.js}    |   2 +-
 assets/{vendor.12a5b039.js => vendor.1180558b.js}      |  10 +++++-----
 assets/wedatasphere_contact_01.ce92bdb6.png            | Bin 0 -> 217762 bytes
 assets/wedatasphere_stack_Linkis.efef3aa3.png          | Bin 0 -> 203466 bytes
 assets/workflow.72652f4e.js                            |   1 +
 .../\344\270\234\346\226\271\351\200\232.4814e53c.png" | Bin 33873 -> 0 bytes
 .../\344\270\234\346\226\271\351\200\232.b2758d5e.png" | Bin 0 -> 6504 bytes
 ...3\345\275\251\347\247\221\346\212\200.d1ffcc7d.png" | Bin 31958 -> 0 bytes
 ...3\345\275\251\347\247\221\346\212\200.f0458dd2.png" | Bin 0 -> 6279 bytes
 ...5\345\233\275\347\224\265\347\247\221.5bf9bcd0.png" | Bin 0 -> 8258 bytes
 ...5\345\233\275\347\224\265\347\247\221.864feafc.jpg" | Bin 5955 -> 0 bytes
 ...2\344\277\241\346\234\215\345\212\241.6242b949.png" | Bin 13177 -> 0 bytes
 ...2\344\277\241\346\234\215\345\212\241.de1dbff8.png" | Bin 0 -> 25306 bytes
 ...5\351\200\232\344\272\221\344\273\223.a785e23f.png" | Bin 20138 -> 0 bytes
 ...5\351\200\232\344\272\221\344\273\223.c02b68a5.png" | Bin 0 -> 25395 bytes
 ...7\345\256\236\351\252\214\345\256\244.46d52eec.png" | Bin 11054 -> 0 bytes
 ...7\345\256\236\351\252\214\345\256\244.657671b0.png" | Bin 0 -> 29997 bytes
 ...1\345\276\222\347\247\221\346\212\200.d6b063f3.png" | Bin 35242 -> 0 bytes
 ...1\345\276\222\347\247\221\346\212\200.e101f4b2.png" | Bin 0 -> 9693 bytes
 "assets/\344\276\235\345\233\276.c76de0a6.png"         | Bin 0 -> 5467 bytes
 "assets/\344\276\235\345\233\276.e1935876.png"         | Bin 41437 -> 0 bytes
 ...1\347\224\250\347\224\237\346\264\273.bce0bb69.png" | Bin 0 -> 5910 bytes
 ...1\346\212\200\345\244\247\345\255\246.79502b9d.jpg" | Bin 12673 -> 0 bytes
 ...1\346\212\200\345\244\247\345\255\246.fcf29603.png" | Bin 0 -> 47926 bytes
 .../\345\223\227\345\225\246\345\225\246.045c3b9e.jpg" | Bin 5990 -> 0 bytes
 .../\345\223\227\345\225\246\345\225\246.2eef0fe4.png" | Bin 0 -> 26929 bytes
 ...0\345\244\226\345\220\214\345\255\246.2bb21f07.png" | Bin 0 -> 12193 bytes
 ...0\345\244\226\345\220\214\345\255\246.9c81d026.png" | Bin 8081 -> 0 bytes
 .../\345\244\251\347\277\274\344\272\221.719b17b2.png" | Bin 0 -> 9317 bytes
 .../\345\244\251\347\277\274\344\272\221.ee336756.png" | Bin 39592 -> 0 bytes
 "assets/\345\271\263\345\256\211.1f145bbc.png"         | Bin 0 -> 7990 bytes
 "assets/\345\271\263\345\256\211.d0212a59.png"         | Bin 20795 -> 0 bytes
 ...5\345\244\247\346\225\260\346\215\256.3da8e88f.png" | Bin 0 -> 24074 bytes
 ...5\345\244\247\346\225\260\346\215\256.d21c18fc.png" | Bin 7862 -> 0 bytes
 ...1\351\231\220\345\205\254\345\217\270.66cf4318.png" | Bin 29500 -> 0 bytes
 ...1\351\231\220\345\205\254\345\217\270.903c953e.png" | Bin 0 -> 9162 bytes
 ...5\351\255\202\347\275\221\347\273\234.3ec071b8.png" | Bin 5553 -> 0 bytes
 ...4\345\255\220\345\210\206\346\234\237.55aa406b.png" | Bin 6968 -> 0 bytes
 ...4\345\255\220\345\210\206\346\234\237.f980f03b.png" | Bin 0 -> 21206 bytes
 ...7\345\272\267\345\250\201\350\247\206.70f8122b.png" | Bin 22412 -> 0 bytes
 ...7\345\272\267\345\250\201\350\247\206.fb60f896.png" | Bin 0 -> 8505 bytes
 ...6\346\203\263\346\261\275\350\275\246.0123a918.png" | Bin 27672 -> 0 bytes
 ...6\346\203\263\346\261\275\350\275\246.c5e2739b.png" | Bin 0 -> 6895 bytes
 .../\347\231\276\346\234\233\344\272\221.77c04429.png" | Bin 0 -> 6790 bytes
 .../\347\231\276\346\234\233\344\272\221.c2c1293f.png" | Bin 24473 -> 0 bytes
 ...3\345\210\233\345\225\206\345\237\216.294fde8b.png" | Bin 24213 -> 0 bytes
 ...3\345\210\233\345\225\206\345\237\216.7f44a468.png" | Bin 0 -> 49107 bytes
 ...2\350\261\241\344\272\221\350\205\276.7417b5e6.png" | Bin 4596 -> 0 bytes
 ...2\350\261\241\344\272\221\350\205\276.929a5839.png" | Bin 0 -> 4757 bytes
 ...4\345\210\233\346\231\272\350\236\215.188edcec.png" | Bin 11438 -> 0 bytes
 ...4\345\210\233\346\231\272\350\236\215.808a8eaa.png" | Bin 0 -> 10382 bytes
 ...2\345\244\251\344\277\241\346\201\257.23b0d23c.png" | Bin 46944 -> 0 bytes
 ...2\345\244\251\344\277\241\346\201\257.e12022d3.png" | Bin 0 -> 11949 bytes
 ...6\344\275\263\347\224\237\346\264\273.26403b56.png" | Bin 0 -> 18851 bytes
 ...6\344\275\263\347\224\237\346\264\273.b508c1dc.jpg" | Bin 5444 -> 0 bytes
 "assets/\350\215\243\350\200\200.5a89cf66.png"         | Bin 0 -> 4898 bytes
 "assets/\350\215\243\350\200\200.ceda8b1e.png"         | Bin 7780 -> 0 bytes
 ...0\346\221\251\350\200\266\344\272\221.36d45d17.png" | Bin 0 -> 26898 bytes
 ...0\346\221\251\350\200\266\344\272\221.63ed5828.png" | Bin 19705 -> 0 bytes
 ...2\346\235\245\346\261\275\350\275\246.422c536e.png" | Bin 0 -> 7464 bytes
 ...2\346\235\245\346\261\275\350\275\246.be672a01.jpg" | Bin 7034 -> 0 bytes
 ...1\346\212\200\345\244\247\345\255\246.3762b76e.jpg" | Bin 12475 -> 0 bytes
 ...1\346\212\200\345\244\247\345\255\246.b4ea0700.png" | Bin 0 -> 10138 bytes
 ...6\347\202\271\350\275\257\344\273\266.389df8d5.png" | Bin 8796 -> 0 bytes
 ...6\347\202\271\350\275\257\344\273\266.e6044237.png" | Bin 0 -> 11299 bytes
 index.html                                             |   7 ++++---
 203 files changed, 68 insertions(+), 35 deletions(-)

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@linkis.apache.org
For additional commands, e-mail: commits-help@linkis.apache.org


[incubator-linkis-website] 01/50: INIT PROJECT

Posted by pe...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

peacewong pushed a commit to branch asf-staging
in repository https://gitbox.apache.org/repos/asf/incubator-linkis-website.git

commit f0aa2f4ef90800b1d1df414abc3c5113fa97e12c
Author: lucaszhu <lu...@webank.com>
AuthorDate: Mon Sep 27 17:27:25 2021 +0800

    INIT PROJECT
---
 .gitignore                    |   5 +
 .vscode/extensions.json       |   3 +
 README.md                     |  15 +++
 index.html                    |  13 +++
 package-lock.json             | 258 ++++++++++++++++++++++++++++++++++++++++++
 package.json                  |  16 +++
 public/favicon.ico            | Bin 0 -> 4286 bytes
 src/App.vue                   |  21 ++++
 src/assets/logo.png           | Bin 0 -> 6849 bytes
 src/components/HelloWorld.vue |  40 +++++++
 src/main.js                   |   4 +
 vite.config.js                |   7 ++
 12 files changed, 382 insertions(+)

diff --git a/.gitignore b/.gitignore
new file mode 100644
index 0000000..d451ff1
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1,5 @@
+node_modules
+.DS_Store
+dist
+dist-ssr
+*.local
diff --git a/.vscode/extensions.json b/.vscode/extensions.json
new file mode 100644
index 0000000..3dc5b08
--- /dev/null
+++ b/.vscode/extensions.json
@@ -0,0 +1,3 @@
+{
+  "recommendations": ["johnsoncodehk.volar"]
+}
diff --git a/README.md b/README.md
new file mode 100644
index 0000000..388afb6
--- /dev/null
+++ b/README.md
@@ -0,0 +1,15 @@
+# Linkis Web For Apache
+
+The project is specially for Linkis, based on the newest `vite` & `vue3`
+
+## Local Development
+
+```
+npm run dev
+```
+
+## Publish
+
+```
+npm run build
+```
\ No newline at end of file
diff --git a/index.html b/index.html
new file mode 100644
index 0000000..030a6ff
--- /dev/null
+++ b/index.html
@@ -0,0 +1,13 @@
+<!DOCTYPE html>
+<html lang="en">
+  <head>
+    <meta charset="UTF-8" />
+    <link rel="icon" href="/favicon.ico" />
+    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
+    <title>Vite App</title>
+  </head>
+  <body>
+    <div id="app"></div>
+    <script type="module" src="/src/main.js"></script>
+  </body>
+</html>
diff --git a/package-lock.json b/package-lock.json
new file mode 100644
index 0000000..194ae65
--- /dev/null
+++ b/package-lock.json
@@ -0,0 +1,258 @@
+{
+  "name": "linkis-web-apache",
+  "version": "0.0.0",
+  "lockfileVersion": 1,
+  "requires": true,
+  "dependencies": {
+    "@babel/parser": {
+      "version": "7.15.7",
+      "resolved": "http://10.107.103.115:8001/@babel/parser/download/@babel/parser-7.15.7.tgz",
+      "integrity": "sha1-DD7UousHsWXfqFs8xFxyczTE7a4="
+    },
+    "@vitejs/plugin-vue": {
+      "version": "1.9.2",
+      "resolved": "http://10.107.103.115:8001/@vitejs/plugin-vue/download/@vitejs/plugin-vue-1.9.2.tgz",
+      "integrity": "sha1-cjTvuMPD1gx+rDUKk1B0qxggrg4=",
+      "dev": true
+    },
+    "@vue/compiler-core": {
+      "version": "3.2.19",
+      "resolved": "http://10.107.103.115:8001/@vue/compiler-core/download/@vue/compiler-core-3.2.19.tgz",
+      "integrity": "sha1-tTfdN3zlH9tk6bMOv7/3zXCmTLk=",
+      "requires": {
+        "@babel/parser": "^7.15.0",
+        "@vue/shared": "3.2.19",
+        "estree-walker": "^2.0.2",
+        "source-map": "^0.6.1"
+      }
+    },
+    "@vue/compiler-dom": {
+      "version": "3.2.19",
+      "resolved": "http://10.107.103.115:8001/@vue/compiler-dom/download/@vue/compiler-dom-3.2.19.tgz",
+      "integrity": "sha1-Bge8kN5q9V/ec7CbPE0L+MtZftg=",
+      "requires": {
+        "@vue/compiler-core": "3.2.19",
+        "@vue/shared": "3.2.19"
+      }
+    },
+    "@vue/compiler-sfc": {
+      "version": "3.2.19",
+      "resolved": "http://10.107.103.115:8001/@vue/compiler-sfc/download/@vue/compiler-sfc-3.2.19.tgz",
+      "integrity": "sha1-1BIZWpjr1JuEYC8XFxkpSh2VSb4=",
+      "requires": {
+        "@babel/parser": "^7.15.0",
+        "@vue/compiler-core": "3.2.19",
+        "@vue/compiler-dom": "3.2.19",
+        "@vue/compiler-ssr": "3.2.19",
+        "@vue/ref-transform": "3.2.19",
+        "@vue/shared": "3.2.19",
+        "estree-walker": "^2.0.2",
+        "magic-string": "^0.25.7",
+        "postcss": "^8.1.10",
+        "source-map": "^0.6.1"
+      }
+    },
+    "@vue/compiler-ssr": {
+      "version": "3.2.19",
+      "resolved": "http://10.107.103.115:8001/@vue/compiler-ssr/download/@vue/compiler-ssr-3.2.19.tgz",
+      "integrity": "sha1-PpHs9w+PlhxfY+rNITm82rmnoHw=",
+      "requires": {
+        "@vue/compiler-dom": "3.2.19",
+        "@vue/shared": "3.2.19"
+      }
+    },
+    "@vue/reactivity": {
+      "version": "3.2.19",
+      "resolved": "http://10.107.103.115:8001/@vue/reactivity/download/@vue/reactivity-3.2.19.tgz",
+      "integrity": "sha1-/G4PAQbylSJoNc/tX/X4TZJ76mU=",
+      "requires": {
+        "@vue/shared": "3.2.19"
+      }
+    },
+    "@vue/ref-transform": {
+      "version": "3.2.19",
+      "resolved": "http://10.107.103.115:8001/@vue/ref-transform/download/@vue/ref-transform-3.2.19.tgz",
+      "integrity": "sha1-zw+YZIa7JoOPvQl0npJ7qxl0VgA=",
+      "requires": {
+        "@babel/parser": "^7.15.0",
+        "@vue/compiler-core": "3.2.19",
+        "@vue/shared": "3.2.19",
+        "estree-walker": "^2.0.2",
+        "magic-string": "^0.25.7"
+      }
+    },
+    "@vue/runtime-core": {
+      "version": "3.2.19",
+      "resolved": "http://10.107.103.115:8001/@vue/runtime-core/download/@vue/runtime-core-3.2.19.tgz",
+      "integrity": "sha1-gHcVt/RyiruE+kqO/b432N20xtM=",
+      "requires": {
+        "@vue/reactivity": "3.2.19",
+        "@vue/shared": "3.2.19"
+      }
+    },
+    "@vue/runtime-dom": {
+      "version": "3.2.19",
+      "resolved": "http://10.107.103.115:8001/@vue/runtime-dom/download/@vue/runtime-dom-3.2.19.tgz",
+      "integrity": "sha1-fov2RXVHA+Ng+hMuS+kRPt8jd7s=",
+      "requires": {
+        "@vue/runtime-core": "3.2.19",
+        "@vue/shared": "3.2.19",
+        "csstype": "^2.6.8"
+      }
+    },
+    "@vue/server-renderer": {
+      "version": "3.2.19",
+      "resolved": "http://10.107.103.115:8001/@vue/server-renderer/download/@vue/server-renderer-3.2.19.tgz",
+      "integrity": "sha1-hwvOyffNruDCGHoWm25jarQ2L7E=",
+      "requires": {
+        "@vue/compiler-ssr": "3.2.19",
+        "@vue/shared": "3.2.19"
+      }
+    },
+    "@vue/shared": {
+      "version": "3.2.19",
+      "resolved": "http://10.107.103.115:8001/@vue/shared/download/@vue/shared-3.2.19.tgz",
+      "integrity": "sha1-ER7D2hgzfYYnREaYTEmSWxsrLdc="
+    },
+    "csstype": {
+      "version": "2.6.18",
+      "resolved": "http://10.107.103.115:8001/csstype/download/csstype-2.6.18.tgz",
+      "integrity": "sha1-mAqLUwhfNK8xNBCvBk8r0kF4Qhg="
+    },
+    "esbuild": {
+      "version": "0.12.29",
+      "resolved": "http://10.107.103.115:8001/esbuild/download/esbuild-0.12.29.tgz",
+      "integrity": "sha1-vmAtt8TceJRKnb3g0eoZ02wfiC0=",
+      "dev": true
+    },
+    "estree-walker": {
+      "version": "2.0.2",
+      "resolved": "http://10.107.103.115:8001/estree-walker/download/estree-walker-2.0.2.tgz",
+      "integrity": "sha1-UvAQF4wqTBF6d1fP6UKtt9LaTKw="
+    },
+    "fsevents": {
+      "version": "2.3.2",
+      "resolved": "http://10.107.103.115:8001/fsevents/download/fsevents-2.3.2.tgz",
+      "integrity": "sha1-ilJveLj99GI7cJ4Ll1xSwkwC/Ro=",
+      "dev": true,
+      "optional": true
+    },
+    "function-bind": {
+      "version": "1.1.1",
+      "resolved": "http://10.107.103.115:8001/function-bind/download/function-bind-1.1.1.tgz",
+      "integrity": "sha1-pWiZ0+o8m6uHS7l3O3xe3pL0iV0=",
+      "dev": true
+    },
+    "has": {
+      "version": "1.0.3",
+      "resolved": "http://10.107.103.115:8001/has/download/has-1.0.3.tgz",
+      "integrity": "sha1-ci18v8H2qoJB8W3YFOAR4fQeh5Y=",
+      "dev": true,
+      "requires": {
+        "function-bind": "^1.1.1"
+      }
+    },
+    "is-core-module": {
+      "version": "2.6.0",
+      "resolved": "http://10.107.103.115:8001/is-core-module/download/is-core-module-2.6.0.tgz",
+      "integrity": "sha1-11U7JSb+Wbkro+QMjfdX7Ipwnhk=",
+      "dev": true,
+      "requires": {
+        "has": "^1.0.3"
+      }
+    },
+    "magic-string": {
+      "version": "0.25.7",
+      "resolved": "http://10.107.103.115:8001/magic-string/download/magic-string-0.25.7.tgz",
+      "integrity": "sha1-P0l9b9NMZpxnmNy4IfLvMfVEUFE=",
+      "requires": {
+        "sourcemap-codec": "^1.4.4"
+      }
+    },
+    "nanocolors": {
+      "version": "0.2.9",
+      "resolved": "http://10.107.103.115:8001/nanocolors/download/nanocolors-0.2.9.tgz",
+      "integrity": "sha1-MZxeenNXGr1g5NJzFQwsuVAXrFs="
+    },
+    "nanoid": {
+      "version": "3.1.28",
+      "resolved": "http://10.107.103.115:8001/nanoid/download/nanoid-3.1.28.tgz",
+      "integrity": "sha1-PAG6wUy2xWgFaQFMxlovJkJMa9Q="
+    },
+    "path-parse": {
+      "version": "1.0.7",
+      "resolved": "http://10.107.103.115:8001/path-parse/download/path-parse-1.0.7.tgz",
+      "integrity": "sha1-+8EUtgykKzDZ2vWFjkvWi77bZzU=",
+      "dev": true
+    },
+    "postcss": {
+      "version": "8.3.8",
+      "resolved": "http://10.107.103.115:8001/postcss/download/postcss-8.3.8.tgz",
+      "integrity": "sha1-nr4qEnOWtLRXCun3dw5/uD2yusE=",
+      "requires": {
+        "nanocolors": "^0.2.2",
+        "nanoid": "^3.1.25",
+        "source-map-js": "^0.6.2"
+      }
+    },
+    "resolve": {
+      "version": "1.20.0",
+      "resolved": "http://10.107.103.115:8001/resolve/download/resolve-1.20.0.tgz",
+      "integrity": "sha1-YpoBP7P3B1XW8LeTXMHCxTeLGXU=",
+      "dev": true,
+      "requires": {
+        "is-core-module": "^2.2.0",
+        "path-parse": "^1.0.6"
+      }
+    },
+    "rollup": {
+      "version": "2.57.0",
+      "resolved": "http://10.107.103.115:8001/rollup/download/rollup-2.57.0.tgz",
+      "integrity": "sha1-wWlEdesi4QIkd8D0Y1/QrIBxMXM=",
+      "dev": true,
+      "requires": {
+        "fsevents": "~2.3.2"
+      }
+    },
+    "source-map": {
+      "version": "0.6.1",
+      "resolved": "http://10.107.103.115:8001/source-map/download/source-map-0.6.1.tgz",
+      "integrity": "sha1-dHIq8y6WFOnCh6jQu95IteLxomM="
+    },
+    "source-map-js": {
+      "version": "0.6.2",
+      "resolved": "http://10.107.103.115:8001/source-map-js/download/source-map-js-0.6.2.tgz",
+      "integrity": "sha1-C7XeYxtBz72mz7qL0FqA79/SOF4="
+    },
+    "sourcemap-codec": {
+      "version": "1.4.8",
+      "resolved": "http://10.107.103.115:8001/sourcemap-codec/download/sourcemap-codec-1.4.8.tgz",
+      "integrity": "sha1-6oBL2UhXQC5pktBaOO8a41qatMQ="
+    },
+    "vite": {
+      "version": "2.5.10",
+      "resolved": "http://10.107.103.115:8001/vite/download/vite-2.5.10.tgz",
+      "integrity": "sha1-xZjjtafhlW/8Uus7NCDRd/wu0qU=",
+      "dev": true,
+      "requires": {
+        "esbuild": "^0.12.17",
+        "fsevents": "~2.3.2",
+        "postcss": "^8.3.6",
+        "resolve": "^1.20.0",
+        "rollup": "^2.38.5"
+      }
+    },
+    "vue": {
+      "version": "3.2.19",
+      "resolved": "http://10.107.103.115:8001/vue/download/vue-3.2.19.tgz",
+      "integrity": "sha1-2iyApqAnHHCX/unjFpKt/Z1WnI8=",
+      "requires": {
+        "@vue/compiler-dom": "3.2.19",
+        "@vue/compiler-sfc": "3.2.19",
+        "@vue/runtime-dom": "3.2.19",
+        "@vue/server-renderer": "3.2.19",
+        "@vue/shared": "3.2.19"
+      }
+    }
+  }
+}
diff --git a/package.json b/package.json
new file mode 100644
index 0000000..5dd57e9
--- /dev/null
+++ b/package.json
@@ -0,0 +1,16 @@
+{
+  "name": "linkis-web-apache",
+  "version": "0.0.0",
+  "scripts": {
+    "dev": "vite",
+    "build": "vite build",
+    "serve": "vite preview"
+  },
+  "dependencies": {
+    "vue": "^3.2.13"
+  },
+  "devDependencies": {
+    "@vitejs/plugin-vue": "^1.9.0",
+    "vite": "^2.5.10"
+  }
+}
diff --git a/public/favicon.ico b/public/favicon.ico
new file mode 100644
index 0000000..df36fcf
Binary files /dev/null and b/public/favicon.ico differ
diff --git a/src/App.vue b/src/App.vue
new file mode 100644
index 0000000..7422330
--- /dev/null
+++ b/src/App.vue
@@ -0,0 +1,21 @@
+<script setup>
+// This starter template is using Vue 3 <script setup> SFCs
+// Check out https://v3.vuejs.org/api/sfc-script-setup.html#sfc-script-setup
+import HelloWorld from './components/HelloWorld.vue'
+</script>
+
+<template>
+  <img alt="Vue logo" src="./assets/logo.png" />
+  <HelloWorld msg="Hello Vue 3 + Vite" />
+</template>
+
+<style>
+#app {
+  font-family: Avenir, Helvetica, Arial, sans-serif;
+  -webkit-font-smoothing: antialiased;
+  -moz-osx-font-smoothing: grayscale;
+  text-align: center;
+  color: #2c3e50;
+  margin-top: 60px;
+}
+</style>
diff --git a/src/assets/logo.png b/src/assets/logo.png
new file mode 100644
index 0000000..f3d2503
Binary files /dev/null and b/src/assets/logo.png differ
diff --git a/src/components/HelloWorld.vue b/src/components/HelloWorld.vue
new file mode 100644
index 0000000..48a5ca9
--- /dev/null
+++ b/src/components/HelloWorld.vue
@@ -0,0 +1,40 @@
+<script setup>
+import { ref } from 'vue'
+
+defineProps({
+  msg: String
+})
+
+const count = ref(0)
+</script>
+
+<template>
+  <h1>{{ msg }}</h1>
+
+  <p>
+    Recommended IDE setup:
+    <a href="https://code.visualstudio.com/" target="_blank">VSCode</a>
+    +
+    <a href="https://github.com/johnsoncodehk/volar" target="_blank">Volar</a>
+  </p>
+
+  <p>
+    <a href="https://vitejs.dev/guide/features.html" target="_blank">
+      Vite Documentation
+    </a>
+    |
+    <a href="https://v3.vuejs.org/" target="_blank">Vue 3 Documentation</a>
+  </p>
+
+  <button type="button" @click="count++">count is: {{ count }}</button>
+  <p>
+    Edit
+    <code>components/HelloWorld.vue</code> to test hot module replacement.
+  </p>
+</template>
+
+<style scoped>
+a {
+  color: #42b983;
+}
+</style>
diff --git a/src/main.js b/src/main.js
new file mode 100644
index 0000000..01433bc
--- /dev/null
+++ b/src/main.js
@@ -0,0 +1,4 @@
+import { createApp } from 'vue'
+import App from './App.vue'
+
+createApp(App).mount('#app')
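
Later commits in this series register vue-router and vue-i18n on this same entry point (the summary for commit 12/50 below shows src/main.js changing by a few lines, though that diff is not reproduced in this thread). A sketch of the eventual shape, assuming default exports from './router' and './i18n':

import { createApp } from 'vue'
import App from './App.vue'
import router from './router'  // assumed default export of src/router.js
import i18n from './i18n'      // default export of src/i18n/index.js (commit 12/50)

// Register both plugins before mounting so routing and $t() are available app-wide.
createApp(App).use(router).use(i18n).mount('#app')
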
diff --git a/vite.config.js b/vite.config.js
new file mode 100644
index 0000000..315212d
--- /dev/null
+++ b/vite.config.js
@@ -0,0 +1,7 @@
+import { defineConfig } from 'vite'
+import vue from '@vitejs/plugin-vue'
+
+// https://vitejs.dev/config/
+export default defineConfig({
+  plugins: [vue()]
+})

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@linkis.apache.org
For additional commands, e-mail: commits-help@linkis.apache.org


[incubator-linkis-website] 05/50: ADD: Add two homepage modules

Posted by pe...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

peacewong pushed a commit to branch asf-staging
in repository https://gitbox.apache.org/repos/asf/incubator-linkis-website.git

commit 4324084960a47cc24771788ed05308e9bfff675f
Author: lucaszhu <lu...@webank.com>
AuthorDate: Wed Sep 29 16:21:13 2021 +0800

    ADD: Add two homepage modules
---
 src/pages/home.vue | 90 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 90 insertions(+)

diff --git a/src/pages/home.vue b/src/pages/home.vue
index c722258..321f7c2 100644
--- a/src/pages/home.vue
+++ b/src/pages/home.vue
@@ -8,12 +8,102 @@
         <a href="/" class="corner-botton white">GitHub</a>
       </div>
     </div>
+    <h1 class="home-block-title text-center">Computation Governance Concept</h1>
+    <h1 class="home-block-title text-center">Core Features</h1>
+    <div class="features home-block">
+      <div class="feature-item">
+        <div class="item-content">
+          <h3 class="item-title">Connectivity</h3>
+          <p class="item-desc">Simplify the operation environment; decouple the upper and lower layers, which make the upper layer insensitive when bottom layers changed</p>
+        </div>
+      </div>
+      <div class="feature-item">
+        <div class="item-content">
+          <h3 class="item-title">Scalability</h3>
+          <p class="item-desc">Distributed microservice architecture with great scalability and extensibility; quickly integrate with the new underlying engine</p>
+        </div>
+      </div>
+      <div class="feature-item">
+        <div class="item-content">
+          <h3 class="item-title">Controllability</h3>
+          <p class="item-desc">Converge engine entrance, unify identity verification, high-risk prevention and control, audit records; label-based multi-level refined resource control and recovery capabilities</p>
+        </div>
+      </div>
+      <div class="feature-item">
+        <div class="item-content">
+          <h3 class="item-title">Orchestration</h3>
+          <p class="item-desc">Computing strategy design based on active-active, mixed computing, transcation Orchestrator Service</p>
+        </div>
+      </div>
+      <div class="feature-item">
+        <div class="item-content">
+          <h3 class="item-title">Reusability</h3>
+          <p class="item-desc">Highly reduced the back-end development workload of upper-level applications development; Swiftly and efficiently build a data platform tool suite based on Linkis</p>
+        </div>
+      </div>
+    </div>
+    <h1 class="home-block-title text-center">Showcase</h1>
+    <div class="show-case home-block">
+      <div class="case-item"></div>
+      <div class="case-item"></div>
+      <div class="case-item"></div>
+      <div class="case-item"></div>
+      <div class="case-item"></div>
+      <div class="case-item"></div>
+      <div class="case-item"></div>
+    </div>
   </div>
 </template>
 <style lang="less" scoped>
   @import url('/src/style/base.less');
 
   .home-page {
+    .home-block-title{
+      font-size: 32px;
+      line-height: 46px;
+    }
+    .home-block{
+      padding: 20px 0 88px;
+    }
+    .show-case{
+      display: grid;
+      grid-template-columns: repeat(5, 1fr);
+      grid-row-gap: 20px;
+      grid-column-gap: 20px;
+      .case-item{
+        height: 88px;
+        background: #FFFFFF;
+        box-shadow: 0 1px 20px 0 rgba(15,18,34,0.10);
+        border-radius: 8px;
+      }
+    }
+    .features{
+      display: grid;
+      grid-template-columns: repeat(5, 1fr);
+      grid-column-gap: 20px;
+      .feature-item{
+        height: 370px;
+        background: #FFFFFF;
+        box-shadow: 0 0 16px 0 rgba(211,211,211,0.50);
+        border-radius: 10px;
+        .item-content{
+          padding: 30px 20px;
+          text-align: left;
+          .item-title{
+            margin-bottom: 20px;
+            color: #393939;
+            line-height: 26px;
+            font-size: 18px;
+            font-weight: 500;
+          }
+          .item-desc{
+            color: #666666;
+            line-height: 22px;
+            font-weight: 400;
+          }
+        }
+      }
+    }
     .banner {
       padding: 168px 0;
       .home-title {

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@linkis.apache.org
For additional commands, e-mail: commits-help@linkis.apache.org


[incubator-linkis-website] 11/50: FIX: Adjust the color of visited links

Posted by pe...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

peacewong pushed a commit to branch asf-staging
in repository https://gitbox.apache.org/repos/asf/incubator-linkis-website.git

commit 7ef9b63ade48ea75f07bbf2a44ffad8ffd2638e5
Author: lucaszhu <lu...@webank.com>
AuthorDate: Thu Sep 30 15:23:24 2021 +0800

    FIX: Adjust the color of visited links
---
 src/style/base.less | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/src/style/base.less b/src/style/base.less
index c4926ca..f44d3b9 100644
--- a/src/style/base.less
+++ b/src/style/base.less
@@ -36,6 +36,10 @@ a {
   text-decoration: none;
 }
 
+a:visited {
+  color: @enhance-color;
+}
+
 .ctn-block {
   width: 1200px;
   padding: 0 20px;

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@linkis.apache.org
For additional commands, e-mail: commits-help@linkis.apache.org


[incubator-linkis-website] 12/50: ADD: Add language switching

Posted by pe...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

peacewong pushed a commit to branch asf-staging
in repository https://gitbox.apache.org/repos/asf/incubator-linkis-website.git

commit 1e6cb34678a259804493283cc711a2fbcfc06848
Author: lucaszhu <lu...@webank.com>
AuthorDate: Fri Oct 8 16:47:53 2021 +0800

    ADD: Add language switching
---
 package-lock.json  | 53 ++++++++++++++++++++++++++++++++++
 package.json       |  1 +
 src/App.vue        | 84 ++++++++++++++++++++++++++++++++++++++++++++++++------
 src/i18n/en.json   | 10 +++++++
 src/i18n/index.js  | 48 +++++++++++++++++++++++++++++++
 src/i18n/zh.json   | 10 +++++++
 src/main.js        |  4 ++-
 src/pages/home.vue |  2 +-
 8 files changed, 202 insertions(+), 10 deletions(-)

diff --git a/package-lock.json b/package-lock.json
index d195bbd..a56c21f 100644
--- a/package-lock.json
+++ b/package-lock.json
@@ -9,6 +9,48 @@
       "resolved": "http://10.107.103.115:8001/@babel/parser/download/@babel/parser-7.15.7.tgz",
       "integrity": "sha1-DD7UousHsWXfqFs8xFxyczTE7a4="
     },
+    "@intlify/core-base": {
+      "version": "9.2.0-beta.11",
+      "resolved": "http://10.107.103.115:8001/@intlify/core-base/download/@intlify/core-base-9.2.0-beta.11.tgz",
+      "integrity": "sha1-G1eahTLsL33C3c95myXDr+bsq8Q=",
+      "requires": {
+        "@intlify/devtools-if": "9.2.0-beta.11",
+        "@intlify/message-compiler": "9.2.0-beta.11",
+        "@intlify/shared": "9.2.0-beta.11",
+        "@intlify/vue-devtools": "9.2.0-beta.11"
+      }
+    },
+    "@intlify/devtools-if": {
+      "version": "9.2.0-beta.11",
+      "resolved": "http://10.107.103.115:8001/@intlify/devtools-if/download/@intlify/devtools-if-9.2.0-beta.11.tgz",
+      "integrity": "sha1-uclQ7jpkbcobno+4f3qQsth0pPs=",
+      "requires": {
+        "@intlify/shared": "9.2.0-beta.11"
+      }
+    },
+    "@intlify/message-compiler": {
+      "version": "9.2.0-beta.11",
+      "resolved": "http://10.107.103.115:8001/@intlify/message-compiler/download/@intlify/message-compiler-9.2.0-beta.11.tgz",
+      "integrity": "sha1-KkH9WL+UcZFAVTdAbqENN6assE0=",
+      "requires": {
+        "@intlify/shared": "9.2.0-beta.11",
+        "source-map": "0.6.1"
+      }
+    },
+    "@intlify/shared": {
+      "version": "9.2.0-beta.11",
+      "resolved": "http://10.107.103.115:8001/@intlify/shared/download/@intlify/shared-9.2.0-beta.11.tgz",
+      "integrity": "sha1-m+tA3gyIT9fWIxFEWgZmOZOW3pc="
+    },
+    "@intlify/vue-devtools": {
+      "version": "9.2.0-beta.11",
+      "resolved": "http://10.107.103.115:8001/@intlify/vue-devtools/download/@intlify/vue-devtools-9.2.0-beta.11.tgz",
+      "integrity": "sha1-MCJYDdeYQCU4D8cgyLyaVPUJchI=",
+      "requires": {
+        "@intlify/core-base": "9.2.0-beta.11",
+        "@intlify/shared": "9.2.0-beta.11"
+      }
+    },
     "@vitejs/plugin-vue": {
       "version": "1.9.2",
       "resolved": "http://10.107.103.115:8001/@vitejs/plugin-vue/download/@vitejs/plugin-vue-1.9.2.tgz",
@@ -420,6 +462,17 @@
         "@vue/shared": "3.2.19"
       }
     },
+    "vue-i18n": {
+      "version": "9.2.0-beta.11",
+      "resolved": "http://10.107.103.115:8001/vue-i18n/download/vue-i18n-9.2.0-beta.11.tgz",
+      "integrity": "sha1-GZL+2Kagp0GxXOfHM+QVgSbuezo=",
+      "requires": {
+        "@intlify/core-base": "9.2.0-beta.11",
+        "@intlify/shared": "9.2.0-beta.11",
+        "@intlify/vue-devtools": "9.2.0-beta.11",
+        "@vue/devtools-api": "^6.0.0-beta.13"
+      }
+    },
     "vue-router": {
       "version": "4.0.11",
       "resolved": "http://10.107.103.115:8001/vue-router/download/vue-router-4.0.11.tgz",
diff --git a/package.json b/package.json
index 8a18f20..1e0962d 100644
--- a/package.json
+++ b/package.json
@@ -8,6 +8,7 @@
   },
   "dependencies": {
     "vue": "^3.2.13",
+    "vue-i18n": "^9.2.0-beta.11",
     "vue-router": "^4.0.11"
   },
   "devDependencies": {
diff --git a/src/App.vue b/src/App.vue
index f8720df..29efbc4 100644
--- a/src/App.vue
+++ b/src/App.vue
@@ -1,6 +1,16 @@
 <script setup>
 // This starter template is using Vue 3 <script setup> SFCs
 // Check out https://v3.vuejs.org/api/sfc-script-setup.html#sfc-script-setup
+import { ref } from "vue";
+
+// Initialize the language
+const lang = ref(localStorage.getItem('locale'));
+
+// Switch the language: persist the choice, then reload to apply it
+const switchLang = (lang) => {
+  localStorage.setItem('locale', lang);
+  location.reload();
+}
 </script>
 
 <template>
@@ -12,13 +22,21 @@
       </div>
       <span class="nav-logo-badge">Incubating</span>
       <div class="menu-list">
-        <router-link class="menu-item" to="/">Home</router-link>
-        <router-link class="menu-item" to="/docs">Docs</router-link>
-        <router-link class="menu-item" to="/faq">FAQ</router-link>
-        <router-link class="menu-item" to="/download">Download</router-link>
-        <router-link class="menu-item" to="/blog">Blog</router-link>
-        <router-link class="menu-item" to="/team">Team</router-link>
-        <div class="menu-item">Language</div>
+        <router-link class="menu-item" to="/"><span class="label">Home</span></router-link>
+        <router-link class="menu-item" to="/docs"><span class="label">Docs</span></router-link>
+        <router-link class="menu-item" to="/faq"><span class="label">FAQ</span></router-link>
+        <router-link class="menu-item" to="/download"><span class="label">Download</span></router-link>
+        <router-link class="menu-item" to="/blog"><span class="label">Blog</span></router-link>
+        <router-link class="menu-item" to="/team"><span class="label">Team</span></router-link>
+        <div class="menu-item language">
+          Language
+          <div class="dropdown-menu">
+            <ul class="dropdown-menu-ctn">
+              <li class="dropdown-menu-item" :class="{active: lang === 'zh-CN'}" @click="switchLang('zh-CN')">简体中文</li>
+              <li class="dropdown-menu-item" :class="{active: lang === 'en'}" @click="switchLang('en')">English</li>
+            </ul>
+          </div>
+        </div>
       </div>
     </div>
   </nav>
@@ -90,9 +108,59 @@
       cursor: pointer;
       &:hover,
       &.router-link-exact-active{
-        color: @active-color;
+        .label{
+          color: @active-color;
+        }
         border-color: @active-color;
       }
+      &.language{
+        position: relative;
+        &::after{
+          content: '';
+          display: inline-block;
+          vertical-align: middle;
+          width: 0;
+          height: 0;
+          margin-left: 8px;
+          border-bottom: 6px solid #ccc;
+          border-left: 4px solid transparent;
+          border-right: 4px solid transparent;
+          transition: all ease .2s;
+        }
+        &:hover{
+          &::after{
+            transform: rotate(180deg);
+          }
+          .dropdown-menu{
+            display: block;
+          }
+        }
+        .dropdown-menu{
+          display: none;
+          position: absolute;
+          z-index: 10;
+          top: 20px;
+          left: 0;
+          padding-top: 40px;
+          .dropdown-menu-ctn{
+            padding: 10px 0;
+            background: #fff;
+            border-radius: 4px;
+            border: 1px solid #FFFFFF;
+            box-shadow: 0 2px 12px 0 rgba(15,18,34,0.10);
+            .dropdown-menu-item{
+              font-size: 14px;
+              line-height: 32px;
+              padding: 0 16px;
+              cursor: pointer;
+              &.active,
+              &:hover{
+                color: @active-color;
+              }
+            }
+          }
+        }
+      }
     }
   }
 }
diff --git a/src/i18n/en.json b/src/i18n/en.json
new file mode 100644
index 0000000..83add9c
--- /dev/null
+++ b/src/i18n/en.json
@@ -0,0 +1,10 @@
+{
+  "message": {
+    "common": {},
+    "home": {
+      "banner": {
+        "slogan": "Decouple the upper applications and the underlying data engines by building a computation middleware layer."
+      }
+    }
+  }
+}
\ No newline at end of file
diff --git a/src/i18n/index.js b/src/i18n/index.js
new file mode 100644
index 0000000..46092cd
--- /dev/null
+++ b/src/i18n/index.js
@@ -0,0 +1,48 @@
+/*
+ * Copyright 2019 WeBank
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ *  you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+import { createI18n } from 'vue-i18n'
+import en from './en.json';
+import zh from './zh.json';
+
+// First check whether a locale has been set; if not, use the browser language
+let lang = 'en';
+const locale = localStorage.getItem('locale');
+if (locale) {
+  lang = locale;
+} else {
+  if (navigator.language === 'zh-CN') {
+    lang = 'zh-CN';
+    localStorage.setItem('locale', 'zh-CN');
+  } else {
+    lang = 'en';
+    localStorage.setItem('locale', 'en');
+  }
+}
+
+const messages = {
+  'en': en,
+  'zh-CN': zh
+};
+
+const i18n = createI18n({
+  locale: lang, // set locale
+  fallbackLocale: 'en', // set fallback locale
+  messages, // set locale messages
+})
+
+export default i18n;
\ No newline at end of file
diff --git a/src/i18n/zh.json b/src/i18n/zh.json
new file mode 100644
index 0000000..844b8b7
--- /dev/null
+++ b/src/i18n/zh.json
@@ -0,0 +1,10 @@
+{
+  "message": {
+    "common": {},
+    "home": {
+      "banner": {
+        "slogan": "中文的Decouple the upper applications and the underlying data engines by building a computation middleware layer."
+      }
+    }
+  }
+}
\ No newline at end of file
diff --git a/src/main.js b/src/main.js
index c8d6c67..1772dd3 100644
--- a/src/main.js
+++ b/src/main.js
@@ -1,7 +1,8 @@
 import { createApp } from 'vue'
 import { createRouter, createWebHashHistory } from 'vue-router'
 import routes from './router';
-import App from './App.vue'
+import App from './App.vue';
+import i18n from './i18n';
 
 const router = createRouter({
   history: createWebHashHistory(),
@@ -14,5 +15,6 @@ router.resolve({
 
 const app = createApp(App);
 app.use(router);
+app.use(i18n);
 
 app.mount('#app')
diff --git a/src/pages/home.vue b/src/pages/home.vue
index d5075ec..311f685 100644
--- a/src/pages/home.vue
+++ b/src/pages/home.vue
@@ -2,7 +2,7 @@
   <div class="ctn-block home-page">
     <div class="banner text-center">
       <h1 class="home-title"><span class="apache">Apache</span> <span class="linkis">Linkis</span> <span class="badge">Incubating</span></h1>
-      <p class="home-desc">Decouple the upper applications and the underlying data<br>engines by building a middleware layer.</p>
+      <p class="home-desc">{{$t('message.home.banner.slogan')}}</p>
       <div class="botton-row center">
         <a href="/" class="corner-botton black">Get Started</a>
         <a href="/" class="corner-botton white">GitHub</a>
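
A minimal sketch of how a component could consume these messages through the vue-i18n 9.x Composition API, assuming createI18n is configured with legacy: false (the commit above keeps the default legacy mode, where the $t template helper is used); the component wiring below is illustrative only:

    // Hypothetical component script -- not part of the commit above.
    // Assumes createI18n({ legacy: false, ... }) so that useI18n() is usable.
    import { useI18n } from 'vue-i18n';

    export default {
      setup() {
        // t() resolves keys against the JSON message files under src/i18n;
        // locale is a ref holding the active language code.
        const { t, locale } = useI18n();

        // Reactive locale switch: re-renders translations in place, as an
        // alternative to the localStorage + location.reload() approach above.
        const switchLang = (lang) => {
          locale.value = lang;
          localStorage.setItem('locale', lang);
        };

        return { t, switchLang };
      },
    };

In the template, {{ t('message.home.banner.slogan') }} would then render the same string as the $t call shown in the home.vue hunk.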

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@linkis.apache.org
For additional commands, e-mail: commits-help@linkis.apache.org


[incubator-linkis-website] 03/50: ADD FOOTER

Posted by pe...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

peacewong pushed a commit to branch asf-staging
in repository https://gitbox.apache.org/repos/asf/incubator-linkis-website.git

commit fa8bab824f7be62c5869ca46007fc85534977055
Author: lucaszhu <lu...@webank.com>
AuthorDate: Wed Sep 29 11:21:09 2021 +0800

    ADD FOOTER
---
 src/App.vue         | 64 ++++++++++++++++++++++++++++++++++++++++++++++++++---
 src/style/base.less |  6 ++++-
 2 files changed, 66 insertions(+), 4 deletions(-)

diff --git a/src/App.vue b/src/App.vue
index ca52ac6..135a713 100644
--- a/src/App.vue
+++ b/src/App.vue
@@ -1,11 +1,11 @@
 <script setup>
 // This starter template is using Vue 3 <script setup> SFCs
 // Check out https://v3.vuejs.org/api/sfc-script-setup.html#sfc-script-setup
-import HelloWorld from './components/HelloWorld.vue'
 </script>
 
 <template>
-  <div class="nav">
+<div>
+  <nav class="nav">
     <div class="ctn-block">
       <div class="nav-logo">
         Apache Linkis
@@ -21,7 +21,36 @@ import HelloWorld from './components/HelloWorld.vue'
         <div class="menu-item">Language</div>
       </div>
     </div>
-  </div>
+  </nav>
+  <router-view/>
+  <footer class="footer">
+    <div class="ctn-block">
+      <div class="footer-links-row">
+        <div class="footer-links">
+          <h3 class="links-title">Linkis</h3>
+          <a href="" class="links-item">Documentation</a>
+          <a href="" class="links-item">Events</a>
+          <a href="" class="links-item">Releases</a>
+        </div>
+        <div class="footer-links">
+          <h3 class="links-title">Community</h3>
+          <a href="" class="links-item">GitHub</a>
+          <a href="" class="links-item">Issue Tracker</a>
+          <a href="" class="links-item">Pull Requests</a>
+        </div>
+        <div class="footer-links">
+          <h3 class="links-title">Apache Software Foundation</h3>
+          <a href="" class="links-item">Foundation</a>
+          <a href="" class="links-item">License</a>
+          <a href="" class="links-item">Sponsorship</a>
+          <a href="" class="links-item">Thanks</a>
+        </div>
+      </div>
+      <p class="footer-desc">Apache Linkis (Incubating) is an effort undergoing incubation at The Apache Software Foundation, sponsored by the Apache Incubator. Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful ASF projects. While incubation status is not necessarily a reflection of the completeness or stability of the code [...]
+      <p class="footer-desc text-center">Copyright © 2021 The Apache Software Foundation. Apache Linkis, Apache Incubator, Linkis, Apache, the Apache feather logo, the Apache<br>Linkis logo and the Apache Incubator project logo are trademarks of The Apache Software Foundation.</p>
+    </div>
+  </footer>
+</div>
 </template>
 
 <style lang="less">
@@ -51,6 +80,7 @@ import HelloWorld from './components/HelloWorld.vue'
   .menu-list{
     flex: 1;
     display: flex;
+    justify-content: flex-end;
     .menu-item{
       margin-left: 16px;
       margin-right: 16px;
@@ -66,4 +96,32 @@ import HelloWorld from './components/HelloWorld.vue'
     }
   }
 }
+.footer{
+  padding-top: 40px;
+  background: #F9FAFB;
+  .footer-desc{
+    padding: 0 20px 30px;
+    color: #999999;
+    font-weight: 400;
+  }
+  .footer-links-row{
+    display: flex;
+    font-size: 16px;
+    .footer-links{
+      flex: 1;
+      padding: 20px;
+      .links-title{
+        margin-bottom: 16px;
+      }
+      .links-item{
+        display: block;
+        margin-bottom: 10px;
+        color: rgba(15,18,34,0.65);
+        &:hover{
+          text-decoration: underline;
+        }
+      }
+    }
+  }
+}
 </style>
diff --git a/src/style/base.less b/src/style/base.less
index ca48f5c..7f9d360 100644
--- a/src/style/base.less
+++ b/src/style/base.less
@@ -40,4 +40,8 @@ a {
   width: 1200px;
   padding: 0 20px;
   margin: 0 auto;
-}
\ No newline at end of file
+}
+
+.text-center {
+  text-align: center;
+}

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@linkis.apache.org
For additional commands, e-mail: commits-help@linkis.apache.org


[incubator-linkis-website] 15/50: ADD: add images

Posted by pe...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

peacewong pushed a commit to branch asf-staging
in repository https://gitbox.apache.org/repos/asf/incubator-linkis-website.git

commit 7ab959ce278809e3a8ef0f547cda4d522caca007
Author: lucaszhu <lu...@webank.com>
AuthorDate: Mon Oct 11 15:57:50 2021 +0800

    ADD: add images
---
 src/assets/docs/deploy/Linkis1.0_combined_eureka.png | Bin 0 -> 134418 bytes
 src/docs/deploy/linkis_zh.md                         |   2 +-
 2 files changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/assets/docs/deploy/Linkis1.0_combined_eureka.png b/src/assets/docs/deploy/Linkis1.0_combined_eureka.png
new file mode 100644
index 0000000..809dbee
Binary files /dev/null and b/src/assets/docs/deploy/Linkis1.0_combined_eureka.png differ
diff --git a/src/docs/deploy/linkis_zh.md b/src/docs/deploy/linkis_zh.md
index 523ac90..e1c1fb6 100644
--- a/src/docs/deploy/linkis_zh.md
+++ b/src/docs/deploy/linkis_zh.md
@@ -249,7 +249,7 @@ The engines adapted by default in Linkis 1.0 are listed as follows:
 
  By default, 8 Linkis microservices are started; among them, the linkis-cg-engineconn service shown in the figure below is only started when a task runs
    
-![Linkis1.0_Eureka](../Images/deployment/Linkis1.0_combined_eureka.png)
+![Linkis1.0_Eureka](../../assets/docs/deploy/Linkis1.0_combined_eureka.png)
 
#### (3) Check whether the services are running normally
1. After the services have started successfully, you can verify that they are working properly by installing the front-end management console. [Click here for the console installation document](https://github.com/WeBankFinTech/Linkis-Doc/blob/master/zh_CN/Deployment_Documents/%E5%89%8D%E7%AB%AF%E7%AE%A1%E7%90%86%E5%8F%B0%E5%AE%89%E8%A3%85%E6%96%87%E6%A1%A3.md)

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@linkis.apache.org
For additional commands, e-mail: commits-help@linkis.apache.org


[incubator-linkis-website] 33/50: UPDATE: improve the styling of the Our Users section

Posted by pe...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

peacewong pushed a commit to branch asf-staging
in repository https://gitbox.apache.org/repos/asf/incubator-linkis-website.git

commit a55015337b4a70ef4924cca5cb74c63579fd3fd4
Author: lucaszhu <lu...@webank.com>
AuthorDate: Mon Oct 18 14:30:39 2021 +0800

    UPDATE: improve the styling of the Our Users section
---
 src/pages/home/index.vue | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/src/pages/home/index.vue b/src/pages/home/index.vue
index a619193..2b942f9 100644
--- a/src/pages/home/index.vue
+++ b/src/pages/home/index.vue
@@ -151,7 +151,12 @@
         background: #FFFFFF;
         box-shadow: 0 1px 20px 0 rgba(15,18,34,0.10);
         border-radius: 8px;
-        align-content: center
+        align-items: center;
+        justify-content: center;
+        > img {
+          max-width: 90%;
+          max-height: 90%;
+        }
       }
     }
     .features{

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@linkis.apache.org
For additional commands, e-mail: commits-help@linkis.apache.org


[incubator-linkis-website] 16/50: add some docs and faq

Posted by pe...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

peacewong pushed a commit to branch asf-staging
in repository https://gitbox.apache.org/repos/asf/incubator-linkis-website.git

commit ef901667dfc25db2e989814ea3e382fe36c4bbc5
Author: casionone <ca...@gmail.com>
AuthorDate: Mon Oct 11 20:39:54 2021 +0800

    add some docs and faq
---
 Linkis-Doc-master/LANGS.md                         |   2 +
 Linkis-Doc-master/README.md                        | 114 ++++++
 Linkis-Doc-master/README_CN.md                     | 105 ++++++
 .../en_US/API_Documentations/JDBC_API_Document.md  |  45 +++
 ...sk_submission_and_execution_RestAPI_document.md | 170 +++++++++
 .../en_US/API_Documentations/Login_API.md          | 125 +++++++
 .../en_US/API_Documentations/README.md             |   8 +
 .../EngineConn/README.md                           |  99 +++++
 .../EngineConnManager/Images/ECM-01.png            | Bin 0 -> 34340 bytes
 .../EngineConnManager/Images/ECM-02.png            | Bin 0 -> 25340 bytes
 .../EngineConnManager/README.md                    |  45 +++
 .../EngineConnPlugin/README.md                     |  68 ++++
 .../LinkisManager/AppManager.md                    |  33 ++
 .../LinkisManager/LabelManager.md                  |  38 ++
 .../LinkisManager/README.md                        |  41 +++
 .../LinkisManager/ResourceManager.md               | 132 +++++++
 .../Computation_Governance_Services/README.md      |  40 +++
 .../DifferenceBetween1.0&0.x.md                    |  50 +++
 .../How_to_add_an_EngineConn.md                    | 105 ++++++
 ...submission_preparation_and_execution_process.md | 138 +++++++
 .../Microservice_Governance_Services/Gateway.md    |  34 ++
 .../Microservice_Governance_Services/README.md     |  32 ++
 .../Public_Enhancement_Services/BML.md             |  93 +++++
 .../ContextService/ContextService_Cache.md         |  95 +++++
 .../ContextService/ContextService_Client.md        |  61 ++++
 .../ContextService/ContextService_HighAvailable.md |  86 +++++
 .../ContextService/ContextService_Listener.md      |  33 ++
 .../ContextService/ContextService_Persistence.md   |   8 +
 .../ContextService/ContextService_Search.md        | 127 +++++++
 .../ContextService/ContextService_Service.md       |  53 +++
 .../ContextService/README.md                       | 123 +++++++
 .../Public_Enhancement_Services/PublicService.md   |  34 ++
 .../Public_Enhancement_Services/README.md          |  91 +++++
 .../en_US/Architecture_Documents/README.md         |  18 +
 .../Deployment_Documents/Cluster_Deployment.md     |  98 +++++
 .../EngineConnPlugin_installation_document.md      |  82 +++++
 ...75\262\345\276\256\346\234\215\345\212\241.png" | Bin 0 -> 130148 bytes
 .../Installation_Hierarchical_Structure.md         | 198 ++++++++++
 .../Deployment_Documents/Quick_Deploy_Linkis1.0.md | 246 +++++++++++++
 .../en_US/Development_Documents/Contributing.md    | 195 ++++++++++
 .../Development_Specification/API.md               | 143 ++++++++
 .../Development_Specification/Concurrent.md        |  17 +
 .../Development_Specification/Exception_Catch.md   |   9 +
 .../Development_Specification/Exception_Throws.md  |  52 +++
 .../Development_Specification/Log.md               |  13 +
 .../Development_Specification/Path_Usage.md        |  15 +
 .../Development_Specification/README.md            |   9 +
 .../Linkis_Compilation_Document.md                 | 135 +++++++
 .../Linkis_Compile_and_Package.md                  | 155 ++++++++
 .../en_US/Development_Documents/Linkis_DEBUG.md    | 141 ++++++++
 .../New_EngineConn_Development.md                  |  77 ++++
 .../Hive_User_Manual.md                            |  81 +++++
 .../JDBC_User_Manual.md                            |  53 +++
 .../Python_User_Manual.md                          |  61 ++++
 .../en_US/Engine_Usage_Documentations/README.md    |  25 ++
 .../Shell_User_Manual.md                           |  55 +++
 .../Spark_User_Manual.md                           |  91 +++++
 .../add_an_EngineConn_flow_chart.png               | Bin 0 -> 59893 bytes
 .../Architecture/EngineConn/engineconn-01.png      | Bin 0 -> 157753 bytes
 .../Gateway/gateway_server_dispatcher.png          | Bin 0 -> 83743 bytes
 .../Architecture/Gateway/gateway_server_global.png | Bin 0 -> 85272 bytes
 .../Architecture/Gateway/gatway_websocket.png      | Bin 0 -> 37769 bytes
 .../execution.png                                  | Bin 0 -> 31078 bytes
 .../orchestrate.png                                | Bin 0 -> 31095 bytes
 .../overall.png                                    | Bin 0 -> 231192 bytes
 .../physical_tree.png                              | Bin 0 -> 79471 bytes
 .../result_acquisition.png                         | Bin 0 -> 41007 bytes
 .../submission.png                                 | Bin 0 -> 12946 bytes
 .../LabelManager/label_manager_builder.png         | Bin 0 -> 62978 bytes
 .../LabelManager/label_manager_global.png          | Bin 0 -> 14988 bytes
 .../LabelManager/label_manager_scorer.png          | Bin 0 -> 72977 bytes
 .../Linkis0.X-NewEngine-architecture.png           | Bin 0 -> 244826 bytes
 .../Architecture/Linkis0.X-services-list.png       | Bin 0 -> 66821 bytes
 .../Linkis1.0-EngineConn-architecture.png          | Bin 0 -> 157753 bytes
 .../Linkis1.0-NewEngine-architecture.png           | Bin 0 -> 26523 bytes
 .../Images/Architecture/Linkis1.0-architecture.png | Bin 0 -> 212362 bytes
 .../Linkis1.0-newEngine-initialization.png         | Bin 0 -> 48313 bytes
 .../Architecture/Linkis1.0-services-list.png       | Bin 0 -> 85890 bytes
 .../Architecture/PublicEnhencementArchitecture.png | Bin 0 -> 47158 bytes
 .../ContextService/linkis-contextservice-01.png    | Bin 0 -> 22692 bytes
 .../ContextService/linkis-contextservice-02.png    | Bin 0 -> 10655 bytes
 .../linkis-contextservice-cache-01.png             | Bin 0 -> 11881 bytes
 .../linkis-contextservice-cache-02.png             | Bin 0 -> 23902 bytes
 .../linkis-contextservice-cache-03.png             | Bin 0 -> 109334 bytes
 .../linkis-contextservice-cache-04.png             | Bin 0 -> 36161 bytes
 .../linkis-contextservice-cache-05.png             | Bin 0 -> 2265 bytes
 .../linkis-contextservice-client-01.png            | Bin 0 -> 54438 bytes
 .../linkis-contextservice-client-02.png            | Bin 0 -> 93036 bytes
 .../linkis-contextservice-client-03.png            | Bin 0 -> 34839 bytes
 .../ContextService/linkis-contextservice-ha-01.png | Bin 0 -> 38439 bytes
 .../ContextService/linkis-contextservice-ha-02.png | Bin 0 -> 21982 bytes
 .../ContextService/linkis-contextservice-ha-03.png | Bin 0 -> 91788 bytes
 .../ContextService/linkis-contextservice-ha-04.png | Bin 0 -> 40733 bytes
 .../linkis-contextservice-listener-01.png          | Bin 0 -> 24414 bytes
 .../linkis-contextservice-listener-02.png          | Bin 0 -> 46152 bytes
 .../linkis-contextservice-listener-03.png          | Bin 0 -> 32597 bytes
 .../linkis-contextservice-persistence-01.png       | Bin 0 -> 198797 bytes
 .../linkis-contextservice-search-01.png            | Bin 0 -> 33731 bytes
 .../linkis-contextservice-search-02.png            | Bin 0 -> 26768 bytes
 .../linkis-contextservice-search-03.png            | Bin 0 -> 33312 bytes
 .../linkis-contextservice-search-04.png            | Bin 0 -> 25192 bytes
 .../linkis-contextservice-search-05.png            | Bin 0 -> 24757 bytes
 .../linkis-contextservice-search-06.png            | Bin 0 -> 29923 bytes
 .../linkis-contextservice-search-07.png            | Bin 0 -> 30013 bytes
 .../linkis-contextservice-service-01.png           | Bin 0 -> 56235 bytes
 .../linkis-contextservice-service-02.png           | Bin 0 -> 73463 bytes
 .../linkis-contextservice-service-03.png           | Bin 0 -> 23477 bytes
 .../linkis-contextservice-service-04.png           | Bin 0 -> 27387 bytes
 .../en_US/Images/Architecture/bml-02.png           | Bin 0 -> 55227 bytes
 .../Architecture/linkis-engineConnPlugin-01.png    | Bin 0 -> 21864 bytes
 .../en_US/Images/Architecture/linkis-intro-01.png  | Bin 0 -> 413878 bytes
 .../en_US/Images/Architecture/linkis-intro-02.png  | Bin 0 -> 355186 bytes
 .../Architecture/linkis-microservice-gov-01.png    | Bin 0 -> 109909 bytes
 .../Architecture/linkis-microservice-gov-03.png    | Bin 0 -> 83457 bytes
 .../Architecture/linkis-publicService-01.png       | Bin 0 -> 62443 bytes
 .../en_US/Images/EngineUsage/hive-config.png       | Bin 0 -> 86864 bytes
 .../en_US/Images/EngineUsage/hive-run.png          | Bin 0 -> 94294 bytes
 .../en_US/Images/EngineUsage/jdbc-conf.png         | Bin 0 -> 91609 bytes
 .../en_US/Images/EngineUsage/jdbc-run.png          | Bin 0 -> 56438 bytes
 .../en_US/Images/EngineUsage/pyspakr-run.png       | Bin 0 -> 124979 bytes
 .../en_US/Images/EngineUsage/python-config.png     | Bin 0 -> 92997 bytes
 .../en_US/Images/EngineUsage/python-run.png        | Bin 0 -> 89641 bytes
 .../en_US/Images/EngineUsage/queue-set.png         | Bin 0 -> 93935 bytes
 .../en_US/Images/EngineUsage/scala-run.png         | Bin 0 -> 125060 bytes
 .../en_US/Images/EngineUsage/shell-run.png         | Bin 0 -> 209553 bytes
 .../en_US/Images/EngineUsage/spark-conf.png        | Bin 0 -> 99930 bytes
 .../en_US/Images/EngineUsage/sparksql-run.png      | Bin 0 -> 121699 bytes
 .../en_US/Images/EngineUsage/workflow.png          | Bin 0 -> 151481 bytes
 .../en_US/Images/Linkis_1.0_architecture.png       | Bin 0 -> 316746 bytes
 .../Images/Tuning_and_Troubleshooting/Q&A.png      | Bin 0 -> 161638 bytes
 .../Tuning_and_Troubleshooting/code-fix-01.png     | Bin 0 -> 199523 bytes
 .../Tuning_and_Troubleshooting/db-config-01.png    | Bin 0 -> 391789 bytes
 .../Tuning_and_Troubleshooting/db-config-02.png    | Bin 0 -> 60334 bytes
 .../Images/Tuning_and_Troubleshooting/debug-01.png | Bin 0 -> 6168 bytes
 .../Images/Tuning_and_Troubleshooting/debug-02.png | Bin 0 -> 62496 bytes
 .../Images/Tuning_and_Troubleshooting/debug-03.png | Bin 0 -> 32875 bytes
 .../Images/Tuning_and_Troubleshooting/debug-04.png | Bin 0 -> 111758 bytes
 .../Images/Tuning_and_Troubleshooting/debug-05.png | Bin 0 -> 52040 bytes
 .../Images/Tuning_and_Troubleshooting/debug-06.png | Bin 0 -> 63668 bytes
 .../Images/Tuning_and_Troubleshooting/debug-07.png | Bin 0 -> 316176 bytes
 .../Images/Tuning_and_Troubleshooting/debug-08.png | Bin 0 -> 27722 bytes
 .../Tuning_and_Troubleshooting/hive-config-01.png  | Bin 0 -> 76327 bytes
 .../linkis-exception-01.png                        | Bin 0 -> 1199628 bytes
 .../linkis-exception-02.png                        | Bin 0 -> 1366293 bytes
 .../linkis-exception-03.png                        | Bin 0 -> 646836 bytes
 .../linkis-exception-04.png                        | Bin 0 -> 2965676 bytes
 .../linkis-exception-05.png                        | Bin 0 -> 454949 bytes
 .../linkis-exception-06.png                        | Bin 0 -> 869492 bytes
 .../linkis-exception-07.png                        | Bin 0 -> 2249882 bytes
 .../linkis-exception-08.png                        | Bin 0 -> 1191728 bytes
 .../linkis-exception-09.png                        | Bin 0 -> 1008341 bytes
 .../linkis-exception-10.png                        | Bin 0 -> 322110 bytes
 .../Tuning_and_Troubleshooting/page-show-01.png    | Bin 0 -> 115010 bytes
 .../Tuning_and_Troubleshooting/page-show-02.png    | Bin 0 -> 576911 bytes
 .../Tuning_and_Troubleshooting/page-show-03.png    | Bin 0 -> 654609 bytes
 .../searching_keywords.png                         | Bin 0 -> 102094 bytes
 .../Tuning_and_Troubleshooting/shell-error-01.png  | Bin 0 -> 74682 bytes
 .../Tuning_and_Troubleshooting/shell-error-02.png  | Bin 0 -> 330735 bytes
 .../Tuning_and_Troubleshooting/shell-error-03.png  | Bin 0 -> 1624375 bytes
 .../Tuning_and_Troubleshooting/shell-error-04.png  | Bin 0 -> 803920 bytes
 .../Tuning_and_Troubleshooting/shell-error-05.png  | Bin 0 -> 179543 bytes
 .../Tunning_And_Troubleshooting/debug-01.png       | Bin 0 -> 6168 bytes
 .../Tunning_And_Troubleshooting/debug-02.png       | Bin 0 -> 62496 bytes
 .../Tunning_And_Troubleshooting/debug-03.png       | Bin 0 -> 32875 bytes
 .../Tunning_And_Troubleshooting/debug-04.png       | Bin 0 -> 111758 bytes
 .../Tunning_And_Troubleshooting/debug-05.png       | Bin 0 -> 52040 bytes
 .../Tunning_And_Troubleshooting/debug-06.png       | Bin 0 -> 63668 bytes
 .../Tunning_And_Troubleshooting/debug-07.png       | Bin 0 -> 316176 bytes
 .../Tunning_And_Troubleshooting/debug-08.png       | Bin 0 -> 27722 bytes
 .../deployment/Linkis1.0_combined_eureka.png       | Bin 0 -> 134418 bytes
 .../en_US/Images/wedatasphere_contact_01.png       | Bin 0 -> 217762 bytes
 .../en_US/Images/wedatasphere_stack_Linkis.png     | Bin 0 -> 203466 bytes
 .../Tuning_and_Troubleshooting/Configuration.md    | 217 +++++++++++
 .../en_US/Tuning_and_Troubleshooting/Q&A.md        | 255 +++++++++++++
 .../en_US/Tuning_and_Troubleshooting/README.md     |  98 +++++
 .../en_US/Tuning_and_Troubleshooting/Tuning.md     |  61 ++++
 .../Linkis_Upgrade_from_0.x_to_1.0_guide.md        |  73 ++++
 .../en_US/Upgrade_Documents/README.md              |   5 +
 .../en_US/User_Manual/How_To_Use_Linkis.md         |  29 ++
 .../en_US/User_Manual/Linkis1.0_User_Manual.md     | 400 +++++++++++++++++++++
 .../en_US/User_Manual/LinkisCli_Usage_document.md  | 191 ++++++++++
 .../User_Manual/Linkis_Console_User_Manual.md      | 120 +++++++
 Linkis-Doc-master/en_US/User_Manual/README.md      |   8 +
 ...\350\241\214RestAPI\346\226\207\346\241\243.md" | 171 +++++++++
 .../zh_CN/API_Documentations/Login_API.md          | 131 +++++++
 .../zh_CN/API_Documentations/README.md             |   8 +
 ...350\241\214JDBC_API\346\226\207\346\241\243.md" |  46 +++
 .../Commons/messagescheduler.md                    |  15 +
 .../zh_CN/Architecture_Documents/Commons/rpc.md    |  17 +
 .../EngineConn/README.md                           |  98 +++++
 .../ECM\346\236\266\346\236\204\345\233\276.png"   | Bin 0 -> 34340 bytes
 ...57\267\346\261\202\346\265\201\347\250\213.png" | Bin 0 -> 25340 bytes
 .../EngineConnManager/README.md                    |  49 +++
 .../EngineConnPlugin/README.md                     |  71 ++++
 .../Entrance/Entrance.md                           |  26 ++
 .../LinkisClient/README.md                         |  35 ++
 .../LinkisManager/AppManager.md                    |  45 +++
 .../LinkisManager/LabelManager.md                  |  40 +++
 .../LinkisManager/README.md                        |  74 ++++
 .../LinkisManager/ResourceManager.md               | 145 ++++++++
 .../Computation_Governance_Services/README.md      |  66 ++++
 ...226\260\345\242\236\346\265\201\347\250\213.md" | 111 ++++++
 ...211\247\350\241\214\346\265\201\347\250\213.md" | 165 +++++++++
 ...214\272\345\210\253\347\256\200\350\277\260.md" |  98 +++++
 .../Microservice_Governance_Services/Gateway.md    |  30 ++
 .../Microservice_Governance_Services/README.md     |  23 ++
 .../Computation_Orchestrator_architecture.md       |  18 +
 ...16\245\345\217\243\345\222\214\347\261\273.png" | Bin 0 -> 27266 bytes
 ...72\244\344\272\222\346\265\201\347\250\213.png" | Bin 0 -> 30134 bytes
 ...16\245\345\217\243\345\222\214\347\261\273.png" | Bin 0 -> 162100 bytes
 .../Orchestrator/Orchestrator_CheckRuler.md        |  27 ++
 .../Orchestrator/Orchestrator_ECMP_architecture.md |  32 ++
 .../Orchestrator_Execution_architecture_doc.md     |  19 +
 .../Orchestrator_Operation_architecture_doc.md     |  26 ++
 .../Orchestrator_Reheater_architecture.md          |  12 +
 .../Orchestrator_Transform_architecture.md         |  12 +
 .../Orchestrator/Orchestrator_architecture_doc.md  | 113 ++++++
 .../Architecture_Documents/Orchestrator/README.md  |  55 +++
 .../Public_Enhancement_Services/BML.md             |  94 +++++
 .../ContextService/ContextService_Cache.md         |  95 +++++
 .../ContextService/ContextService_Client.md        |  61 ++++
 .../ContextService/ContextService_HighAvailable.md |  86 +++++
 .../ContextService/ContextService_Listener.md      |  33 ++
 .../ContextService/ContextService_Persistence.md   |   8 +
 .../ContextService/ContextService_Search.md        | 127 +++++++
 .../ContextService/ContextService_Service.md       |  55 +++
 .../ContextService/README.md                       | 124 +++++++
 .../Public_Enhancement_Services/DataSource.md      |   1 +
 .../Public_Enhancement_Services/PublicService.md   |  31 ++
 .../Public_Enhancement_Services/README.md          |  91 +++++
 .../zh_CN/Architecture_Documents/README.md         |  24 ++
 .../Deployment_Documents/Cluster_Deployment.md     | 100 ++++++
 ...256\211\350\243\205\346\226\207\346\241\243.md" | 106 ++++++
 ...75\262\345\276\256\346\234\215\345\212\241.png" | Bin 0 -> 130148 bytes
 .../Installation_Hierarchical_Structure.md         | 186 ++++++++++
 .../zh_CN/Deployment_Documents/README.md           |   1 +
 ...256\211\350\243\205\346\226\207\346\241\243.md" | 110 ++++++
 ...51\200\237\351\203\250\347\275\262Linkis1.0.md" | 256 +++++++++++++
 .../zh_CN/Development_Documents/Contributing.md    | 206 +++++++++++
 .../zh_CN/Development_Documents/DEBUG_LINKIS.md    | 113 ++++++
 .../Development_Specification/API.md               |  72 ++++
 .../Development_Specification/Concurrent.md        |   9 +
 .../Development_Specification/Exception_Catch.md   |   9 +
 .../Development_Specification/Exception_Throws.md  |  30 ++
 .../Development_Specification/Log.md               |  13 +
 .../Development_Specification/Path_Usage.md        |   8 +
 .../Development_Specification/README.md            |  12 +
 ...274\226\350\257\221\346\226\207\346\241\243.md" | 160 +++++++++
 .../New_EngineConn_Development.md                  |  79 ++++
 .../zh_CN/Development_Documents/README.md          |   1 +
 .../zh_CN/Development_Documents/Web/Build.md       |  84 +++++
 .../zh_CN/Development_MEETUP/Phase_One/README.md   |  56 +++
 .../zh_CN/Development_MEETUP/Phase_One/chapter1.md |   1 +
 .../zh_CN/Development_MEETUP/Phase_One/chapter2.md |   1 +
 .../Development_MEETUP/Phase_Two/Images/Q&A.png    | Bin 0 -> 161638 bytes
 .../Development_MEETUP/Phase_Two/Images/issue.png  | Bin 0 -> 102094 bytes
 .../Phase_Two/Images/\345\217\214\346\264\273.png" | Bin 0 -> 130148 bytes
 .../Images2/0ca28635de253f245743fbf0a7cfe165.png   | Bin 0 -> 98316 bytes
 .../Images2/146a58addcacbc560a33604b00636dee.png   | Bin 0 -> 44890 bytes
 .../Images2/1730acb1c4ff58a055fa71324e5c7f2c.png   | Bin 0 -> 95491 bytes
 .../Images2/1d31b398318acbd862f20ac05decbce9.png   | Bin 0 -> 7741 bytes
 .../Images2/1d8f043dae5afdf07371ad31b06bad6e.png   | Bin 0 -> 74243 bytes
 .../Images2/232983a712a949196159f0aeab7de7f5.png   | Bin 0 -> 150575 bytes
 .../Images2/2767bac623d10bf45033cf9fdd8d197f.png   | Bin 0 -> 120905 bytes
 .../Images2/335dabbf46b5af11e494cdd1be2c32a1.png   | Bin 0 -> 118394 bytes
 .../Images2/491e9a0fbd5b0121f228e0f7938cf168.png   | Bin 0 -> 120419 bytes
 .../Images2/781914abed8ec4955cac520eb0a1be7e.png   | Bin 0 -> 770399 bytes
 .../Images2/7b8685204636771776605bab99b08e8f.png   | Bin 0 -> 82550 bytes
 .../Images2/7cbe7cd81ce2212883741dd9b62dad18.png   | Bin 0 -> 36588 bytes
 .../Images2/8576fe8054c072a7fee53d98eeefa004.png   | Bin 0 -> 39623 bytes
 .../Images2/87ef54ccaa6b96abc30e612636bb2e90.png   | Bin 0 -> 103943 bytes
 .../Images2/9693ded0c6a9c32cb1ff33713e5d3864.png   | Bin 0 -> 54885 bytes
 .../Images2/9c254ec33125eb0ab50a6bcc0e95a18a.png   | Bin 0 -> 145675 bytes
 .../Images2/a0fb7e3474dff5c22fb3c230f73fa6f6.png   | Bin 0 -> 55052 bytes
 .../Images2/b68f441d7ac6b4814c048d35cebbb25d.png   | Bin 0 -> 117177 bytes
 .../Images2/b7feb36a0322b002f9f85f0a8003dcc1.png   | Bin 0 -> 169905 bytes
 .../Images2/ba90e28a78375103c4890cd448818ab3.png   | Bin 0 -> 132653 bytes
 .../Images2/c3f5ac1723ba9823084f529f5384440d.png   | Bin 0 -> 21078 bytes
 .../Images2/cd3ea323b238158c8a3de8acc8ec0a3f.png   | Bin 0 -> 20051 bytes
 .../Images2/d0fe37b4aa34b0cea9e87247b7b17943.png   | Bin 0 -> 115496 bytes
 .../Images2/d1b4759745056add53a32a76d3699109.png   | Bin 0 -> 23378 bytes
 .../Images2/d9bab9306cc28ecdf8d3679ecfc224d4.png   | Bin 0 -> 97351 bytes
 .../Images2/da0cf9cb7b27dac266435b5f6ad1cd82.png   | Bin 0 -> 45877 bytes
 .../Images2/de301f8f21c1735c5e018188d685ad74.png   | Bin 0 -> 53369 bytes
 .../Images2/e7e2a98ce1f03d228c7c2d782b076d53.png   | Bin 0 -> 81483 bytes
 .../Images2/f395c9cc338d85e258485658290bf365.png   | Bin 0 -> 43688 bytes
 .../Images2/f6fa083cab060a5adc9d483b37d040f5.png   | Bin 0 -> 60331 bytes
 .../Images2/fb952c266ce9a8db9b9036a602e222a7.png   | Bin 0 -> 131953 bytes
 .../zh_CN/Development_MEETUP/Phase_Two/README.md   |  58 +++
 .../zh_CN/Development_MEETUP/Phase_Two/chapter1.md | 371 +++++++++++++++++++
 .../zh_CN/Development_MEETUP/Phase_Two/chapter2.md | 251 +++++++++++++
 .../zh_CN/Development_MEETUP/README.md             |   1 +
 .../ElasticSearch_User_Manual.md                   |   1 +
 .../Hive_User_Manual.md                            |  81 +++++
 .../JDBC_User_Manual.md                            |  53 +++
 .../MLSQL_User_Manual.md                           |   1 +
 .../Presto_User_Manual.md                          |   1 +
 .../Python_User_Manual.md                          |  61 ++++
 .../zh_CN/Engine_Usage_Documentations/README.md    |  25 ++
 .../Shell_User_Manual.md                           |  57 +++
 .../Spark_User_Manual.md                           |  91 +++++
 .../zh_CN/Images/Architecture/AppManager-02.png    | Bin 0 -> 701283 bytes
 .../zh_CN/Images/Architecture/AppManager-03.png    | Bin 0 -> 69489 bytes
 .../Commons/linkis-message-scheduler.png           | Bin 0 -> 26987 bytes
 .../Images/Architecture/Commons/linkis-rpc.png     | Bin 0 -> 23403 bytes
 .../Architecture/EngineConn/engineconn-01.png      | Bin 0 -> 157753 bytes
 .../EngineConnPlugin/engine_conn_plugin_cycle.png  | Bin 0 -> 49326 bytes
 .../EngineConnPlugin/engine_conn_plugin_global.png | Bin 0 -> 32292 bytes
 .../EngineConnPlugin/engine_conn_plugin_load.png   | Bin 0 -> 74821 bytes
 ...26\260\345\242\236\346\265\201\347\250\213.png" | Bin 0 -> 59893 bytes
 .../Gateway/gateway_server_dispatcher.png          | Bin 0 -> 83743 bytes
 .../Architecture/Gateway/gateway_server_global.png | Bin 0 -> 85272 bytes
 .../Architecture/Gateway/gatway_websocket.png      | Bin 0 -> 37769 bytes
 .../Physical\346\240\221.png"                      | Bin 0 -> 79471 bytes
 ...56\265\346\265\201\347\250\213\345\233\276.png" | Bin 0 -> 31078 bytes
 ...56\265\346\265\201\347\250\213\345\233\276.png" | Bin 0 -> 12946 bytes
 ...16\267\345\217\226\346\265\201\347\250\213.png" | Bin 0 -> 41007 bytes
 ...16\222\346\265\201\347\250\213\345\233\276.png" | Bin 0 -> 31095 bytes
 ...75\223\346\265\201\347\250\213\345\233\276.png" | Bin 0 -> 231192 bytes
 .../LabelManager/label_manager_builder.png         | Bin 0 -> 62978 bytes
 .../LabelManager/label_manager_global.png          | Bin 0 -> 14988 bytes
 .../LabelManager/label_manager_scorer.png          | Bin 0 -> 72977 bytes
 .../Images/Architecture/Linkis1.0-architecture.png | Bin 0 -> 221751 bytes
 .../Architecture/LinkisManager/AppManager-01.png   | Bin 0 -> 69489 bytes
 .../Architecture/LinkisManager/LabelManager-01.png | Bin 0 -> 39221 bytes
 .../LinkisManager/LinkisManager-01.png             | Bin 0 -> 183082 bytes
 .../LinkisManager/ResourceManager-01.png           | Bin 0 -> 71086 bytes
 ...cement\346\236\266\346\236\204\345\233\276.png" | Bin 0 -> 47158 bytes
 .../ContextService/linkis-contextservice-01.png    | Bin 0 -> 22692 bytes
 .../ContextService/linkis-contextservice-02.png    | Bin 0 -> 10655 bytes
 .../linkis-contextservice-cache-01.png             | Bin 0 -> 11881 bytes
 .../linkis-contextservice-cache-02.png             | Bin 0 -> 23902 bytes
 .../linkis-contextservice-cache-03.png             | Bin 0 -> 109334 bytes
 .../linkis-contextservice-cache-04.png             | Bin 0 -> 36161 bytes
 .../linkis-contextservice-cache-05.png             | Bin 0 -> 2265 bytes
 .../linkis-contextservice-client-01.png            | Bin 0 -> 54438 bytes
 .../linkis-contextservice-client-02.png            | Bin 0 -> 93036 bytes
 .../linkis-contextservice-client-03.png            | Bin 0 -> 34839 bytes
 .../ContextService/linkis-contextservice-ha-01.png | Bin 0 -> 38439 bytes
 .../ContextService/linkis-contextservice-ha-02.png | Bin 0 -> 21982 bytes
 .../ContextService/linkis-contextservice-ha-03.png | Bin 0 -> 91788 bytes
 .../ContextService/linkis-contextservice-ha-04.png | Bin 0 -> 40733 bytes
 .../linkis-contextservice-listener-01.png          | Bin 0 -> 24414 bytes
 .../linkis-contextservice-listener-02.png          | Bin 0 -> 46152 bytes
 .../linkis-contextservice-listener-03.png          | Bin 0 -> 32597 bytes
 .../linkis-contextservice-persistence-01.png       | Bin 0 -> 198797 bytes
 .../linkis-contextservice-search-01.png            | Bin 0 -> 33731 bytes
 .../linkis-contextservice-search-02.png            | Bin 0 -> 26768 bytes
 .../linkis-contextservice-search-03.png            | Bin 0 -> 33312 bytes
 .../linkis-contextservice-search-04.png            | Bin 0 -> 25192 bytes
 .../linkis-contextservice-search-05.png            | Bin 0 -> 24757 bytes
 .../linkis-contextservice-search-06.png            | Bin 0 -> 29923 bytes
 .../linkis-contextservice-search-07.png            | Bin 0 -> 30013 bytes
 .../linkis-contextservice-service-01.png           | Bin 0 -> 56235 bytes
 .../linkis-contextservice-service-02.png           | Bin 0 -> 73463 bytes
 .../linkis-contextservice-service-03.png           | Bin 0 -> 23477 bytes
 .../linkis-contextservice-service-04.png           | Bin 0 -> 27387 bytes
 .../zh_CN/Images/Architecture/bml-01.png           | Bin 0 -> 78801 bytes
 .../zh_CN/Images/Architecture/bml-02.png           | Bin 0 -> 55227 bytes
 .../zh_CN/Images/Architecture/linkis-client-01.png | Bin 0 -> 88633 bytes
 .../Architecture/linkis-computation-gov-01.png     | Bin 0 -> 89527 bytes
 .../Architecture/linkis-computation-gov-02.png     | Bin 0 -> 179368 bytes
 .../Architecture/linkis-engineConnPlugin-01.png    | Bin 0 -> 21864 bytes
 .../Images/Architecture/linkis-entrance-01.png     | Bin 0 -> 33102 bytes
 .../zh_CN/Images/Architecture/linkis-intro-01.jpg  | Bin 0 -> 341150 bytes
 .../zh_CN/Images/Architecture/linkis-intro-02.jpg  | Bin 0 -> 289769 bytes
 .../Architecture/linkis-microservice-gov-01.png    | Bin 0 -> 89404 bytes
 .../Architecture/linkis-microservice-gov-03.png    | Bin 0 -> 60074 bytes
 .../linkis-computation-orchestrator-01.png         | Bin 0 -> 53527 bytes
 .../linkis-computation-orchestrator-02.png         | Bin 0 -> 77543 bytes
 .../orchestrator/execution/execution.png           | Bin 0 -> 29487 bytes
 .../orchestrator/execution/execution01.png         | Bin 0 -> 55090 bytes
 .../linkis_orchestrator_architecture.png           | Bin 0 -> 51935 bytes
 .../orchestrator/operation/operation_class.png     | Bin 0 -> 36916 bytes
 .../orchestrator/overall/Orchestrator01.png        | Bin 0 -> 38900 bytes
 .../orchestrator/overall/Orchestrator_Logical.png  | Bin 0 -> 46510 bytes
 .../orchestrator/overall/Orchestrator_Physical.png | Bin 0 -> 52228 bytes
 .../orchestrator/overall/Orchestrator_arc.png      | Bin 0 -> 32345 bytes
 .../orchestrator/overall/Orchestrator_ast.png      | Bin 0 -> 24733 bytes
 .../orchestrator/overall/Orchestrator_cache.png    | Bin 0 -> 96643 bytes
 .../orchestrator/overall/Orchestrator_command.png  | Bin 0 -> 29349 bytes
 .../overall/Orchestrator_computation.png           | Bin 0 -> 64070 bytes
 .../orchestrator/overall/Orchestrator_progress.png | Bin 0 -> 92726 bytes
 .../orchestrator/overall/Orchestrator_reheat.png   | Bin 0 -> 82286 bytes
 .../overall/Orchestrator_transication.png          | Bin 0 -> 63174 bytes
 .../orchestrator/overall/orchestrator_entity.png   | Bin 0 -> 29307 bytes
 .../reheater/linkis-orchestrator-reheater-01.png   | Bin 0 -> 22631 bytes
 .../transform/linkis-orchestrator-transform-01.png | Bin 0 -> 21241 bytes
 .../zh_CN/Images/Architecture/rm-01.png            | Bin 0 -> 183082 bytes
 .../zh_CN/Images/Architecture/rm-02.png            | Bin 0 -> 71086 bytes
 .../zh_CN/Images/Architecture/rm-03.png            | Bin 0 -> 52466 bytes
 .../zh_CN/Images/Architecture/rm-04.png            | Bin 0 -> 36324 bytes
 .../zh_CN/Images/Architecture/rm-05.png            | Bin 0 -> 34066 bytes
 .../zh_CN/Images/Architecture/rm-06.png            | Bin 0 -> 44105 bytes
 .../zh_CN/Images/EngineUsage/hive-config.png       | Bin 0 -> 127024 bytes
 .../zh_CN/Images/EngineUsage/hive-run.png          | Bin 0 -> 94294 bytes
 .../zh_CN/Images/EngineUsage/jdbc-conf.png         | Bin 0 -> 128381 bytes
 .../zh_CN/Images/EngineUsage/jdbc-run.png          | Bin 0 -> 56438 bytes
 .../zh_CN/Images/EngineUsage/pyspakr-run.png       | Bin 0 -> 124979 bytes
 .../zh_CN/Images/EngineUsage/python-config.png     | Bin 0 -> 129842 bytes
 .../zh_CN/Images/EngineUsage/python-run.png        | Bin 0 -> 89641 bytes
 .../zh_CN/Images/EngineUsage/queue-set.png         | Bin 0 -> 115340 bytes
 .../zh_CN/Images/EngineUsage/scala-run.png         | Bin 0 -> 125060 bytes
 .../zh_CN/Images/EngineUsage/shell-run.png         | Bin 0 -> 209553 bytes
 .../zh_CN/Images/EngineUsage/spark-conf.png        | Bin 0 -> 178501 bytes
 .../zh_CN/Images/EngineUsage/sparksql-run.png      | Bin 0 -> 121699 bytes
 .../zh_CN/Images/EngineUsage/workflow.png          | Bin 0 -> 151481 bytes
 .../zh_CN/Images/Introduction/introduction.png     | Bin 0 -> 90686 bytes
 .../Images/Tuning_and_Troubleshooting/Q&A.png      | Bin 0 -> 161638 bytes
 .../Tuning_and_Troubleshooting/code-fix-01.png     | Bin 0 -> 199523 bytes
 .../Tuning_and_Troubleshooting/db-config-01.png    | Bin 0 -> 391789 bytes
 .../Tuning_and_Troubleshooting/db-config-02.png    | Bin 0 -> 60334 bytes
 .../Images/Tuning_and_Troubleshooting/debug-01.png | Bin 0 -> 6168 bytes
 .../Images/Tuning_and_Troubleshooting/debug-02.png | Bin 0 -> 62496 bytes
 .../Images/Tuning_and_Troubleshooting/debug-03.png | Bin 0 -> 32875 bytes
 .../Images/Tuning_and_Troubleshooting/debug-04.png | Bin 0 -> 111758 bytes
 .../Images/Tuning_and_Troubleshooting/debug-05.png | Bin 0 -> 52040 bytes
 .../Images/Tuning_and_Troubleshooting/debug-06.png | Bin 0 -> 63668 bytes
 .../Images/Tuning_and_Troubleshooting/debug-07.png | Bin 0 -> 316176 bytes
 .../Images/Tuning_and_Troubleshooting/debug-08.png | Bin 0 -> 27722 bytes
 .../Tuning_and_Troubleshooting/hive-config-01.png  | Bin 0 -> 76327 bytes
 .../linkis-exception-01.png                        | Bin 0 -> 1199628 bytes
 .../linkis-exception-02.png                        | Bin 0 -> 1366293 bytes
 .../linkis-exception-03.png                        | Bin 0 -> 646836 bytes
 .../linkis-exception-04.png                        | Bin 0 -> 2965676 bytes
 .../linkis-exception-05.png                        | Bin 0 -> 454949 bytes
 .../linkis-exception-06.png                        | Bin 0 -> 869492 bytes
 .../linkis-exception-07.png                        | Bin 0 -> 2249882 bytes
 .../linkis-exception-08.png                        | Bin 0 -> 1191728 bytes
 .../linkis-exception-09.png                        | Bin 0 -> 1008341 bytes
 .../linkis-exception-10.png                        | Bin 0 -> 322110 bytes
 .../Tuning_and_Troubleshooting/page-show-01.png    | Bin 0 -> 115010 bytes
 .../Tuning_and_Troubleshooting/page-show-02.png    | Bin 0 -> 576911 bytes
 .../Tuning_and_Troubleshooting/page-show-03.png    | Bin 0 -> 654609 bytes
 .../searching_keywords.png                         | Bin 0 -> 102094 bytes
 .../Tuning_and_Troubleshooting/shell-error-01.png  | Bin 0 -> 74682 bytes
 .../Tuning_and_Troubleshooting/shell-error-02.png  | Bin 0 -> 330735 bytes
 .../Tuning_and_Troubleshooting/shell-error-03.png  | Bin 0 -> 1624375 bytes
 .../Tuning_and_Troubleshooting/shell-error-04.png  | Bin 0 -> 803920 bytes
 .../Tuning_and_Troubleshooting/shell-error-05.png  | Bin 0 -> 179543 bytes
 Linkis-Doc-master/zh_CN/Images/after_linkis_cn.png | Bin 0 -> 645519 bytes
 .../zh_CN/Images/before_linkis_cn.png              | Bin 0 -> 332201 bytes
 .../deployment/Linkis1.0_combined_eureka.png       | Bin 0 -> 134418 bytes
 Linkis-Doc-master/zh_CN/README.md                  |  87 +++++
 Linkis-Doc-master/zh_CN/SUMMARY.md                 |  69 ++++
 .../Tuning_and_Troubleshooting/Configuration.md    | 220 ++++++++++++
 .../zh_CN/Tuning_and_Troubleshooting/Q&A.md        | 257 +++++++++++++
 .../zh_CN/Tuning_and_Troubleshooting/README.md     | 112 ++++++
 .../zh_CN/Tuning_and_Troubleshooting/Tuning.md     |  50 +++
 ...\247\345\210\2601.0\346\214\207\345\215\227.md" |  73 ++++
 .../zh_CN/Upgrade_Documents/README.md              |   6 +
 .../zh_CN/User_Manual/How_To_Use_Linkis.md         |  20 ++
 ...74\225\346\223\216\344\277\241\346\201\257.png" | Bin 0 -> 89529 bytes
 ...56\241\347\220\206\347\225\214\351\235\242.png" | Bin 0 -> 43765 bytes
 ...74\226\350\276\221\347\225\214\351\235\242.png" | Bin 0 -> 64470 bytes
 ...63\250\345\206\214\344\270\255\345\277\203.png" | Bin 0 -> 327966 bytes
 ...37\245\350\257\242\346\214\211\351\222\256.png" | Bin 0 -> 81788 bytes
 ...16\206\345\217\262\347\225\214\351\235\242.png" | Bin 0 -> 82340 bytes
 ...17\230\351\207\217\347\225\214\351\235\242.png" | Bin 0 -> 40073 bytes
 ...11\247\350\241\214\346\227\245\345\277\227.png" | Bin 0 -> 114314 bytes
 ...05\215\347\275\256\347\225\214\351\235\242.png" | Bin 0 -> 79698 bytes
 ...56\241\347\220\206\347\225\214\351\235\242.png" | Bin 0 -> 39198 bytes
 ...72\224\347\224\250\347\261\273\345\236\213.png" | Bin 0 -> 108864 bytes
 ...74\225\346\223\216\344\277\241\346\201\257.png" | Bin 0 -> 41814 bytes
 ...20\206\345\221\230\350\247\206\345\233\276.png" | Bin 0 -> 80087 bytes
 ...74\226\350\276\221\347\233\256\345\275\225.png" | Bin 0 -> 89919 bytes
 ...56\241\347\220\206\347\225\214\351\235\242.png" | Bin 0 -> 49277 bytes
 ...275\277\347\224\250\346\226\207\346\241\243.md" | 193 ++++++++++
 ...275\277\347\224\250\346\226\207\346\241\243.md" | 389 ++++++++++++++++++++
 .../User_Manual/Linkis_Console_User_Manual.md      | 120 +++++++
 Linkis-Doc-master/zh_CN/User_Manual/README.md      |   8 +
 src/App.vue                                        |   4 +-
 src/assets/image/incubator-logo.png                | Bin 0 -> 17961 bytes
 src/docs/deploy/distributed_en.md                  |  99 ++++-
 src/docs/deploy/distributed_zh.md                  | 101 +++++-
 src/docs/deploy/engins_en.md                       |  83 ++++-
 src/docs/deploy/engins_zh.md                       | 107 +++++-
 src/docs/deploy/linkis_en.md                       | 247 ++++++++++++-
 src/docs/deploy/structure_en.md                    | 199 +++++++++-
 src/docs/deploy/structure_zh.md                    | 187 +++++++++-
 src/docs/manual/CliManual_en.md                    | 191 ++++++++++
 src/docs/manual/CliManual_zh.md                    | 193 ++++++++++
 src/docs/manual/ConsoleUserManual_en.md            | 120 +++++++
 src/docs/manual/ConsoleUserManual_zh.md            | 120 +++++++
 src/docs/manual/HowToUse_en.md                     |  29 ++
 src/docs/manual/HowToUse_zh.md                     |  20 ++
 src/docs/manual/UserManual_en.md                   | 400 +++++++++++++++++++++
 src/docs/manual/UserManual_zh.md                   | 389 ++++++++++++++++++++
 src/pages/docs/index.vue                           | 121 ++++---
 src/pages/docs/manual/CliManual.vue                |  13 +
 src/pages/docs/manual/ConsoleUserManual.vue        |  13 +
 src/pages/docs/manual/HowToUse.vue                 |  13 +
 src/pages/docs/manual/UserManual.vue               |  13 +
 src/pages/faq.vue                                  |   4 -
 src/pages/faq/faq_en.md                            | 255 +++++++++++++
 src/pages/faq/faq_zh.md                            | 257 +++++++++++++
 src/pages/faq/index.vue                            |  46 +++
 src/router.js                                      |  31 +-
 498 files changed, 15723 insertions(+), 63 deletions(-)

diff --git a/Linkis-Doc-master/LANGS.md b/Linkis-Doc-master/LANGS.md
new file mode 100644
index 0000000..5f72105
--- /dev/null
+++ b/Linkis-Doc-master/LANGS.md
@@ -0,0 +1,2 @@
+* [English](en_US)
+* [中文](zh_CN)
\ No newline at end of file
diff --git a/Linkis-Doc-master/README.md b/Linkis-Doc-master/README.md
new file mode 100644
index 0000000..bc802e0
--- /dev/null
+++ b/Linkis-Doc-master/README.md
@@ -0,0 +1,114 @@
+Linkis
+==========
+
+[![License](https://img.shields.io/badge/license-Apache%202-4EB1BA.svg)](https://www.apache.org/licenses/LICENSE-2.0.html)
+
+[English](README.md) | [中文](README_CN.md)
+
+# Introduction
+
+ Linkis builds a layer of computation middleware between upper applications and underlying engines. By using standard interfaces such as REST/WS/JDBC provided by Linkis, the upper applications can easily access the underlying engines such as MySQL/Spark/Hive/Presto/Flink, etc., and achieve the intercommunication of user resources like unified variables, scripts, UDFs, functions and resource files at the same time.
+
+As a computation middleware, Linkis provides powerful connectivity, reuse, orchestration, expansion, and governance capabilities. By decoupling the application layer and the engine layer, it simplifies the complex network call relationships, reducing the overall complexity and saving development and maintenance costs as well.
+
+Since the first release of Linkis in 2019, it has accumulated more than **700** trial companies and **1000+** sandbox trial users, spanning diverse industries from finance, banking, and telecommunications to manufacturing and internet companies. Many companies already use Linkis as a unified entrance for the underlying computation and storage engines of their big data platforms.
+
+
+![linkis-intro-01](https://user-images.githubusercontent.com/11496700/84615498-c3030200-aefb-11ea-9b16-7e4058bf6026.png)
+
+![linkis-intro-03](https://user-images.githubusercontent.com/11496700/84615483-bb435d80-aefb-11ea-81b5-67f62b156628.png)
+
+# Features
+
+- **Support for diverse underlying computation storage engines**.  
+    Currently supported computation/storage engines: Spark, Hive, Python, Presto, ElasticSearch, MLSQL, TiSpark, JDBC, Shell, etc;      
+    Computation/storage engines to be supported: Flink, Impala, etc;      
+    Supported scripting languages: SparkSQL, HiveQL, Python, Shell, Pyspark, R, Scala and JDBC, etc.  
+  
+- **Powerful task/request governance capabilities**. With services such as Orchestrator, Label Manager and a customized Spring Cloud Gateway, Linkis is able to provide multi-level label-based, cross-cluster/cross-IDC fine-grained routing, load balancing, multi-tenancy, traffic control, resource control, and orchestration strategies like dual-active, active-standby, etc.  
+
+- **Full-stack computation/storage engine support**. As a computation middleware, Linkis receives, executes and manages tasks and requests for various computation and storage engines, including batch tasks, interactive query tasks, real-time streaming tasks and storage tasks;
+
+- **Resource management capabilities**.  ResourceManager is not only capable of managing resources for Yarn and Linkis EngineManager as in Linkis 0.X, but is also able to provide label-based multi-level resource allocation and recycling, giving it powerful resource management capabilities across multiple Yarn clusters and multiple computation resource types;
+
+- **Unified Context Service**. Generates a Context ID for each task/request, and associates and manages user and system resource files (JAR, ZIP, Properties, etc.), result sets, parameter variables, functions, etc. across users, systems, and computing engines. Set once, automatically referenced everywhere;
+
+- **Unified materials**. System and user-level unified material management, which can be shared and transferred across users and systems.
+
+# Supported engine types
+
+| **Engine** | **Supported Version** | **Linkis 0.X version requirement**| **Linkis 1.X version requirement** | **Description** |
+|:---- |:---- |:---- |:---- |:---- |
+|Flink |1.11.0|\>=dev-0.12.0, PR #703 not merged yet.|ongoing|Flink EngineConn. Supports FlinkSQL code, and also supports submitting a Flink Jar to Linkis Manager to start a new Yarn application.|
+|Impala|\>=3.2.0, CDH >=6.3.0|\>=dev-0.12.0, PR #703 not merged yet.|ongoing|Impala EngineConn. Supports Impala SQL.|
+|Presto|\>= 0.180|\>=0.11.0|ongoing|Presto EngineConn. Supports Presto SQL.|
+|ElasticSearch|\>=6.0|\>=0.11.0|ongoing|ElasticSearch EngineConn. Supports SQL and DSL code.|
+|Shell|Bash >=2.0|\>=0.9.3|\>=1.0.0_rc1|Shell EngineConn. Supports shell code.|
+|MLSQL|\>=1.1.0|\>=0.9.1|ongoing|MLSQL EngineConn. Supports MLSQL code.|
+|JDBC|MySQL >=5.0, Hive >=1.2.1|\>=0.9.0|\>=1.0.0_rc1|JDBC EngineConn. Supports MySQL and HiveQL code.|
+|Spark|Apache 2.0.0~2.4.7, CDH >=5.4.0|\>=0.5.0|\>=1.0.0_rc1|Spark EngineConn. Supports SQL, Scala, Pyspark and R code.|
+|Hive|Apache >=1.0.0, CDH >=5.4.0|\>=0.5.0|\>=1.0.0_rc1|Hive EngineConn. Supports HiveQL code.|
+|Hadoop|Apache >=2.6.0, CDH >=5.4.0|\>=0.5.0|ongoing|Hadoop EngineConn. Supports Hadoop MR/YARN application.|
+|Python|\>=2.6|\>=0.5.0|\>=1.0.0_rc1|Python EngineConn. Supports python code.|
+|TiSpark|1.1|\>=0.5.0|ongoing|TiSpark EngineConn. Support querying TiDB data by SparkSQL.|
+
+# Download
+
+Please go to the [Linkis releases page](https://github.com/WeBankFinTech/Linkis/wiki/Linkis-Releases) to download a compiled distribution or a source code package of Linkis.
+
+# Compile and deploy
+Please follow [Compile Guide](https://github.com/WeBankFinTech/Linkis-Doc/blob/master/zh_CN/Development_Documents/Linkis%E7%BC%96%E8%AF%91%E6%96%87%E6%A1%A3.md) to compile Linkis from source code.  
+Please refer to [Deployment_Documents](https://github.com/WeBankFinTech/Linkis-Doc/tree/master/zh_CN/Deployment_Documents) for deployment instructions. 
+
+# Examples and Guidance
+You can find examples and guidance on how to use and manage Linkis in [User_Manual](https://github.com/WeBankFinTech/Linkis-Doc/tree/master/zh_CN/User_Manual), [Engine_Usage_Documents](https://github.com/WeBankFinTech/Linkis-Doc/tree/master/zh_CN/Engine_Usage_Documentations) and [API_Documents](https://github.com/WeBankFinTech/Linkis-Doc/tree/master/zh_CN/API_Documentations).
+
+# Documentation
+
+The documentation of Linkis is available in [Linkis-Doc](https://github.com/WeBankFinTech/Linkis-Doc) or in the [wiki](https://github.com/WeBankFinTech/Linkis/wiki).
+
+# Architecture
+Linkis services can be divided into three categories: computation governance services, public enhancement services and microservice governance services.  
+- The computation governance services support the three major stages of processing a task/request: submission -> preparation -> execution;  
+- The public enhancement services include the material library service, context service, and data source service;  
+- The microservice governance services include Spring Cloud Gateway, Eureka and OpenFeign.
+
+Below is the Linkis architecture diagram. You can find more detailed architecture docs in [Linkis-Doc/Architecture](https://github.com/WeBankFinTech/Linkis-Doc/tree/master/zh_CN/Architecture_Documents).
+![architecture](en_US/Images/Linkis_1.0_architecture.png)
+
+Based on Linkis, the computation middleware, we have built many applications and tools on top of it in the big data platform suite [WeDataSphere](https://github.com/WeBankFinTech/WeDataSphere). Below are the currently available open-source projects.
+
+![wedatasphere_stack_Linkis](en_US/Images/wedatasphere_stack_Linkis.png)
+
+- [**DataSphere Studio** - Data Application Integration & Development Framework](https://github.com/WeBankFinTech/DataSphereStudio)
+
+- [**Scriptis** - Data Development IDE Tool](https://github.com/WeBankFinTech/Scriptis)
+
+- [**Visualis** - Data Visualization Tool](https://github.com/WeBankFinTech/Visualis)
+
+- [**Schedulis** - Workflow Task Scheduling Tool](https://github.com/WeBankFinTech/Schedulis)
+
+- [**Qualitis** - Data Quality Tool](https://github.com/WeBankFinTech/Qualitis)
+
+- [**MLLabis** - Machine Learning Notebook IDE](https://github.com/WeBankFinTech/prophecis)
+
+More projects are on the way; please stay tuned.
+
+# Contributing
+
+Contributions are always welcome; we need more contributors to build Linkis together, whether through code, documentation, or other support that helps the community.  
+For code and documentation contributions, please follow the [contribution guide](https://github.com/WeBankFinTech/Linkis/blob/master/Contributing_CN.md).
+
+# Contact Us
+
+For any questions or suggestions, please submit an issue.  
+You can scan the QR code below to join our WeChat and QQ groups for a more immediate response.
+
+![introduction05](en_US/Images/wedatasphere_contact_01.png)
+
+Meetup videos on [Bilibili](https://space.bilibili.com/598542776?from=search&seid=14344213924133040656).
+
+# Who is Using Linkis
+
+We opened [an issue](https://github.com/WeBankFinTech/Linkis/issues/23) for users to provide feedback and record who is using Linkis.  
+Since the first release of Linkis in 2019, it has accumulated more than **700** trial companies and **1000+** sandbox trial users, spanning diverse industries from finance, banking and telecommunications to manufacturing and Internet companies.
\ No newline at end of file
diff --git a/Linkis-Doc-master/README_CN.md b/Linkis-Doc-master/README_CN.md
new file mode 100644
index 0000000..e926d6e
--- /dev/null
+++ b/Linkis-Doc-master/README_CN.md
@@ -0,0 +1,105 @@
+Linkis
+============
+
+[![License](https://img.shields.io/badge/license-Apache%202-4EB1BA.svg)](https://www.apache.org/licenses/LICENSE-2.0.html)
+
+[English](README.md) | [中文](README_CN.md)
+
+# Introduction
+
+Linkis builds a layer of computation middleware between upper-level applications and underlying engines. Through standard interfaces such as REST/WebSocket/JDBC provided by Linkis, upper-level applications can easily connect to and access underlying engines such as MySQL/Spark/Hive/Presto/Flink, while enabling cross-application interoperability of user resources such as variables, scripts, functions and resource files.  
+As computation middleware, Linkis provides powerful capabilities for connectivity, reuse, orchestration, extension, and governance. By decoupling the application layer from the engine layer, it simplifies complex network call relationships, reduces the overall complexity, and saves overall development and maintenance costs.  
+Since Linkis was open-sourced in 2019, it has accumulated more than 700 trial companies and 1000+ sandbox trial users, covering industries such as finance, telecommunications, manufacturing and the Internet. Many companies have adopted Linkis as the unified entrance to the underlying computation and storage engines of their big data platforms, and as a powerful tool for governing computing requests/tasks.
+
+![Before Linkis](zh_CN/Images/before_linkis_cn.png)
+
+![After Linkis](zh_CN/Images/after_linkis_cn.png)
+
+# Core Features
+
+- **Rich support for underlying computation and storage engines**.  
+    **Currently supported computation/storage engines**: Spark, Hive, Python, Presto, ElasticSearch, MLSQL, TiSpark, JDBC, Shell, etc.  
+    **Computation/storage engines being supported**: Flink, Impala, etc.  
+    **Supported scripting languages**: SparkSQL, HiveQL, Python, Shell, Pyspark, R, Scala, JDBC, etc.    
+- **Powerful computation governance capabilities**. Based on services such as Orchestrator, Label Manager and the customized Spring Cloud Gateway, Linkis provides multi-level-label-based, cross-cluster/cross-IDC fine-grained routing, load balancing, multi-tenancy, traffic control, resource control, and orchestration strategies (such as dual-active and active-standby).  
+- **Full-stack computation/storage engine architecture support**. Capable of receiving, executing and managing tasks and requests for various computation and storage engines, including offline batch tasks, interactive query tasks, real-time streaming tasks and storage tasks;
+- **Resource management capabilities**. ResourceManager not only retains the capabilities of Linkis 0.X to manage the resources of Yarn and Linkis EngineManager, but also provides label-based multi-level resource allocation and recycling, giving it powerful resource management capabilities across clusters and across computation resource types.
+- **Unified context service**. Generates a context id for each computation task; manages user and system resource files (JAR, ZIP, Properties, etc.), result sets, parameter variables, functions, etc. across users, systems and computing engines in an associated way: set once, automatically referenced everywhere;
+- **Unified materials**. System- and user-level material management that can be shared and circulated across users and systems.
+
+# 支持的引擎类型
+
+| **Engine** | **Engine version** | **Linkis 0.X version requirement** | **Linkis 1.X version requirement** | **Description** |
+|:---- |:---- |:---- |:---- |:---- |
+|Flink |1.11.0|\>=dev-0.12.0, PR #703 not merged yet|ongoing|Flink EngineConn. Supports FlinkSQL code, and also supports launching a new Yarn application in the form of a Flink Jar.|
+|Impala|\>=3.2.0, CDH >=6.3.0|\>=dev-0.12.0, PR #703 not merged yet|ongoing|Impala EngineConn. Supports Impala SQL code.|
+|Presto|\>= 0.180|\>=0.11.0|ongoing|Presto EngineConn. Supports Presto SQL code.|
+|ElasticSearch|\>=6.0|\>=0.11.0|ongoing|ElasticSearch EngineConn. Supports SQL and DSL code.|
+|Shell|Bash >=2.0|\>=0.9.3|\>=1.0.0_rc1|Shell EngineConn. Supports Bash shell code.|
+|MLSQL|\>=1.1.0|\>=0.9.1|ongoing|MLSQL EngineConn. Supports MLSQL code.|
+|JDBC|MySQL >=5.0, Hive >=1.2.1|\>=0.9.0|\>=1.0.0_rc1|JDBC EngineConn. Already supports MySQL and HiveQL, and can be quickly extended to support other engines that provide a JDBC driver, such as Oracle.|
+|Spark|Apache 2.0.0~2.4.7, CDH >=5.4.0|\>=0.5.0|\>=1.0.0_rc1|Spark EngineConn. Supports SQL, Scala, Pyspark and R code.|
+|Hive|Apache >=1.0.0, CDH >=5.4.0|\>=0.5.0|\>=1.0.0_rc1|Hive EngineConn. Supports HiveQL code.|
+|Hadoop|Apache >=2.6.0, CDH >=5.4.0|\>=0.5.0|ongoing|Hadoop EngineConn. Supports Hadoop MR/YARN applications.|
+|Python|\>=2.6|\>=0.5.0|\>=1.0.0_rc1|Python EngineConn. Supports Python code.|
+|TiSpark|1.1|\>=0.5.0|ongoing|TiSpark EngineConn. Supports querying TiDB data with SparkSQL.|
+
+# Download
+
+Please go to the [Linkis releases page](https://github.com/WeBankFinTech/Linkis/wiki/Linkis-Releases) to download a compiled distribution or a source code package of Linkis.
+
+# Compile and Deploy
+Please follow the [Compile Guide](https://github.com/WeBankFinTech/Linkis-Doc/blob/master/zh_CN/Development_Documents/Linkis%E7%BC%96%E8%AF%91%E6%96%87%E6%A1%A3.md) to compile Linkis from source code.  
+Please refer to the [Deployment Documents](https://github.com/WeBankFinTech/Linkis-Doc/tree/master/zh_CN/Deployment_Documents) to do the deployment.
+
+# Examples and Guidance
+See the [User Manual](https://github.com/WeBankFinTech/Linkis-Doc/tree/master/zh_CN/User_Manual), the [Engine Usage Documents](https://github.com/WeBankFinTech/Linkis-Doc/tree/master/zh_CN/Engine_Usage_Documentations) and the [API Documents](https://github.com/WeBankFinTech/Linkis-Doc/tree/master/zh_CN/API_Documentations) for examples and guidance on how to use and manage Linkis.
+
+# Documentation
+
+The complete Linkis documentation is available in [Linkis-Doc](https://github.com/WeBankFinTech/Linkis-Doc) or in the [wiki](https://github.com/WeBankFinTech/Linkis/wiki).  
+
+# Architecture
+Linkis is developed based on a microservice architecture, and its services can be divided into three categories: computation governance services, public enhancement services and microservice governance services.  
+- The computation governance services support the three major stages of processing a task/request: submission -> preparation -> execution;
+- The public enhancement services include the context service, the material management service and the data source service;
+- The microservice governance services include the customized Spring Cloud Gateway, Eureka and Open Feign.
+
+Below is the Linkis architecture diagram. You can find more detailed architecture docs in [Linkis-Doc/Architecture](https://github.com/WeBankFinTech/Linkis-Doc/tree/master/zh_CN/Architecture_Documents).
+![architecture](en_US/Images/Linkis_1.0_architecture.png)
+
+Based on Linkis, the computation middleware, we have built many applications and tool systems in the big data platform suite [WeDataSphere](https://github.com/WeBankFinTech/WeDataSphere). Below are the currently available open-source projects.
+
+![wedatasphere_stack_Linkis](en_US/Images/wedatasphere_stack_Linkis.png)
+
+- [**DataSphere Studio** - Data Application Integration & Development Framework](https://github.com/WeBankFinTech/DataSphereStudio)
+
+- [**Scriptis** - Data Development IDE Tool](https://github.com/WeBankFinTech/Scriptis)
+
+- [**Visualis** - Data Visualization Tool](https://github.com/WeBankFinTech/Visualis)
+
+- [**Schedulis** - Workflow Task Scheduling Tool](https://github.com/WeBankFinTech/Schedulis)
+
+- [**Qualitis** - Data Quality Tool](https://github.com/WeBankFinTech/Qualitis)
+
+- [**MLLabis** - Containerized Machine Learning Notebook Development Environment](https://github.com/WeBankFinTech/prophecis)
+
+More projects are on the way, please stay tuned.
+
+# Contributing
+
+We warmly welcome and look forward to more contributors joining us to build Linkis, whether through code, documentation, or other forms of contribution that help the community.  
+For code and documentation contributions, please follow the [contribution guide](https://github.com/WeBankFinTech/Linkis/blob/master/Contributing_CN.md).
+
+# Contact Us
+
+For any questions or suggestions about Linkis, please submit an issue so that it can be tracked, handled, and shared as community experience.  
+You can also scan the QR code below to join our WeChat/QQ groups for a faster response.
+![introduction05](en_US/Images/wedatasphere_contact_01.png)
+
+Meetup videos on [Bilibili](https://space.bilibili.com/598542776?from=search&seid=14344213924133040656).
+
+# Who is Using Linkis
+
+We created [an issue](https://github.com/WeBankFinTech/Linkis/issues/23) for users to provide feedback and record who is using Linkis.  
+Since Linkis was open-sourced in 2019, it has accumulated more than 700 trial companies and 1000+ sandbox trial users, covering industries such as finance, telecommunications, manufacturing and the Internet.
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/API_Documentations/JDBC_API_Document.md b/Linkis-Doc-master/en_US/API_Documentations/JDBC_API_Document.md
new file mode 100644
index 0000000..72b3f3a
--- /dev/null
+++ b/Linkis-Doc-master/en_US/API_Documentations/JDBC_API_Document.md
@@ -0,0 +1,45 @@
+# Task Submission And Execution Of JDBC API Documents
+### 1. Introduce Dependent Modules
+The first way is to depend on the JDBC module in the pom:  
+```xml
+<dependency>
+    <groupId>com.webank.wedatasphere.linkis</groupId>
+    <artifactId>linkis-ujes-jdbc</artifactId>
+    <version>${linkis.version}</version>
+ </dependency>
+```  
+**Note:** The module has not been deployed to the central repository. You need to execute `mvn install -Dmaven.test.skip=true` in the ujes/jdbc directory for local installation.
+
+**The second way is through packaging and compilation:**
+1. Enter the ujes/jdbc directory of the Linkis project and run the packaging command in a terminal: `mvn assembly:assembly -Dmaven.test.skip=true`.
+This command skips running the unit tests and compiling the test code, and packages the dependencies required by the JDBC module into the Jar.  
+2. After packaging completes, two Jars are generated in the target directory of the JDBC module; the one with "dependencies" in its name is the Jar we need.  
+### 2. Create a Test Class
+Create a Java test class LinkisClientImplTestJ; the meaning of each step is explained in the comments:  
+```java
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.ResultSet;
+import java.sql.ResultSetMetaData;
+import java.sql.SQLException;
+import java.sql.Statement;
+
+public class LinkisClientImplTestJ {
+    public static void main(String[] args) throws SQLException, ClassNotFoundException {
+
+        //1. Load the driver class: com.webank.wedatasphere.linkis.ujes.jdbc.UJESSQLDriver
+        Class.forName("com.webank.wedatasphere.linkis.ujes.jdbc.UJESSQLDriver");
+
+        //2. Get a connection: jdbc:linkis://gatewayIP:gatewayPort
+        //   with the front-end account and password
+        Connection connection = DriverManager.getConnection("jdbc:linkis://127.0.0.1:9001", "username", "password");
+
+        //3. Create a statement and execute the query
+        Statement st = connection.createStatement();
+        ResultSet rs = st.executeQuery("show tables");
+
+        //4. Process the results returned by the database (using the ResultSet class)
+        while (rs.next()) {
+            ResultSetMetaData metaData = rs.getMetaData();
+            for (int i = 1; i <= metaData.getColumnCount(); i++) {
+                System.out.print(metaData.getColumnName(i) + ":" + metaData.getColumnTypeName(i) + ": " + rs.getObject(i) + "    ");
+            }
+            System.out.println();
+        }
+
+        //5. Close the resources
+        rs.close();
+        st.close();
+        connection.close();
+    }
+}
+```
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/API_Documentations/Linkis_task_submission_and_execution_RestAPI_document.md b/Linkis-Doc-master/en_US/API_Documentations/Linkis_task_submission_and_execution_RestAPI_document.md
new file mode 100644
index 0000000..a7fb568
--- /dev/null
+++ b/Linkis-Doc-master/en_US/API_Documentations/Linkis_task_submission_and_execution_RestAPI_document.md
@@ -0,0 +1,170 @@
+# Linkis Task submission and execution Rest API document
+
+- Responses from the Linkis Restful interfaces follow this standard format:
+
+```json
+{
+ "method": "",
+ "status": 0,
+ "message": "",
+ "data": {}
+}
+```
+
+**Convention**:
+
+ - method: Returns the requested Restful API URI, which is mainly used in WebSocket mode.
+ - status: the returned status information, where -1 means not logged in, 0 means success, 1 means error, 2 means validation failed, and 3 means no access to the interface.
+ - data: returns the specific data.
+ - message: the prompt message for the request. If status is not 0, message carries an error message, and data may contain a stack field with the specific stack information.
+ 
+For more information about the Linkis Restful interface specification, please refer to: [Linkis Restful Interface Specification](https://github.com/WeBankFinTech/Linkis-Doc/blob/master/zh_CN/Development_Documents/Development_Specification/API.md)
+
+### 1). Submit for execution
+
+- Interface `/api/rest_j/v1/entrance/execute`
+
+- Submission method `POST`
+
+```json
+{
+    "executeApplicationName": "hive", //Engine type
+    "requestApplicationName": "dss", //Client service type
+    "executionCode": "show tables",
+    "params": {"variable": {}, "configuration": {}},
+    "runType": "hql", //The type of script to run
+    "source": {"scriptPath":"file:///tmp/hadoop/1.hql"}
+}
+```
+
+- Interface `/api/rest_j/v1/entrance/submit`
+
+- Submission method `POST`
+
+```json
+{
+    "executionContent": {"code": "show tables", "runType": "sql"},
+    "params": {"variable": {}, "configuration": {}},
+    "source": {"scriptPath": "file:///mnt/bdp/hadoop/1.hql"},
+    "labels": {
+        "engineType": "spark-2.4.3",
+        "userCreator": "hadoop-IDE"
+    }
+}
+```
+
+
+- Response example
+
+```json
+{
+ "method": "/api/rest_j/v1/entrance/execute",
+ "status": 0,
+ "message": "Request executed successfully",
+ "data": {
+   "execID": "030418IDEhivebdpdwc010004:10087IDE_hadoop_21",
+   "taskID": "123"
+ }
+}
+```
+
+- execID is the unique execution ID generated for a task after the user submits it to Linkis. It is a String. The ID is only useful while the task is running, similar to the concept of a PID. An ExecID is composed as (requestApplicationName length)(executeAppName length)(Instance length)${requestApplicationName}${executeApplicationName}${entranceInstance information ip+port}${requestApplicationName}_${umUser}_${index}
+
+- taskID is the unique ID representing the task submitted by the user. It is generated by database auto-increment and is a Long
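+
+As a minimal sketch of calling the submit interface from Java (using the JDK 11 `java.net.http` client; the Gateway address and the session cookie value are assumptions, the cookie being whatever the login interface returned):
+
+```java
+import java.net.URI;
+import java.net.http.HttpClient;
+import java.net.http.HttpRequest;
+import java.net.http.HttpResponse;
+
+public class LinkisSubmitExample {
+    public static void main(String[] args) throws Exception {
+        // Payload taken from the submit interface above; the labels pick a Spark 2.4.3 engine.
+        String body = "{"
+                + "\"executionContent\": {\"code\": \"show tables\", \"runType\": \"sql\"},"
+                + "\"params\": {\"variable\": {}, \"configuration\": {}},"
+                + "\"labels\": {\"engineType\": \"spark-2.4.3\", \"userCreator\": \"hadoop-IDE\"}"
+                + "}";
+
+        String sessionCookie = "<cookie returned by the login interface>"; // assumption
+        HttpClient client = HttpClient.newHttpClient();
+        HttpRequest request = HttpRequest.newBuilder()
+                .uri(URI.create("http://127.0.0.1:9001/api/rest_j/v1/entrance/submit"))
+                .header("Content-Type", "application/json")
+                .header("Cookie", sessionCookie)
+                .POST(HttpRequest.BodyPublishers.ofString(body))
+                .build();
+
+        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
+        // On success, status is 0 and data.execID / data.taskID identify the task
+        // for the status, log and progress interfaces below.
+        System.out.println(response.body());
+    }
+}
+```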
+
+
+### 2). Get status
+
+- Interface `/api/rest_j/v1/entrance/${execID}/status`
+
+- Submission method `GET`
+
+- Response example
+
+```json
+{
+ "method": "/api/rest_j/v1/entrance/{execID}/status",
+ "status": 0,
+ "message": "Get status successful",
+ "data": {
+   "execID": "${execID}",
+   "status": "Running"
+ }
+}
+```
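+
+A minimal polling sketch against this interface, reusing the client and cookie setup from the submission example above. The crude string check stands in for real JSON parsing of data.status, and treating anything other than "Running" as terminal is a simplification:
+
+```java
+import java.net.URI;
+import java.net.http.HttpClient;
+import java.net.http.HttpRequest;
+import java.net.http.HttpResponse;
+
+public class LinkisStatusPoller {
+    // Polls /api/rest_j/v1/entrance/{execID}/status until the task leaves "Running".
+    static String waitForCompletion(HttpClient client, String cookie, String execID) throws Exception {
+        while (true) {
+            HttpRequest request = HttpRequest.newBuilder()
+                    .uri(URI.create("http://127.0.0.1:9001/api/rest_j/v1/entrance/" + execID + "/status"))
+                    .header("Cookie", cookie)
+                    .GET()
+                    .build();
+            String body = client.send(request, HttpResponse.BodyHandlers.ofString()).body();
+            if (!body.contains("\"Running\"")) {
+                return body; // assumed terminal: succeeded, failed, or an error response
+            }
+            Thread.sleep(2000); // poll every two seconds
+        }
+    }
+}
+```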
+
+### 3). Get logs
+
+- Interface `/api/rest_j/v1/entrance/${execID}/log?fromLine=${fromLine}&size=${size}`
+
+- Submission method `GET`
+
+- The request parameter fromLine is the line number to start reading from, and size is the number of log lines this request fetches
+
+- Response example; the returned fromLine must be passed as the fromLine parameter of the next request to this interface (see the sketch after the example)
+
+```json
+{
+  "method": "/api/rest_j/v1/entrance/${execID}/log",
+  "status": 0,
+  "message": "Return log information",
+  "data": {
+    "execID": "${execID}",
+  "log": ["error log","warn log","info log", "all log"],
+  "fromLine": 56
+  }
+}
+```
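+
+A sketch of the log-following loop this implies: feed the returned fromLine into the next request. The regex parsing is a stand-in for a real JSON library, and the bounded loop is a simplification (a real client would stop when the task completes):
+
+```java
+import java.net.URI;
+import java.net.http.HttpClient;
+import java.net.http.HttpRequest;
+import java.net.http.HttpResponse;
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
+
+public class LinkisLogFollower {
+    private static final Pattern FROM_LINE = Pattern.compile("\"fromLine\"\\s*:\\s*(\\d+)");
+
+    // Repeatedly fetches the log, resuming each request at the fromLine the
+    // previous response returned, as described above.
+    static void followLog(HttpClient client, String cookie, String execID) throws Exception {
+        int fromLine = 0;
+        for (int i = 0; i < 10; i++) { // bounded for the sketch
+            String url = "http://127.0.0.1:9001/api/rest_j/v1/entrance/" + execID
+                    + "/log?fromLine=" + fromLine + "&size=100";
+            HttpRequest request = HttpRequest.newBuilder().uri(URI.create(url))
+                    .header("Cookie", cookie).GET().build();
+            String body = client.send(request, HttpResponse.BodyHandlers.ofString()).body();
+            System.out.println(body);
+            Matcher m = FROM_LINE.matcher(body);
+            if (m.find()) {
+                fromLine = Integer.parseInt(m.group(1)); // next request resumes here
+            }
+            Thread.sleep(2000);
+        }
+    }
+}
+```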
+
+### 4). Get progress
+
+- Interface `/api/rest_j/v1/entrance/${execID}/progress`
+
+- Submission method `GET`
+
+- Response example
+
+```json
+{
+  "method": "/api/rest_j/v1/entrance/{execID}/progress",
+  "status": 0,
+  "message": "Return progress information",
+  "data": {
+    "execID": "${execID}",
+    "progress": 0.2,
+    "progressInfo": [
+        {
+        "id": "job-1",
+        "succeedTasks": 2,
+        "failedTasks": 0,
+        "runningTasks": 5,
+        "totalTasks": 10
+        },
+        {
+        "id": "job-2",
+        "succeedTasks": 5,
+        "failedTasks": 0,
+        "runningTasks": 5,
+        "totalTasks": 10
+        }
+    ]
+  }
+}
+```
+
+### 5). Kill task
+
+- Interface `/api/rest_j/v1/entrance/${execID}/kill`
+
+- Submission method `POST`
+
+```json
+{
+ "method": "/api/rest_j/v1/entrance/{execID}/kill",
+ "status": 0,
+ "message": "OK",
+ "data": {
+   "execID":"${execID}"
+  }
+}
+```
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/API_Documentations/Login_API.md b/Linkis-Doc-master/en_US/API_Documentations/Login_API.md
new file mode 100644
index 0000000..be7e504
--- /dev/null
+++ b/Linkis-Doc-master/en_US/API_Documentations/Login_API.md
@@ -0,0 +1,125 @@
+# Login Document
+## 1. Connecting With The LDAP Service
+
+Enter the /conf/linkis-spring-cloud-services/linkis-mg-gateway directory and execute the command:  
+```bash
+    vim linkis-server.properties
+```    
+
+Add LDAP related configuration:  
+```bash
+wds.linkis.ldap.proxy.url=ldap://127.0.0.1:389/ #LDAP service URL
+wds.linkis.ldap.proxy.baseDN=dc=webank,dc=com #Base DN of the LDAP service    
+```    
+
+## 2. How To Enable Test Mode For Login-Free Access
+
+Enter the /conf/linkis-spring-cloud-services/linkis-mg-gateway directory and execute the command:
+```bash
+    vim linkis-server.properties
+```
+    
+    
+Turn on test mode with the following parameters:
+```bash
+    wds.linkis.test.mode=true   # Enable test mode
+    wds.linkis.test.user=hadoop  # Specify which user to delegate all requests to in test mode
+```
+
+## 3. Login Interface Summary
+We provide the following login-related interfaces:
+ - [Login In](#1LoginIn)
+
+ - [Login Out](#2LoginOut)
+
+ - [Heart Beat](#3HeartBeat)
+ 
+
+## 4. Interface details
+
+- Responses from the Linkis Restful interfaces follow this standard format:
+
+```json
+{
+ "method": "",
+ "status": 0,
+ "message": "",
+ "data": {}
+}
+```
+
+**Protocol**:
+
+- method: Returns the requested Restful API URI, which is mainly used in WebSocket mode.
+- status: the returned status information, where -1 means not logged in, 0 means success, 1 means error, 2 means validation failed, and 3 means no access to the interface.
+- data: returns the specific data.
+- message: the prompt message for the request. If status is not 0, message carries an error message, and data may contain a stack field with the specific stack information.
+ 
+For more information about the Linkis Restful interface specification, please refer to: [Linkis Restful Interface Specification](https://github.com/WeBankFinTech/Linkis-Doc/blob/master/en_US/Development_Documents/Development_Specification/API.md)
+
+### 1). Login In
+
+- Interface `/api/rest_j/v1/user/login`
+
+- Submission method `POST`
+
+```json
+      {
+        "userName": "",
+        "password": ""
+      }
+```
+
+- Response example
+
+```json
+    {
+        "method": null,
+        "status": 0,
+        "message": "login successful(登录成功)!",
+        "data": {
+            "isAdmin": false,
+            "userName": ""
+        }
+     }
+```
+
+Where:
+
+- isAdmin: Linkis has only admin and non-admin users. The only privilege of an admin user is viewing the historical tasks of all users in the Linkis management console.
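+
+A minimal Java sketch of this call (JDK 11 `java.net.http`; the Gateway address and credentials are placeholders). The session is returned as a cookie, which later requests pass back in the `Cookie` header:
+
+```java
+import java.net.URI;
+import java.net.http.HttpClient;
+import java.net.http.HttpRequest;
+import java.net.http.HttpResponse;
+
+public class LinkisLoginExample {
+    public static void main(String[] args) throws Exception {
+        String body = "{\"userName\": \"hadoop\", \"password\": \"hadoop\"}"; // placeholder credentials
+
+        HttpClient client = HttpClient.newHttpClient();
+        HttpRequest request = HttpRequest.newBuilder()
+                .uri(URI.create("http://127.0.0.1:9001/api/rest_j/v1/user/login"))
+                .header("Content-Type", "application/json")
+                .POST(HttpRequest.BodyPublishers.ofString(body))
+                .build();
+
+        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
+        // Keep the session cookie and send it as the Cookie header on subsequent requests.
+        String cookie = response.headers().firstValue("Set-Cookie").orElse("");
+        System.out.println("session cookie: " + cookie);
+        System.out.println(response.body());
+    }
+}
+```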
+
+### 2). Login Out
+
+- Interface `/api/rest_j/v1/user/logout`
+
+- Submission method `POST`
+
+  No parameters
+
+- Response example
+
+```json
+    {
+        "method": "/api/rest_j/v1/user/logout",
+        "status": 0,
+        "message": "退出登录成功!"
+    }
+```
+
+### 3). Heart Beat
+
+- Interface `/api/rest_j/v1/user/heartbeat`
+
+- Submission method `POST`
+
+  No parameters
+
+- Response example
+
+```json
+    {
+         "method": "/api/rest_j/v1/user/heartbeat",
+         "status": 0,
+         "message": "维系心跳成功!"
+    }
+```
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/API_Documentations/README.md b/Linkis-Doc-master/en_US/API_Documentations/README.md
new file mode 100644
index 0000000..387b794
--- /dev/null
+++ b/Linkis-Doc-master/en_US/API_Documentations/README.md
@@ -0,0 +1,8 @@
+## 1. Document description
+Linkis 1.0 has been refactored and optimized on the basis of Linkis 0.x, and it remains compatible with the 0.x interfaces. However, to avoid compatibility problems when using version 1.0, please read the following documents carefully:
+
+1. When doing customized development with Linkis 1.0, you need to use Linkis's authentication interface. Please read the [Login API Document](Login_API.md) carefully.
+
+2. Linkis 1.0 provides a JDBC interface. If you need to access Linkis through JDBC, please read the [Task Submission and Execution JDBC API Document](JDBC_API_Document.md).
+
+3. Linkis 1.0 provides a Rest interface. If you need to develop upper-level applications on top of Linkis, please read the [Task Submission and Execution Rest API Document](Linkis_task_submission_and_execution_RestAPI_document.md).
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/EngineConn/README.md b/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/EngineConn/README.md
new file mode 100644
index 0000000..d600a5f
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/EngineConn/README.md
@@ -0,0 +1,99 @@
+EngineConn architecture design
+==================
+
+EngineConn: the engine connector, the module responsible for creating and holding the connection session with the underlying computing/storage engine; it contains the session information between the engine and the specific cluster and is the client that communicates with the concrete engine.
+
+EngineConn architecture diagram
+
+![EngineConn](../../../Images/Architecture/EngineConn/engineconn-01.png)
+
+Introduction to the second-level module:
+==============
+
+linkis-computation-engineconn interactive engine connector
+---------------------------------------------
+
+Provides the ability to execute interactive computing tasks.
+
+| Core class               | Core function                                                   |
+|----------------------|------------------------------------------------------------|
+| EngineConnTask       | Defines the interactive computing tasks submitted to EngineConn                     |
+| ComputationExecutor  | Defines the interactive Executor, with capabilities such as status query and task kill. |
+| TaskExecutionService | Provides management functions for interactive computing tasks                             |
+
+linkis-engineconn-common engine connector common module
+--------------------------------------------
+
+Defines the most basic entity classes and interfaces of the engine connector. EngineConn creates the connection session for the underlying computing/storage engine; the session contains the connection information between the engine and the specific cluster, and is the client that communicates with the concrete engine.
+
+| Core Service           | Core function                                                             |
+|-----------------------|----------------------------------------------------------------------|
+| EngineCreationContext | Contains the context information of EngineConn during startup                               |
+| EngineConn            | Contains the specific information of an EngineConn, such as its type and the connection information with the underlying computing/storage engine |
+| EngineExecution       | Provides the Executor creation logic                                               |
+| EngineConnHook        | Define the operations before and after each phase of engine startup                                       |
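+
+As a rough illustration of how these pieces fit together, the sketch below models the entities in the table as simplified Java interfaces. The shapes and method names are assumptions for illustration; the real Linkis definitions are Scala traits with richer signatures.
+
+```java
+// Illustrative, simplified shapes of the entities in the table above.
+interface EngineCreationContext {
+    String getUser();                           // the user the engine is started for
+    java.util.Map<String, String> getOptions(); // startup parameters for the engine
+}
+
+interface EngineConn {
+    String getEngineConnType();    // e.g. "spark", "hive"
+    Object getEngineConnSession(); // the session connected to the underlying engine
+}
+
+interface EngineConnHook {
+    void beforeCreateEngineConn(EngineCreationContext context);                        // before the session is created
+    void afterEngineConnStarted(EngineCreationContext context, EngineConn engineConn); // after startup
+}
+```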
+
+The core logic of linkis-engineconn-core engine connector
+------------------------------------------
+
+Defines the interfaces involved in the core logic of EngineConn.
+
+| Core class            | Core function                           |
+|-------------------|------------------------------------|
+| EngineConnManager | Provide related interfaces for creating and obtaining EngineConn |
+| ExecutorManager   | Provide related interfaces for creating and obtaining Executor   |
+| ShutdownHook      | Define the operation of the engine shutdown phase             |
+
+linkis-engineconn-launch engine connector startup module
+------------------------------------------
+
+Defines the logic of how to start EngineConn.
+
+| Core class           | core function                 |
+|------------------|--------------------------|
+| EngineConnServer | EngineConn microservice startup class |
+
+The core logic of the linkis-executor-core executor
+------------------------------------
+
+>   Defines the core classes related to the Executor. The Executor is the actual executor in real computing scenarios, responsible for executing the user code submitted to EngineConn.
+
+| Core class                 | Core function                                                   |
+|----------------------------|------------------------------------------------------------|
+| Executor | It is the actual computational logic execution unit and provides a top-level abstraction of the various capabilities of the engine. |
+| EngineConnAsyncEvent | Defines EngineConn-related asynchronous events |
+| EngineConnSyncEvent | Defines EngineConn-related synchronization events |
+| EngineConnAsyncListener | Defines EngineConn related asynchronous event listener |
+| EngineConnSyncListener | Defines EngineConn related synchronization event listener |
+| EngineConnAsyncListenerBus | Defines the listener bus for EngineConn asynchronous events |
+| EngineConnSyncListenerBus | Defines the listener bus for EngineConn synchronization events |
+| ExecutorListenerBusContext | Defines the context of the EngineConn event listener |
+| LabelService | Provide label reporting function |
+| ManagerService | Provides the function of information transfer with LinkisManager |
+
+linkis-callback-service callback logic
+-------------------------------
+
+| Core Class         | Core Function |
+|--------------------|--------------------------|
+| EngineConnCallback | Define EngineConn's callback logic |
+
+linkis-accessible-executor accessible executor
+--------------------------------------------
+
+An Executor that can be accessed. You can interact with it through RPC requests to obtain its basic metrics, such as status, load and concurrency.
+
+
+| Core Class               | Core Function                                   |
+|--------------------------|-------------------------------------------------|
+| LogCache | Provide log cache function |
+| AccessibleExecutor | An Executor that can be interacted with through RPC requests |
+| NodeHealthyInfoManager | Manage Executor's Health Information |
+| NodeHeartbeatMsgManager | Manage the heartbeat information of Executor |
+| NodeOverLoadInfoManager | Manage Executor load information |
+| Listener | Provides events related to Executor and the corresponding listener definition |
+| EngineConnTimedLock | Define Executor level lock |
+| AccessibleService | Provides the start-stop and status acquisition functions of Executor |
+| ExecutorHeartbeatService | Provides heartbeat related functions of Executor |
+| LockService | Provide lock management function |
+| LogService | Provide log management functions |
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/EngineConnManager/Images/ECM-01.png b/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/EngineConnManager/Images/ECM-01.png
new file mode 100644
index 0000000..cc83842
Binary files /dev/null and b/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/EngineConnManager/Images/ECM-01.png differ
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/EngineConnManager/Images/ECM-02.png b/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/EngineConnManager/Images/ECM-02.png
new file mode 100644
index 0000000..303f37a
Binary files /dev/null and b/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/EngineConnManager/Images/ECM-02.png differ
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/EngineConnManager/README.md b/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/EngineConnManager/README.md
new file mode 100644
index 0000000..45ded41
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/EngineConnManager/README.md
@@ -0,0 +1,45 @@
+EngineConnManager architecture design
+-------------------------
+
+EngineConnManager (ECM): EngineConn's manager, provides engine lifecycle management, and reports load information and its own health status to RM.
+###  ECM architecture
+
+![](Images/ECM-01.png)
+
+###  Introduction to the second-level module
+
+**Linkis-engineconn-linux-launch**
+
+The engine launcher, whose core class is LinuxProcessEngineConnLauch, provides the command-line instructions for launching an engine process.
+
+**Linkis-engineconn-manager-core**
+
+The core module of ECM, which includes the top-level interfaces for the ECM health report and EngineConn health report functions, defines the relevant metrics of the ECM service and the core methods for constructing an EngineConn process.
+
+| Core top-level interface/class     | Core function                                                            |
+|------------------------------------|--------------------------------------------------------------------------|
+| EngineConn                         | Defines the properties of EngineConn, including methods and parameters   |
+| EngineConnLaunch                   | Define the start method and stop method of EngineConn                    |
+| ECMEvent                           | ECM related events are defined                                           |
+| ECMEventListener                   | Defined ECM related event listeners                                      |
+| ECMEventListenerBus                | Defines the listener bus of ECM                                          |
+| ECMMetrics                         | Defines the indicator information of ECM                                 |
+| ECMHealthReport                    | Defines the health report information of ECM                             |
+| NodeHealthReport                   | Defines the health report information of the node                        |
+
+**Linkis-engineconn-manager-server**
+
+The server side of ECM defines top-level interfaces and implementation classes such as the ECM health information processing service, ECM metrics processing service, ECM registration service, EngineConn start service, EngineConn stop service, and EngineConn callback service. They are mainly used by the ECM for life-cycle management of itself and its EngineConns, health information reporting, heartbeat sending, and so on.
+Core Service and Features module are as follows:
+
+| Core service                    | Core function                                        |
+|---------------------------------|-------------------------------------------------|
+| EngineConnLaunchService         | Contains the core methods for generating an EngineConn and starting its process          |
+| BmlResourceLocallizationService | Used to download engine-related resources from BML and generate the localized file directory |
+| ECMHealthService                | Regularly reports its own health heartbeat to AM                      |
+| ECMMetricsService               | Regularly reports its own metrics to AM                      |
+| EngineConnKillService           | Provides functions for stopping an engine                          |
+| EngineConnListService           | Provides engine caching and management functions                    |
+| EngineConnCallBackService       | Provides the engine callback function                              |
+
+
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/EngineConnPlugin/README.md b/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/EngineConnPlugin/README.md
new file mode 100644
index 0000000..dc82f80
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/EngineConnPlugin/README.md
@@ -0,0 +1,68 @@
+EngineConnPlugin (ECP) architecture design
+===============================
+
+The engine connector plug-in is an implementation that can dynamically load engine connectors and reduce the occurrence of version conflicts. It has the characteristics of convenient extension, fast refresh, and selective loading. To allow developers to freely extend Linkis's engines and dynamically load engine dependencies without version conflicts, EngineConnPlugin was designed and developed, allowing new engines to be introduced into the execution life cycle of [...]
+The plug-in interface decomposes the definition of an engine into parameter initialization, engine resource allocation, engine connection construction, and default engine label setting.
+
+ECP architecture diagram
+
+![](../../../Images/Architecture/linkis-engineConnPlugin-01.png)
+
+Introduction to the second-level module:
+==============
+
+EngineConn-Plugin-Server
+------------------------
+
+The engine connector plug-in service is an entrance service that provides plug-in registration, plug-in management, and plug-in resource construction to the outside. A successfully registered and loaded engine plug-in contains the logic for resource allocation and startup parameter configuration. During engine initialization, other services such as EngineConnManager call the logic of the corresponding plug-in in the Plugin Server through RPC requests.
+
+| Core Class                           | Core Function                              |
+|----------------------------------|---------------------------------------|
+| EngineConnLaunchService          | Responsible for building the engine connector launch request            |
+| EngineConnResourceFactoryService | Responsible for generating engine resources                      |
+| EngineConnResourceService        | Responsible for downloading the resource files used by the engine connector from BML |
+
+
+EngineConn-Plugin-Loader Engine Connector Plugin Loader
+---------------------------------------
+
+The engine connector plug-in loader dynamically loads engine connector plug-ins according to request parameters and caches them. The loading process mainly consists of two parts: 1) plug-in resources, such as the main program package and its dependency packages, are loaded locally (not open); 2) plug-in resources are dynamically loaded from the local environment into the service process, for example, loaded into the JVM virtual [...]
+| Core Class                          | Core Function                                     |
+|---------------------------------|----------------------------------------------|
+| EngineConnPluginsResourceLoader | Load engine connector plug-in resources                       |
+| EngineConnPluginsLoader         | Load the engine connector plug-in instance, or load an existing one from the cache |
+| EngineConnPluginClassLoader     | Dynamically instantiate engine connector instance from jar              |
+
+EngineConn-Plugin-Cache engine plug-in cache module
+----------------------------------------
+
+The engine connector plug-in cache is a cache service dedicated to caching loaded engine connectors, supporting read, update, and removal. A plug-in that has been loaded into the service process is cached together with its class loader to prevent repeated loading from hurting efficiency; meanwhile, the cache module periodically notifies the loader to update the plug-in resources, and if changes are found, the plug-in is reloaded and the cache refreshed automatically.
+
+| Core Class                      | Core Function                     |
+|-----------------------------|------------------------------|
+| EngineConnPluginCache       | Cache loaded engine connector instance |
+| RefreshPluginCacheContainer | Engine connector that refreshes the cache regularly     |
+
+EngineConn-Plugin-Core: Engine connector plug-in core module
+---------------------------------------------
+
+The engine connector plug-in core module is the core module of the engine connector plug-in. Contains the implementation of the basic functions of the engine plug-in, such as the construction of the engine connector start command, the construction of the engine resource factory and the implementation of the core interface of the engine connector plug-in.
+| Core Class                  | Core Function                                                 |
+|-------------------------|----------------------------------------------------------|
+| EngineConnLaunchBuilder | Build Engine Connector Launch Request                                   |
+| EngineConnFactory       | Create Engine Connector                                           |
+| EngineConnPlugin        | The engine connector plug-in implements the interface, including resources, commands, and instance construction methods. |
+| EngineResourceFactory   | Engine Resource Creation Factory                                       |
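+
+To make the table concrete, here is a simplified Java sketch of the plug-in contract it describes: resources, commands, instance construction and default labels. The signatures are illustrative assumptions, with marker interfaces standing in for the real types:
+
+```java
+// Simplified sketch of the plug-in contract described above; illustrative only.
+interface EngineResourceFactory {}
+interface EngineConnLaunchBuilder {}
+interface EngineConnFactory {}
+
+interface EngineConnPlugin {
+    void init(java.util.Map<String, Object> params);      // parameter initialization
+    EngineResourceFactory getEngineResourceFactory();     // engine resource allocation
+    EngineConnLaunchBuilder getEngineConnLaunchBuilder(); // launch command construction
+    EngineConnFactory getEngineConnFactory();             // engine connection construction
+    java.util.List<String> getDefaultLabels();            // default engine labels (simplified)
+}
+```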
+
+EngineConn-Plugins: Engine connection plugin collection
+-----------------------------------
+
+The engine connector plug-in collection holds the default library of engine connector plug-ins implemented against the plug-in interface defined above. It provides default engine connector implementations such as jdbc, spark, python and shell. Users can refer to these implemented cases to build more engine connectors for their own needs.
+| Core Class              | Core Function         |
+|---------------------|------------------|
+| engineplugin-jdbc   | jdbc engine connector   |
+| engineplugin-shell  | Shell engine connector  |
+| engineplugin-spark  | spark engine connector  |
+| engineplugin-python | python engine connector |
+
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/LinkisManager/AppManager.md b/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/LinkisManager/AppManager.md
new file mode 100644
index 0000000..dd69274
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/LinkisManager/AppManager.md
@@ -0,0 +1,33 @@
+## 1. Background
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;The Entrance module of the old version of Linkis carried too many responsibilities, its ability to manage engines was weak, and it was hard to extend afterwards. The AppManager module was therefore newly extracted to take over the following responsibilities:  
+1. Add the AM module and move the engine management functions previously done by Entrance into it.
+2. AM needs to support operating engines, including adding, reusing, recycling, preheating, switching and other functions.
+3. Connect to the Manager module to provide engine management functions, including engine status maintenance, engine list maintenance, engine information, etc.
+4. AM needs to manage EM services, completing EM registration and forwarding resource registrations to RM.
+5. AM needs to connect to the Label module: when EMs/engines are added or deleted, the label manager must be notified to update the labels.
+6. AM also needs to use the label module for label analysis, and needs to obtain a list of serverInstances with scores through a series of labels (how to distinguish between EM and Engine? Their labels are completely different).
+7. AM needs to provide basic external interfaces, including add/delete/modify of engines and engine managers, metric queries, etc.  
+## Architecture diagram
+![AppManager03](./../../../../zh_CN/Images/Architecture/AppManager-03.png)  
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;As shown in the figure above, AM is the AppManager module within LinkisMaster and provides its services.  
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;New engine application flow chart:  
+![AppManager02](./../../../../zh_CN/Images/Architecture/AppManager-02.png)  
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;As the engine life-cycle flow chart above shows, Entrance no longer manages engines; engine startup and management are controlled by AM.  
+## Architecture description
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;AppManager mainly includes engine service and EM service:
+Engine service includes all operations related to EngineConn, such as engine creation, engine reuse, engine switching, engine recycling, engine stopping, engine destruction, etc.
+EM service is responsible for the information management of all EngineConnManagers and can manage ECM services online, including modifying labels, suspending ECM services, obtaining ECM instance information, obtaining the engines an ECM is running, and killing ECM operations. It can also query all EngineNodes by EM Node information, supports searching by user, and saves EM Node load information, node health information, resource usage information, etc.
+The new EngineConnManager and EngineConn both support tag management, and the types of engines have also added offline, streaming, and interactive support.  
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Engine creation: the part of the LinkisManager service specifically responsible for creating new engines. The engine startup module is fully responsible for the creation of a new engine, including obtaining the ECM label collection, requesting resources, obtaining the engine startup command, notifying the ECM to create the new engine, updating the engine list, etc.
+CreateEngineRequest->RPC/Rest -> MasterEventHandler ->CreateEngineService ->
+->LabelContext/EnginePlugin/RMResourceService->(RecycleEngineService)EngineNodeManager->EMNodeManager->sender.ask(EngineLaunchRequest)->EngineManager service->EngineNodeManager->EngineLocker->Engine->EngineNodeManager->EngineFactory=>EngineService=>ServerInstance
+Creating an engine is the part that interacts with RM: the EnginePlugin returns the specific resource type through labels, and then AM sends the resource request to RM.
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Engine reuse: to reduce the time and resources consumed by engine startup, the principle of reuse dictates that existing engines be used preferentially. Reuse generally refers to reusing engines that a user has already created. The engine reuse module is responsible for providing the collection of reusable engines; it elects and locks an engine and starts using it, or returns that no engine can be reused.
+ReuseEngineRequest->RPC/Rest -> MasterEventHandler ->ReuseEngineService ->
+->LabelContext->EngineNodeManager->EngineSelector->EngineLocker->Engine->EngineNodeManager->EngineReuser->EngineService=>ServerInstance
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Engine switching: mainly refers to switching the labels of existing engines. For example, an engine created by Creator1 can be changed to Creator2 by engine switching; the engine can then receive tasks carrying the label Creator2.
+SwitchEngineRequest->RPC/Rest -> MasterEventHandler ->SwitchEngineService ->LabelContext/EnginePlugin/RMResourceService->EngineNodeManager->EngineLocker->Engine->EngineNodeManager->EngineReuser->EngineService=>ServerInstance.  
+Engine manager: responsible for managing the basic information and metadata of all engines.
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/LinkisManager/LabelManager.md b/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/LinkisManager/LabelManager.md
new file mode 100644
index 0000000..d8fa39c
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/LinkisManager/LabelManager.md
@@ -0,0 +1,38 @@
+## LabelManager architecture design
+
+#### Brief description
+LabelManager is a functional module in Linkis that provides label services to upper-level applications. It uses label technology to manage cluster resource allocation, service node election, user permission matching, and gateway routing and forwarding. It includes generalized parsing and processing tools that support various custom labels, and a universal label-matching scorer.
+### Overall architecture schematic
+
+![Overall architecture diagram](../../../Images/Architecture/LabelManager/label_manager_global.png)  
+
+#### Architecture description
+- LabelBuilder: Responsible for label parsing. It can parse an input label type, keyword or value to obtain a specific label entity. There is a default generalized implementation class, and custom extensions are possible.
+- LabelEntities: Refers to a collection of label entities, including cluster labels, configuration labels, engine labels, node labels, routing labels, search labels, etc.
+- NodeLabelService: The associated service interface class of instance/node and label, which defines the interface method of adding, deleting, modifying and checking the relationship between the two and matching the instance/node according to the label.
+- UserLabelService: Declare the associated operation between the user and the label.
+- ResourceLabelService: Declare the associated operations of cluster resources and labels, involving resource management of combined labels, cleaning or setting the resource value associated with the label.
+- NodeLabelScorer: Node label scorer, corresponding to the implementation of different label matching algorithms, using scores to indicate node label matching.
+
+### 1. LabelBuilder parsing process
+Take the generic label parsing class GenericLabelBuilder as an example to clarify the overall process.
+Label parsing/construction includes the following steps:
+1. Based on the input, select the appropriate label class to parse.
+2. Based on the definition of the label class, recursively analyze its generic structure to obtain the specific label value type.
+3. Convert the input value object to the label value type, using implicit conversion or a forward/backward parsing framework.
+4. Based on the results of 1-3, instantiate the label and perform post-processing depending on the label class.
+
+### 2. NodeLabelScorer scoring process
+To select a suitable engine node based on the label list attached to a Linkis user's execution request, the list of matching engines must be ranked; this is quantified as the label matching degree of each engine node, that is, its score.
+In the label definitions, every label has a feature value, namely CORE, SUITABLE, PRIORITIZED or OPTIONAL, and each feature value has a boost value, which acts as a weight and an incentive value.
+Some features, such as CORE and SUITABLE, must be unique features: strong filtering is applied during matching, and a node can only be associated with one CORE/SUITABLE label.
+According to the relationship between existing tags, nodes, and request attached tags, the following schematic diagram can be drawn:
+![Label scoring](../../../Images/Architecture/LabelManager/label_manager_scorer.png)  
+
+The built-in default scoring logic generally includes the following steps (a numeric sketch follows this list):
+1. The input is two relationship lists, `Label -> Node` and `Node -> Label`, where each Node in the `Node -> Label` relationship must carry all the CORE and SUITABLE feature labels; these nodes are called candidate nodes.
+2. The first step traverses the `Node -> Label` relationship list and, for each node, the labels associated with it, scoring each label. If a label was not attached to the request, its score is 0.
+Otherwise, its score is (base score / number of times the label's feature value occurs among the request labels) * the boost value of that feature value, where the base score defaults to 1 and the initial score of a node is the sum of its associated label scores. Since a CORE/SUITABLE label must be unique, its occurrence count is always 1.
+3. After the initial node scores are obtained, the second step traverses the `Label -> Node` relationship. Since the first step ignores the effect of labels not attached to the request, the proportion of such irrelevant labels does affect the score. These labels are unified under the UNKNOWN feature, which also has a corresponding boost value;
+the higher the proportion of candidate nodes among the total nodes associated with an irrelevant label, the more significant the impact on the score, which further accumulates onto the initial score obtained in the first step.
+4. Normalize the candidate nodes' scores by their standard deviation and sort them.
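+
+As a purely numeric illustration of step 2, the sketch below computes a node's initial score under assumed boost values. All names and numbers here are assumptions for illustration; the real feature boost values live in the Linkis label definitions.
+
+```java
+import java.util.List;
+import java.util.Map;
+
+public class NodeScoreSketch {
+    // Assumed boost (incentive) values per feature value; illustrative only.
+    static final Map<String, Double> BOOST =
+            Map.of("CORE", 2.0, "SUITABLE", 1.5, "PRIORITIZED", 1.0, "OPTIONAL", 0.5);
+
+    record Label(String name, String feature) {}
+
+    // Step 2 above: for each node label attached to the request, add
+    // (base score 1 / occurrences of the feature among request labels) * boost(feature).
+    static double initialScore(List<Label> nodeLabels, List<Label> requestLabels) {
+        double score = 0.0;
+        for (Label label : nodeLabels) {
+            if (!requestLabels.contains(label)) continue; // unrequested label scores 0 here
+            long featureCount = requestLabels.stream()
+                    .filter(l -> l.feature().equals(label.feature())).count();
+            score += (1.0 / featureCount) * BOOST.get(label.feature());
+        }
+        return score;
+    }
+
+    public static void main(String[] args) {
+        List<Label> request = List.of(
+                new Label("engineType:spark-2.4.3", "CORE"),
+                new Label("userCreator:hadoop-IDE", "SUITABLE"),
+                new Label("route:cluster-a", "OPTIONAL"),
+                new Label("route:cluster-b", "OPTIONAL"));
+        List<Label> node = List.of(
+                new Label("engineType:spark-2.4.3", "CORE"),
+                new Label("route:cluster-a", "OPTIONAL"));
+        // CORE appears once among request labels -> 1/1 * 2.0 = 2.0
+        // OPTIONAL appears twice -> 1/2 * 0.5 = 0.25; initial score = 2.25
+        System.out.println(initialScore(node, request));
+    }
+}
+```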
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/LinkisManager/README.md b/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/LinkisManager/README.md
new file mode 100644
index 0000000..d13e6b1
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/LinkisManager/README.md
@@ -0,0 +1,41 @@
+LinkisManager Architecture Design
+====================
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;As an independent microservice of Linkis, LinkisManager provides AppManager (application management), ResourceManager (resource management), and LabelManager (label management) capabilities. It can support multi-active deployment and has the characteristics of high availability and easy expansion.  
+## 1. Architecture Diagram
+![Architecture Diagram](./../../../../zh_CN/Images/Architecture/LinkisManager/LinkisManager-01.png)  
+### Noun explanation
+- EngineConnManager (ECM): Engine Manager, used to start and manage engines.
+- EngineConn (EC): Engine connector, used to connect the underlying computing engine.
+- ResourceManager (RM): Resource Manager, used to manage node resources.
+## 2. Introduction to the second-level module
+### 2.1. Application management module linkis-application-manager
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;AppManager is used for unified scheduling and management of engines:  
+| Core Interface/Class | Main Function |
+|------------|--------|
+|EMInfoService | Defines EngineConnManager information query and modification functions |
+|EMRegisterService| Defines EngineConnManager registration function |
+|EMEngineService | Defines EngineConnManager's creation, query, and closing functions of EngineConn |
+|EngineAskEngineService | Defines the function of querying EngineConn |
+|EngineConnStatusCallbackService | Defines the function of processing EngineConn status callbacks |
+|EngineCreateService | Defines the function of creating EngineConn |
+|EngineInfoService | Defines EngineConn query function |
+|EngineKillService | Defines the stop function of EngineConn |
+|EngineRecycleService | Defines the recycling function of EngineConn |
+|EngineReuseService | Defines the reuse function of EngineConn |
+|EngineStopService | Defines the self-destruct function of EngineConn |
+|EngineSwitchService | Defines the engine switching function |
+|AMHeartbeatService | Provides EngineConnManager and EngineConn node heartbeat processing functions |
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;The process of applying for an engine through AppManager is as follows:  
+![AppManager](./../../../../zh_CN/Images/Architecture/LinkisManager/AppManager-01.png)  
+### 2.2 Label management module linkis-label-manager
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;LabelManager provides label management and analysis capabilities.  
+| Core Interface/Class | Main Function |
+|------------|--------|
+|LabelService | Provides the function of adding, deleting, modifying and checking labels |
+|ResourceLabelService | Provides resource label management functions |
+|UserLabelService | Provides user label management functions |
+
+The LabelManager architecture diagram is as follows:  
+![ResourceManager](./../../../../zh_CN/Images/Architecture/LinkisManager/ResourceManager-01.png)  
+### 2.3 Monitoring module linkis-manager-monitor
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Monitor provides the function of node status monitoring.
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/LinkisManager/ResourceManager.md b/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/LinkisManager/ResourceManager.md
new file mode 100644
index 0000000..cf1b2c9
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/LinkisManager/ResourceManager.md
@@ -0,0 +1,132 @@
+## 1. Background
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;ResourceManager (RM for short) is the computing resource management module of Linkis. All EngineConn (EC for short), EngineConnManager (ECM for short), and even external resources including Yarn are managed by RM. RM can manage resources based on users, ECM, or other granularities defined by complex tags.  
+## 2. The role of RM in Linkis
+![01](./../../../../zh_CN/Images/Architecture/rm-01.png)  
+![02](./../../../../zh_CN/Images/Architecture/rm-02.png)  
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;As a part of LinkisManager, RM mainly works as follows: it maintains the available resource information reported by ECMs, handles the resource applications submitted by ECMs, records the actual resource usage reported by ECs in real time during their life cycle after a successful application, and provides interfaces for querying current resource usage.  
+In Linkis, other services that interact with RM mainly include:  
+1. EngineConnManager (ECM for short): the microservice that processes requests to start engine connectors. As a resource provider, ECM is responsible for registering and unregistering resources with RM. At the same time, as the manager of engines, ECM applies to RM for resources on behalf of new engine connectors that are about to start. For each ECM instance, there is a corresponding resource record in the RM, which contains information such as the total resources a [...]
+![03](./../../../../zh_CN/Images/Architecture/rm-03.png)  
+2. The engine connector, referred to as EC, is the actual execution unit of user operations. At the same time, as the actual user of the resource, the EC is responsible for reporting the actual use of the resource to the RM. Each EC has a corresponding resource record in the RM: during the startup process, it is reflected as a locked resource; during the running process, it is reflected as a used resource; after being terminated, the resource record is subsequently deleted.  
+![04](./../../../../zh_CN/Images/Architecture/rm-04.png)  
+## 3. Resource type and format
+![05](./../../../../zh_CN/Images/Architecture/rm-05.png)  
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;As shown in the figure above, all resource classes implement a top-level Resource interface, which defines the calculation and comparison methods that every resource class must support, and overloads the corresponding mathematical operators so that resources can be calculated and compared directly, like numbers (a sketch follows the table below).  
+| Operator | Correspondence Method | Operator | Correspondence Method |
+|--------|-------------|--------|-------------|
+| \+ | add | \> | moreThan |
+| \- | minus | \< | lessThan |
+| \* | multiply | = | equals |
+| / | divide | \>= | notLessThan |
+| \<= | notMoreThan | | |  
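+
+As a minimal sketch of this pattern, the class below implements a memory-only resource with a few of the operator methods from the table. It is illustrative only; the real Linkis Resource classes are richer and cover all the resource types listed below.
+
+```java
+// Memory-only illustration of the operator-method pattern from the table above.
+class MemoryResource {
+    final long bytes;
+
+    MemoryResource(long bytes) { this.bytes = bytes; }
+
+    MemoryResource add(MemoryResource other)   { return new MemoryResource(bytes + other.bytes); }
+    MemoryResource minus(MemoryResource other) { return new MemoryResource(bytes - other.bytes); }
+    boolean moreThan(MemoryResource other)     { return bytes > other.bytes; }
+    boolean notLessThan(MemoryResource other)  { return bytes >= other.bytes; }
+}
+```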
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;The currently supported resource types are shown in the following table. All resources have corresponding json serialization and deserialization methods, which can be stored in json format and transmitted across the network:  
+
+| Resource Type | Description |
+|-----------------------|--------------------------------------------------------|
+| MemoryResource | Memory Resource |
+| CPUResource | CPU Resource |
+| LoadResource | Both memory and CPU resources |
+| YarnResource | Yarn queue resources (queue, queue memory, queue CPU, number of queue instances) |
+| LoadInstanceResource | Server resources (memory, CPU, number of instances) |
+| DriverAndYarnResource | Driver and executor resources (with server resources and Yarn queue resources at the same time) |
+| SpecialResource | Other custom resources |  
+
+## 4. Available resource management
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;The available resources in the RM mainly come from two sources: the available resources reported by the ECM, and the resource limits configured according to tags in the Configuration module.  
+**ECM resource report**:  
+1. When the ECM is started, it will broadcast the ECM registration message. After receiving the message, the RM will register the resource according to the content contained in the message. The resource-related content includes:
+
+     1. Total resources: the total number of resources that the ECM can provide.
+
+     2. Protected resources: when the remaining resources fall below this amount, no further resources may be allocated.
+
+     3. Resource type: such as LoadResource, DriverAndYarnResource and other type names.
+
+     4. Instance information: machine name plus port name.
+
+2. After RM receives the resource registration request, it adds a record to the resource table whose content is consistent with the interface parameters, finds the label representing the ECM through the instance information, and adds a record to the resource-label association table.
+
+3. When the ECM is closed, it broadcasts a message that it is closing. After receiving the message, RM takes the ECM offline according to the instance information in the message, that is, it deletes the resource and association records corresponding to the ECM instance label.  
+
+**Tag-based resource configuration in the Configuration module**:  
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;In the Configuration module, users can configure the number of resources based on different tag combinations, such as limiting the maximum available resources of the User/Creator/EngineType combination.
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;The RM queries the Configuration module for resource information through the RPC message, using the combined tag as the query condition, and converts it into a Resource object to participate in subsequent comparison and recording.  
+
+## 5. Resource Usage Management  
+**Receive user's resource application:**  
+1. When LinkisManager receives a request to start EngineConn, it will call RM's resource application interface to apply for resources. The resource application interface accepts an optional time parameter. When the waiting time for applying for a resource exceeds the limit of the time parameter, the resource application will be automatically processed as a failure.  
+**Judging whether there are enough resources:**  
+That is, determine whether the remaining available resources are no less than the requested resources: if they are greater than or equal to the request, the resources are sufficient; otherwise, they are insufficient.
+
+1. RM preprocesses the label information attached to the resource application, filtering, combining and converting the original labels according to the rules (such as combining the User/Creator label and EngineType label), which makes the subsequent resource judgment more granular and flexible.
+
+2. Lock each converted label one by one, so that their corresponding resource records remain unchanged during the processing of resource applications.
+
+3. According to each label:
+
+    1. Query the corresponding resource record from the database through the Persistence module. If the record contains the remaining available resources, it is directly used for comparison.
+
+    2. If there is no direct record of remaining available resources, it will be calculated by the formula [Remaining Available Resources = Maximum Available Resources - Used Resources - Locked Resources - Protected Resources] (see the sketch after this list).
+
+    3. If there is no maximum available resource record, request the Configuration module to check whether resource information has been configured; if so, calculate with the formula above; if not, skip the resource judgment for this tag.
+
+    4. If there is no resource record, skip the resource judgment for this tag.
+
+4. As long as one tag is judged to be insufficient in resources, the resource application will fail, and each tag will be unlocked one by one.
+
+5. Only when all tags are judged to have sufficient resources can the resource application pass and proceed to the next step.  
+
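+The per-label judgment and the remaining-resource formula above can be sketched as follows (hypothetical Scala, reusing the `Resource` operators from the earlier sketch; the record fields are assumptions, not the actual RM schema):
+
+```scala
+// One label's resource record, as described in steps 3.1-3.4 above.
+case class LabelResourceRecord(
+    maxResource: Option[Resource],      // configured maximum, if any
+    usedResource: Resource,
+    lockedResource: Resource,
+    protectedResource: Resource,
+    leftResource: Option[Resource]      // directly recorded remainder, if any
+)
+
+def isSufficient(record: LabelResourceRecord, requested: Resource): Boolean = {
+  // Prefer the directly recorded remainder; otherwise apply the formula:
+  // remaining = max - used - locked - protected.
+  val remaining = record.leftResource.orElse(
+    record.maxResource.map(max =>
+      max - record.usedResource - record.lockedResource - record.protectedResource))
+  // No record at all: skip the judgment for this label (treated as sufficient).
+  remaining.forall(left => !(requested > left))
+}
+```
+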
+**Lock the applied resources:**
+
+1. Record the amount of requested resources by generating a new record in the resource table, and associate it with each tag.
+
+2. If a tag has a record of remaining available resources, deduct the corresponding amount from it.
+
+3. Generate a timed task that checks, after a certain period, whether the locked resources have actually been used; if they are still unused when the timeout expires, they are forcibly recycled (see the sketch after this list).
+
+4. Unlock each tag.
+
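+The lock-timeout check in step 3 might look like this (illustrative Scala; `isResourceUsed` and `forceRecycle` are placeholders for the real persistence-layer calls):
+
+```scala
+import java.util.concurrent.{Executors, TimeUnit}
+
+object LockRecycleSketch {
+  private val scheduler = Executors.newSingleThreadScheduledExecutor()
+
+  // Placeholders: the real logic would query and update the resource table.
+  def isResourceUsed(resourceId: String): Boolean = false
+  def forceRecycle(resourceId: String): Unit =
+    println(s"recycling unused locked resource $resourceId")
+
+  // After locking, schedule a check: if the locked resource was never
+  // converted to "used" before the timeout, recycle it forcibly.
+  def lockWithTimeout(resourceId: String, timeoutMs: Long): Unit =
+    scheduler.schedule(new Runnable {
+      override def run(): Unit =
+        if (!isResourceUsed(resourceId)) forceRecycle(resourceId)
+    }, timeoutMs, TimeUnit.MILLISECONDS)
+}
+```
+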
+**Report the actual resource usage:**
+
+1. After the EngineConn starts, it broadcasts a resource usage message. On receiving the message, the RM checks whether a locked resource record exists for the label corresponding to this EngineConn; if not, it reports an error.
+
+2. If a locked resource record exists, lock all labels associated with the EngineConn.
+
+3. For each tag, convert the locked amount in the corresponding resource record into used resources.
+
+4. Unlock all labels.
+
+**Release actually used resources:**
+
+1. After the EngineConn's life cycle ends, it broadcasts a resource recycling message. On receiving the message, the RM checks whether a used resource record exists for the label corresponding to this EngineConn.
+
+2. If so, lock all labels associated with the EngineConn.
+
+3. Subtract the used amount from the corresponding resource record of each label.
+
+4. If a tag has a record of remaining available resources, add the corresponding amount back to it.
+
+5. Unlock each tag.
+
+## 6. External resource management
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;In the RM, in order to classify resources more flexibly and extensibly, support multi-cluster resource management and control, and make it easier to access new external resources, the following design considerations have been made:
+
+1. Unified management of resources through tags. After the resource is registered, it is associated with the tag, so that the attributes of the resource can be expanded infinitely. At the same time, resource applications are also tagged to achieve flexible matching.
+
+2. Abstract the cluster into one or more tags, and maintain the environmental information corresponding to each cluster tag in the external resource management module to achieve dynamic docking.
+
+3. Abstract a general external resource management module. If you need to access new external resource types, you can convert different types of resource information into Resource entities in the RM as long as you implement a fixed interface to achieve unified management.  
+![06](./../../../../zh_CN/Images/Architecture/rm-06.png)  
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Other modules of RM obtain external resource information through the interface provided by ExternalResourceService.
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;The ExternalResourceService obtains information about external resources through resource types and tags:
+
+1. The type, label, configuration and other attributes of all external resources (such as cluster name, Yarn web
+     url, Hadoop version and other information) are maintained in the linkis\_external\_resource\_provider table.
+
+2. For each resource type, there is an implementation of the ExternalResourceProviderParser interface, which parses the attributes of external resources, converts the information that can be matched to a Label into the corresponding Label, and converts the information that can be used as a parameter to request the resource interface into params. Finally, an ExternalResourceProvider instance that can be used as a basis for querying external resource information is constructed.
+
+3. According to the resource type and label information in the parameters of the ExternalResourceService method, find the matching ExternalResourceProvider, generate an ExternalResourceRequest based on the information in it, and formally call the API provided by the external resource to initiate a resource information request (a sketch of these interfaces follows).
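+
+A hedged Scala sketch of this abstraction (names follow the text above, but all signatures are assumptions rather than the real Linkis API):
+
+```scala
+trait Resource // see the resource types defined earlier in this document
+
+// Environmental information for one external resource source, e.g. a Yarn
+// cluster, as maintained in linkis_external_resource_provider.
+trait ExternalResourceProvider {
+  def getResourceType: String          // e.g. "Yarn"
+  def getLabels: Map[String, String]   // e.g. a cluster label
+  def getConfig: Map[String, String]   // e.g. Yarn web url, Hadoop version
+}
+
+// A request built from a matched provider, ready to call the external API.
+case class ExternalResourceRequest(provider: ExternalResourceProvider,
+                                   params: Map[String, String])
+
+// The unified entry that other RM modules use to query external resources.
+trait ExternalResourceService {
+  def getResource(resourceType: String, labels: Map[String, String]): Resource
+}
+```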
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/README.md b/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/README.md
new file mode 100644
index 0000000..343b7b2
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/README.md
@@ -0,0 +1,40 @@
+## Background
+**The architecture of Linkis0.X mainly has the following problems**  
+1. The boundary between the core processing flow and the hierarchical module is blurred:  
+- Entrance and EngineManager function boundaries are blurred.
+
+- The main process of task submission and execution is not clear enough.
+
+- Extending a new engine is troublesome, requiring code to be implemented across multiple modules.
+
+- Only computing request scenarios are supported; storage request scenarios and the resident service mode (Cluster) are difficult to support.  
+2. Demands for richer and more powerful computing governance functions:  
+- Insufficient support for computing task management strategies.
+
+- The labeling capability is not strong enough, which restricts computing strategies and resource management.  
+
+The new architecture of Linkis1.0 computing governance service can solve these problems well.  
+## Architecture Diagram  
+![linkis Computation Gov](./../../../zh_CN/Images/Architecture/linkis-computation-gov-01.png)  
+**Operation process optimization:** Linkis1.0 will optimize the overall execution process of the Job across three stages, from submission —\> preparation —\>
+execution, to fully upgrade Linkis's Job execution architecture, as shown in the following figure:  
+![](./../../../zh_CN/Images/Architecture/linkis-computation-gov-02.png)  
+## Architecture Description
+### 1. Entrance
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Entrance, as the submission portal for computing tasks, provides task reception, scheduling and job information forwarding capabilities. It is a native capability split from Linkis0.X's Entrance.  
+[Entrance Architecture Design](./Entrance/Entrance.md)  
+### 2. Orchestrator
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Orchestrator, as the entrance to the preparation phase, inherits the capabilities of parsing Jobs, applying for Engines, and submitting execution from the Entrance of Linkis0.X; at the same time, Orchestrator will provide powerful orchestration and computing strategy capabilities to meet application scenarios such as multi-active, active-standby, transactions, replay, rate limiting, and heterogeneous or mixed computing.  
+[Enter Orchestrator Architecture Design](../Orchestrator/README.md)  
+### 3. LinkisManager
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;As the management brain of Linkis, LinkisManager is mainly composed of AppManager, ResourceManager, LabelManager and EngineConnPlugin.  
+1. ResourceManager not only has Linkis0.X's resource management capabilities for Yarn and Linkis EngineManager, but also provides tag-based multi-level resource allocation and recycling capabilities, allowing ResourceManager to have full resource management capabilities across clusters and across computing resource types;
+2. AppManager will coordinate and manage all EngineConnManager and EngineConn instances. The life cycle of EngineConn application, reuse, creation, switching, and destruction will be handed over to AppManager for management; and LabelManager will provide EngineConn and EngineConnManager routing and management capabilities across IDCs and clusters based on multi-level combined tags;
+3. EngineConnPlugin is mainly used to reduce the access cost of new computing storage, so that users can access a new computing storage engine only by implementing one class.  
+ [Enter LinkisManager Architecture Design](./LinkisManager/README.md)  
+### 4. Engine Manager
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;EngineConnManager (ECM) is a simplified and upgraded version of the Linkis0.X EngineManager. The ECM in Linkis1.0 removes the engine application capability, and the whole microservice is completely stateless; it focuses on supporting the startup and destruction of all kinds of EngineConn.  
+[Enter EngineConnManager Architecture Design](./EngineConnManager/README.md)  
+ ### 5. EngineConn
+ &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;EngineConn is an optimized and upgraded version of the Linkis0.X Engine. It provides two modules, EngineConn and Executor. EngineConn is used to connect the underlying computing storage engine and provides a session that connects to it; based on this session, Executor provides full-stack computing support for interactive computing, streaming computing, offline computing, and data storage.  
+ [Enter EngineConn Architecture Design](./EngineConn/README.md)
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/DifferenceBetween1.0&0.x.md b/Linkis-Doc-master/en_US/Architecture_Documents/DifferenceBetween1.0&0.x.md
new file mode 100644
index 0000000..0965b0c
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Architecture_Documents/DifferenceBetween1.0&0.x.md
@@ -0,0 +1,50 @@
+## 1. Brief Description
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;First of all, the Entrance and EngineConnManager (formerly EngineManager) services under the Linkis1.0 architecture are completely engine-agnostic. That is, under the Linkis1.0 architecture, each engine no longer needs its own Entrance and EngineConnManager to be implemented and started; each Entrance and EngineConnManager of Linkis1.0 can be shared by all engines.  
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Secondly, Linkis1.0 added the Linkis-Manager service to provide external AppManager (application management), ResourceManager (resource management, the original ResourceManager service) and LabelManager (label management) capabilities.  
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Then, in order to reduce the difficulty of implementing and deploying a new engine, Linkis 1.0 re-architects a module called EngineConnPlugin. Each new engine only needs to implement the EngineConnPlugin interface. Linkis EngineConnPluginServer supports dynamically loading an EngineConnPlugin (new engine) in the form of a plug-in. Once EngineConnPluginServer has successfully loaded it, EngineConnManager can quickly start an instance of the engine fo [...]
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Finally, all the microservices of Linkis are summarized and classified, which are generally divided into three major levels: public enhancement services, computing governance services and microservice governance services, from the code hierarchy, microservice naming and installation directory structure, etc. To standardize the microservice system of Linkis1.0.  
+##  2. Main Feature
+1. **Strengthen computing governance**, Linkis 1.0 mainly strengthens the comprehensive management and control capabilities of computing governance from engine management, label management, ECM management, and resource management. It is based on the powerful management and control design concept of labeling. This makes Linkis 1.0 a solid step towards multi-IDC, multi-cluster, and multi-container.  
+2. **Simplify user implementation of new engines**: EnginePlugin integrates the related interfaces and classes that need to be implemented for a new engine, collapsing the Entrance-EngineManager-Engine three-tier module system that previously had to be split into a single interface. This simplifies the process and code for users to implement a new engine, so that as long as one class is implemented, a new engine can be connected.  
+3. **Full-stack computing storage engine support**, to achieve full coverage support for computing request scenarios (such as Spark), storage request scenarios (such as HBase), and resident cluster services (such as SparkStreaming).  
+4. **Improved advanced computing strategy capability**, add Orchestrator to implement rich computing task management strategies, and support tag-based analysis and orchestration.  
+## 3. Service Comparison
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Please refer to the following two pictures:  
+![Linkis0.X Service List](./../Images/Architecture/Linkis0.X-services-list.png)  
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;The list of Linkis1.0 microservices is as follows:  
+![Linkis1.0 Service List](./../Images/Architecture/Linkis1.0-services-list.png)  
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;As shown in the above two figures, Linkis1.0 divides services into three types: Computing Governance (CG) / Microservice Governance (MG) / Public Enhanced Service (PS). Among them:  
+1. A major change in computing governance is that Entrance and EngineConnManager services are no longer related to engines. To implement a new engine, only the EngineConnPlugin plug-in needs to be implemented. EngineConnPluginServer will dynamically load the EngineConnPlugin plug-in to achieve engine hot-plug update;
+2. Another major change in computing governance is that LinkisManager, as the management brain of Linkis, abstracts and defines AppManager (application management), ResourceManager (resource management) and LabelManager (label management);
+3. Microservice management service, merged and unified the Eureka and Gateway services in the 0.X part, and enhanced the functions of the Gateway service to support routing and forwarding according to Label;
+4. Public enhancement services, mainly to optimize and unify the BML services/context services/data source services/public services of the 0.X part, which is convenient for everyone to manage and view.  
+## 4. Introduction To Linkis Manager
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;As the management brain of Linkis, Linkis Manager is mainly composed of AppManager, ResourceManager and LabelManager.  
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;ResourceManager not only has Linkis0.X's resource management capabilities for Yarn and Linkis EngineManager, but also provides tag-based multi-level resource allocation and recycling capabilities, allowing ResourceManager to have full resource management capabilities across clusters and across computing resource types.  
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;AppManager will coordinate and manage all EngineConnManager and EngineConn, and the life cycle of EngineConn application, reuse, creation, switching, and destruction will be handed over to AppManager for management.  
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;The LabelManager will provide cross-IDC and cross-cluster EngineConn and EngineConnManager routing and management capabilities based on multi-level combined tags.  
+## 5. Introduction To Linkis EngineConnPlugin
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;EngineConnPlugin is mainly used to reduce the cost of accessing and deploying new computing storage. It truly enables users to "just implement a class to connect to a new computing storage engine; just execute a script to quickly deploy a new engine".  
+### 5.1 New Engine Implementation Comparison
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;The following are the relevant interfaces and classes that a user needs to implement to add a new engine in Linkis0.X:  
+![Linkis0.X How to implement a brand new engine](./../Images/Architecture/Linkis0.X-NewEngine-architecture.png)  
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;The following are the interfaces and classes that a user needs to implement to add a new engine in Linkis1.0.0:  
+![Linkis1.0 How to implement a brand new engine](./../Images/Architecture/Linkis1.0-NewEngine-architecture.png)  
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Among them, EngineConnResourceFactory and EngineLaunchBuilder are optional interfaces; only EngineConnFactory is required to be implemented.  
+### 5.2 New engine startup process
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;EngineConnPlugin provides the Server service to start and load all engine plug-ins. The following shows the entire process of a new engine starting up through EngineConnPlugin-Server:  
+![Linkis Engine start process](./../Images/Architecture/Linkis1.0-newEngine-initialization.png)  
+## 6. Introduction To Linkis EngineConn
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;EngineConn, the original Engine module, is the actual unit for Linkis to connect and interact with the underlying computing storage engine, and is the basis for Linkis to provide computing and storage capabilities.  
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;The EngineConn of Linkis1.0 is mainly composed of EngineConn and Executor. Among them:  
+
+1. EngineConn is the connector, which contains the session information between the engine and the specific cluster. It only acts as a connection, a client, and does not actually perform calculations.  
+
+2. Executor is the executor. As a real computing scene executor, it is the actual computing logic execution unit, and it also abstracts various specific capabilities of the engine, such as providing various services such as locking, access status, and log acquisition.
+
+3. Executor is created from the session information in EngineConn. An engine type can support multiple different types of computing tasks, each corresponding to the implementation of an Executor, and a computing task is submitted to the corresponding Executor for execution. In this way, the same engine can provide different services according to different computing scenarios. For example, the permanent engine does not need to be locked after it is started, and the one-time engine d [...]
+
+4. The advantage of separating Executor and EngineConn is that it keeps the Receiver from being coupled with business logic, retaining only the RPC communication function. Services are distributed across multiple Executor modules and abstracted into the several categories of engines that may be used, such as interactive computing engines, streaming engines and disposable engines, building a unified engine framework for later expansion.
+In this way, different types of engines can each load only the capabilities they need, which greatly reduces the redundancy of engine implementations.  
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;As shown below:  
+![Linkis EngineConn Architecture diagram](./../Images/Architecture/Linkis1.0-EngineConn-architecture.png)
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/How_to_add_an_EngineConn.md b/Linkis-Doc-master/en_US/Architecture_Documents/How_to_add_an_EngineConn.md
new file mode 100644
index 0000000..c28635b
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Architecture_Documents/How_to_add_an_EngineConn.md
@@ -0,0 +1,105 @@
+# How to add an EngineConn
+
+Adding an EngineConn is one of the core processes of the computing task preparation phase of Linkis computing governance. It mainly includes the following steps: first, the client side (Entrance or a user client) initiates a request for a new EngineConn to LinkisManager; then LinkisManager initiates a request to EngineConnManager to start an EngineConn based on the demands and label rules; finally, LinkisManager returns the usable EngineConn to the client side.
+
+Based on the figure below, let's explain the whole process in detail:
+
+![Process of adding a EngineConn](../Images/Architecture/Add_an_EngineConn/add_an_EngineConn_flow_chart.png)
+
+## 1. LinkisManager receives the requests from client side
+
+**Glossary:**
+
+- LinkisManager: The management center of Linkis computing governance capabilities. Its main responsibilities are:
+  1. Based on multi-level combined tags, provide users with available EngineConn after complex routing, resource management and load balancing.
+
+  2. Provide EC and ECM full life cycle management capabilities.
+
+  3. Provide users with multi-Yarn cluster resource management functions based on multi-level combined tags. It is mainly divided into three modules: AppManager, ResourceManager and LabelManager, which can support multi-active deployment and have the characteristics of high availability and easy expansion.
+
+After the AM module receives the Client's new EngineConn request, it first checks the validity of the request parameters. Secondly, it selects the most suitable EngineConnManager (ECM) through complex rules for the subsequent EngineConn startup. Next, it applies to the RM for the resources needed to start the EngineConn. Finally, it requests the ECM to create the EngineConn.
+
+The four steps will be described in detail below.
+
+### 1. Request parameter verification
+
+After the AM module receives the engine creation request, it will check the parameters. First, it checks the permissions of the requesting user and the creating user, and then checks the Label attached to the request. Since Label will be used in AM's subsequent creation process to find the ECM and record resource information, you need to ensure that the necessary Labels are present. At this stage, you must bring the Label with UserCreatorLabel (For example: hadoop-IDE) a [...]
+
+### 2. Select an EngineConnManager (ECM)
+
+ECM selection uses the Labels passed by the client to pick a suitable ECM service to start the EngineConn. In this step, the LabelManager first searches the registered ECMs using the Labels passed by the client, returning them ordered by label matching degree. After obtaining the registered ECM list, selection rules are applied to these ECMs. At this stage, rules such as availability check, resource surplus, and machine load have been imple [...]
+
+### 3. Apply resources required for EngineConn
+
+1. After obtaining the assigned ECM, AM will then determine how many resources the client's engine creation request will use by calling the EngineConnPluginServer service. Here, the resource request will be encapsulated, mainly including the Labels, the EngineConn startup parameters passed by the Client, and the user configuration parameters obtained from the Configuration module. The resource information is obtained by calling the ECP service through RPC.
+
+2. After the EngineConnPluginServer service receives the resource request, it will first find the corresponding engine tag through the passed tag, and select the EngineConnPlugin of the corresponding engine through the engine tag. Then it uses EngineConnPlugin's resource generator to calculate, from the engine startup parameters passed in by the client, the resources required for this new EngineConn application, and returns the result to LinkisManager. 
+
+   **Glossary:**
+
+- EngineConnPlugin: It is the interface that must be implemented when connecting a new computing storage engine to Linkis (a minimal sketch follows this list). This interface mainly includes several capabilities that the EngineConn must provide during the startup process, including the EngineConn resource generator, the EngineConn startup command generator, and the EngineConn connector. Please refer to the Spark engine implementation class for the specific implementation: [SparkEngineConnPlugin](https://github.com/WeBankFinTech/Link [...]
+- EngineConnPluginServer: It is a microservice that loads all the EngineConnPlugins and provides externally the required resource generation capabilities of EngineConn and EngineConn's startup command generation capabilities.
+- EngineConnResourceFactory: Calculate the total resources needed when EngineConn starts this time through the parameters passed in.
+- EngineConnLaunchBuilder: Through the incoming parameters, a startup command of the EngineConn is generated to provide the ECM to start the engine.
+3. After AM obtains the engine resources, it will then call the RM service to apply for resources. The RM service will use the incoming Label, ECM, and the resources applied for this time to make a resource judgment. First, it judges whether the resources of the client corresponding to the Label are sufficient, and then whether the resources of the ECM service are sufficient. If the resources are sufficient, the resource application is approved this time, and the resources of th [...]
+
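+A minimal sketch of the plugin surface described above (hedged: the method and type names approximate the Linkis interfaces but should be treated as assumptions):
+
+```scala
+// The three capabilities an EngineConnPlugin provides during startup.
+trait EngineConnResourceFactory {
+  // Estimate the resources a new EngineConn needs from the startup params.
+  def createEngineResource(params: Map[String, String]): AnyRef
+}
+
+trait EngineConnLaunchBuilder {
+  // Turn a build request into a concrete launch request for the ECM.
+  def buildEngineConnLaunchRequest(buildRequest: AnyRef): AnyRef
+}
+
+trait EngineConnFactory {
+  // Create the actual connection (session) to the underlying engine.
+  def createEngineConn(engineCreationContext: AnyRef): AnyRef
+}
+
+trait EngineConnPlugin {
+  def getEngineResourceFactory: EngineConnResourceFactory
+  def getEngineConnLaunchBuilder: EngineConnLaunchBuilder
+  def getEngineConnFactory: EngineConnFactory
+}
+```
+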
+### 4. Request ECM for engine creation
+
+1. After completing the resource application for the engine, AM will encapsulate the engine startup request, send it to the corresponding ECM via RPC for service startup, and obtain the instance object of EngineConn.
+2. AM will then determine whether EngineConn is successfully started and become available through the reported information of EngineConn. If it is, the result will be returned, and the process of adding an engine this time will end.
+
+## 2. ECM initiates EngineConn
+
+**Glossary:**
+
+- EngineConnManager: EngineConn's manager. Provides engine life-cycle management, and at the same time reports load information and its own health status to RM.
+- EngineConnBuildRequest: The start engine command passed by LinkisManager to ECM, which encapsulates all tag information, required resources and some parameter configuration information of the engine.
+- EngineConnLaunchRequest: Contains the BML materials, environment variables, ECM required local environment variables, startup commands and other information required to start an EngineConn, so that ECM can build a complete EngineConn startup script based on this.
+
+After ECM receives the EngineConnBuildRequest command passed by LinkisManager, it is mainly divided into three steps to start EngineConn: 
+
+1. Request EngineConnPluginServer to obtain the EngineConnLaunchRequest encapsulated by EngineConnPluginServer.
+2. Parse the EngineConnLaunchRequest and encapsulate it into an EngineConn startup script.
+3. Execute the startup script to start the EngineConn.
+
+### 2.1 EngineConnPluginServer encapsulates EngineConnLaunchRequest
+
+Obtain the EngineConn type and corresponding version that actually need to be started from the label information of the EngineConnBuildRequest, get the EngineConnPlugin for that EngineConn type from the memory of EngineConnPluginServer, and convert the EngineConnBuildRequest into an EngineConnLaunchRequest through the EngineConnLaunchBuilder of the EngineConnPlugin.
+
+### 2.2 Encapsulate EngineConn startup script
+
+After the ECM obtains the EngineConnLaunchRequest, it downloads the BML materials in the EngineConnLaunchRequest to the local, and checks whether the local necessary environment variables required by the EngineConnLaunchRequest exist. After the verification is passed, the EngineConnLaunchRequest is encapsulated into an EngineConn startup script.
+
+### 2.3 Execute startup script
+
+Currently, ECM only supports Bash commands for Unix systems, that is, only supports Linux systems to execute the startup script.
+
+Before startup, the sudo command is used to switch to the corresponding requesting user to execute the script to ensure that the startup user (ie, JVM user) is the requesting user on the Client side.
+
+After the startup script is executed, ECM will monitor the execution status and execution log of the script in real time. Once the execution status returns non-zero, it will immediately report the EngineConn startup failure to LinkisManager and the entire process is complete; otherwise, it will keep monitoring the log and status of the startup script until the script execution is complete.
+
+## 3. EngineConn initialization
+
+After ECM executes EngineConn's startup script, the EngineConn microservice is officially launched.
+
+**Glossary:**
+
+- EngineConn microservice: Refers to the actual microservices that include an EngineConn and one or more Executors to provide computing power for computing tasks. When we talk about adding an EngineConn, we actually mean adding an EngineConn microservice.
+- EngineConn: The engine connector is the actual connection unit with the underlying computing storage engine, and contains the session information with the actual engine. The difference between it and Executor is that EngineConn only acts as a connection and a client, and does not actually perform calculations. For example, SparkEngineConn, its session information is SparkSession.
+- Executor: As a real computing storage scenario executor, it is the actual computing storage logic execution unit. It abstracts the various capabilities of EngineConn and provides multiple different architectural capabilities such as interactive execution, subscription execution, and responsive execution.
+
+The initialization of EngineConn microservices is generally divided into three stages:
+
+1. Initialize the EngineConn of the specific engine. First use the command line parameters of the Java main method to encapsulate an EngineCreationContext that contains relevant label information, startup information, and parameter information, and initialize the EngineConn through the EngineCreationContext to establish the connection between the EngineConn and the underlying Engine. For example, SparkEngineConn will initialize a SparkSession at this stage, which is used to establish a co [...]
+2. Initialize the Executor. After the EngineConn is initialized, the corresponding Executor will be initialized according to the actual usage scenario to provide service capabilities for subsequent users. For example, the SparkEngineConn in the interactive computing scenario will initialize a series of Executors that can be used to submit and execute SQL, PySpark, and Scala code, and support the Client to submit and execute SQL, PySpark, Scala and other code to the SparkEng [...]
+3. Report the heartbeat to LinkisManager regularly, and wait for the EngineConn to exit. When the underlying engine corresponding to the EngineConn is abnormal, or the maximum idle time is exceeded, or the Executor has finished execution, or the user manually kills it, the EngineConn automatically ends and exits (a sketch of these stages follows this list).
+
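+A hedged pseudocode sketch of these three stages (all helper names are assumptions):
+
+```scala
+object EngineConnBootSketch {
+  type EngineCreationContext = Map[String, String]
+
+  def main(args: Array[String]): Unit = {
+    // Stage 1: wrap command-line args into a creation context and connect to
+    // the underlying engine (e.g. a SparkEngineConn builds a SparkSession here).
+    val context: EngineCreationContext =
+      args.grouped(2).collect { case Array(k, v) => k -> v }.toMap
+    val engineConn = initEngineConn(context)
+
+    // Stage 2: initialize the Executors for the actual usage scenario.
+    val executors = initExecutors(engineConn, context)
+
+    // Stage 3: heartbeat to LinkisManager until an exit condition is hit
+    // (engine error, max idle time exceeded, execution finished, user kill).
+    while (!shouldExit(engineConn)) {
+      reportHeartbeat(engineConn)
+      Thread.sleep(3000)
+    }
+  }
+
+  // Placeholders standing in for the engine-specific logic.
+  def initEngineConn(ctx: EngineCreationContext): AnyRef = new Object
+  def initExecutors(conn: AnyRef, ctx: EngineCreationContext): Seq[AnyRef] = Nil
+  def reportHeartbeat(conn: AnyRef): Unit = ()
+  def shouldExit(conn: AnyRef): Boolean = true
+}
+```
+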
+----
+
+At this point, the process of adding a new EngineConn is basically complete. Finally, let's make a summary:
+
+- The client initiates a request for adding EngineConn to LinkisManager.
+- LinkisManager checks the legitimacy of the parameters, first selects the appropriate ECM according to the label, then confirms the resources required for this new EngineConn according to the user's request, applies for resources from the RM module of LinkisManager, and requires ECM to start a new EngineConn as required after the application is passed.
+- ECM first requests EngineConnPluginServer to obtain an EngineConnLaunchRequest containing BML materials, environment variables, ECM required local environment variables, startup commands and other information needed to start an EngineConn, and then encapsulates the startup script of EngineConn, and finally executes the startup script to start the EngineConn.
+- EngineConn initializes the EngineConn of a specific engine, and then initializes the corresponding Executor according to the actual usage scenario, and provides service capabilities for subsequent users. Finally, report the heartbeat to LinkisManager regularly, and wait for the normal end or termination by the user.
+
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Job_submission_preparation_and_execution_process.md b/Linkis-Doc-master/en_US/Architecture_Documents/Job_submission_preparation_and_execution_process.md
new file mode 100644
index 0000000..adb2628
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Architecture_Documents/Job_submission_preparation_and_execution_process.md
@@ -0,0 +1,138 @@
+# Job submission, preparation and execution process
+
+The submission and execution of computing tasks (Jobs) is the core capability provided by Linkis. It interacts with almost all modules in the Linkis computing governance architecture and occupies a core position in Linkis.
+
+The whole process, starting with the submission of a user's computing task from the client and ending with the return of the final result, is divided into three stages: submission -> preparation -> execution. The details are shown in the following figure.
+
+![The overall flow chart of computing tasks](../Images/Architecture/Job_submission_preparation_and_execution_process/overall.png)
+
+Among them:
+
+- Entrance, as the entrance to the submission stage, provides task reception, scheduling and job information forwarding capabilities. It is the unified entrance for all computing tasks. It will forward computing tasks to Orchestrator for scheduling and execution.
+- Orchestrator, as the entrance to the preparation phase, mainly provides job analysis, orchestration and execution capabilities.
+- Linkis Manager: The management center of computing governance capabilities. Its main responsibilities are as follows:
+
+  1. ResourceManager: Not only has the resource management capabilities of Yarn and Linkis EngineConnManager, but also provides tag-based multi-level resource allocation and recovery capabilities, allowing ResourceManager to have full resource management capabilities across clusters and across computing resource types;
+  2. AppManager: Coordinates and manages all EngineConnManager and EngineConn instances; the life cycle of EngineConn application, reuse, creation, switching, and destruction is handed over to AppManager for management;
+  3. LabelManager: Based on multi-level combined labels, it provides label support for the routing and management capabilities of EngineConn and EngineConnManager across IDCs and across clusters;
+  4. EngineConnPluginServer: Externally provides the resource generation capabilities required to start an EngineConn and the EngineConn startup command generation capabilities.
+- EngineConnManager: It is the manager of EngineConn, which provides engine life-cycle management, and at the same time reports load information and its own health status to RM.
+- EngineConn: It is the actual connector between Linkis and the underlying computing storage engines. All user computing and storage tasks will eventually be submitted to the underlying computing storage engine by EngineConn. According to different user scenarios, EngineConn provides full-stack computing capability framework support for interactive computing, streaming computing, off-line computing, and data storage tasks.
+
+## 1. Submission Stage
+
+The submission phase is mainly the interaction of Client -> Linkis Gateway -> Entrance, and the process is as follows:
+
+![Flow chart of submission phase](../Images/Architecture/Job_submission_preparation_and_execution_process/submission.png)
+
+1. First, the Client (such as the front end or the client) initiates a Job request, and the job request information is simplified as follows (for the specific usage of Linkis, please refer to [How to use Linkis](https://github.com/WeBankFinTech/Linkis-Doc/blob/master/en_US/User_Manual/How_To_Use_Linkis.md)):
+```
+POST /api/rest_j/v1/entrance/submit
+```
+
+```json
+{
+    "executionContent": {"code": "show tables", "runType": "sql"},
+    "params": {"variable": {}, "configuration": {}},  // optional
+    "source": {"scriptPath": "file:///1.hql"}, // optional, only used to record the source of the code
+    "labels": {
+        "engineType": "spark-2.4.3",  // specify the engine
+        "userCreator": "johnnwnag-IDE"  // specify the submitting user and system
+}
+```
+
+2. After Linkis-Gateway receives the request, it determines the microservice name for routing and forwarding from the serviceName in the URI ``/api/rest_j/v1/${serviceName}/.+``. Here Linkis-Gateway parses out the name as entrance, and the Job is forwarded to the Entrance microservice. It should be noted that if the user specifies a routing label, the Entrance microservice instance with the corresponding label will be selected for forwarding according to the routing label ins [...]
+3. After Entrance receives the Job request, it will first simply verify the legitimacy of the request, then use RPC to call JobHistory to persist the job information, encapsulate the Job request as a computing task, put it in the scheduling queue, and wait for it to be consumed by a consumption thread (a sketch of the grouping model appears after this list).
+4. The scheduling queue will open up a consumption queue and a consumption thread for each group. The consumption queue is used to store the user computing tasks that have been preliminarily encapsulated. The consumption thread will continue to take computing tasks from the consumption queue for consumption in a FIFO manner. The current default grouping method is Creator + User (that is, submission system + user). Therefore, even if it is the same user, as long as it is a computing task  [...]
+5. After the consuming thread takes out the calculation task, it will submit the calculation task to Orchestrator, which officially enters the preparation phase.
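+
+A sketch of the per-group FIFO consumption described in steps 3-5 (illustrative Scala only; the real implementation lives in Linkis's scheduler module):
+
+```scala
+import java.util.concurrent.LinkedBlockingQueue
+import scala.collection.mutable
+
+object EntranceGroupSketch {
+  class GroupConsumer(val groupName: String) extends Thread {
+    val queue = new LinkedBlockingQueue[Runnable]()
+    override def run(): Unit =
+      while (!isInterrupted) queue.take().run() // FIFO: blocks until a task arrives
+  }
+
+  private val consumers = mutable.Map.empty[String, GroupConsumer]
+
+  // Groups are keyed by Creator + User, so the same user submitting from two
+  // systems lands in two independent queues and consumer threads.
+  def submit(creator: String, user: String, task: Runnable): Unit = synchronized {
+    val consumer = consumers.getOrElseUpdate(s"$creator-$user", {
+      val c = new GroupConsumer(s"$creator-$user")
+      c.setDaemon(true)
+      c.start()
+      c
+    })
+    consumer.queue.put(task)
+  }
+}
+```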
+
+## 2. Preparation Stage
+
+There are two main processes in the preparation phase. One is to apply for an available EngineConn from LinkisManager, used to submit and execute the subsequent computing tasks. The other is for Orchestrator to orchestrate the computing tasks submitted by Entrance and convert a user's computing request into a physical execution tree, which is handed over to the execution phase where the computing task is actually executed. 
+
+#### 2.1 Apply to LinkisManager for available EngineConn
+
+If the user has a reusable EngineConn in LinkisManager, the EngineConn is directly locked and returned to Orchestrator, and the entire application process ends.
+
+How to define a reusable EngineConn? It refers to an EngineConn that can match all the label requirements of the computing task and whose own health status is Healthy (the load is low and the actual status is Idle). Then, all the EngineConns that meet the conditions are sorted and selected according to the rules, and finally the best one is locked.
+
+If the user does not have a reusable EngineConn, a process to request a new EngineConn will be triggered at this time. Regarding the process, please refer to: [How to add an EngineConn](How_to_add_an_EngineConn.md).
+
+#### 2.2 Orchestrate a computing task
+
+Orchestrator is mainly responsible for arranging a computing task (JobReq) into a physical execution tree (PhysicalTree) that can be actually executed, and providing the execution capabilities of the Physical tree.
+
+Here we first focus on Orchestrator's computing task scheduling capabilities. A flow chart is shown below:
+
+![Orchestration flow chart](../Images/Architecture/Job_submission_preparation_and_execution_process/orchestrate.png)
+
+The main process is as follows:
+
+- Converter: Complete the conversion of the JobReq (task request) submitted by the user to Orchestrator's ASTJob. This step will perform parameter check and information supplementation on the calculation task submitted by the user, such as variable replacement, etc.
+- Parser: Complete the analysis of ASTJob. Split ASTJob into an AST tree composed of ASTJob and ASTStage.
+- Validator: Complete the inspection and information supplement of ASTJob and ASTStage, such as code inspection, necessary Label information supplement, etc.
+- Planner: Convert an AST tree into a Logical tree. The Logical tree at this time has been composed of LogicalTask, which contains all the execution logic of the entire computing task.
+- Optimizer: Convert a Logical tree to a Physical tree and optimize the Physical tree.
+
+In a physical tree, the majority of nodes are computing strategy logic. Only the middle ExecTask truly encapsulates the execution logic which will be further submitted to and executed at EngineConn. As shown below:
+
+![Physical Tree](../Images/Architecture/Job_submission_preparation_and_execution_process/physical_tree.png)
+
+The execution logic encapsulated by JobExecTask and StageExecTask in the Physical tree depends on the specific type of computing strategy.
+
+For example, under the multi-active computing strategy, for a computing task submitted by a user, the execution logic submitted to EngineConn of different clusters for execution is encapsulated in two ExecTasks, and the related strategy logic is reflected in the parent node (StageExecTask(End)) of the two ExecTasks.
+
+Here, we take the multi-reading scenario under the multi-active computing strategy as an example.
+
+In the multi-reading scenario, only one ExecTask needs to return a result; once that result is returned, the Physical tree can be marked as successful. However, the Physical tree only has the ability to execute sequentially according to dependencies and cannot terminate the execution of individual nodes. Once a node is canceled or fails to execute, the entire Physical tree would be marked as failed. At this time, StageExecTask (End) is needed to ensure that the Physical tree can not only ca [...]
+
+The orchestration process of Linkis Orchestrator is similar to that of many SQL parsing engines (such as the SQL parsers of Spark and Hive). But in fact, the orchestration capability of Linkis Orchestrator is built for the computing governance domain, addressing users' different computing governance needs, whereas a SQL parsing engine is a parsing and orchestration engine oriented to the SQL language. Here is a simple distinction:
+
+1. What Linkis Orchestrator mainly solves is the orchestration requirements that different computing tasks place on computing strategies. For example, in order to be multi-active, Orchestrator will, for a computing task submitted by a user, compile a Physical tree based on the "multi-active" computing strategy requirements, so as to submit this computing task to multiple clusters. In the process of constructing the entire Physical tree, various possible abnormal sc [...]
+2. The orchestration ability of Linkis Orchestrator has nothing to do with the programming language. In theory, as long as an engine has been adapted to Linkis, all the programming languages it supports can be orchestrated, while the SQL parsing engine only cares about the analysis and execution of SQL, and is only responsible for parsing a piece of SQL into an executable Physical tree and finally computing the result.
+3. Linkis Orchestrator also has the ability to parse SQL, but SQL parsing is just one of Orchestrator Parser's analytic implementations for the SQL programming language. The Parser of Linkis Orchestrator is also considering introducing Apache Calcite to parse SQL, supporting splitting a user SQL that spans multiple computing engines (which must be computing engines that Linkis has docked) into multiple sub-SQLs and submitting them to each corresponding engine during the execution phase. Finally,  [...]
+
+Please refer to [Orchestrator Architecture Design](https://github.com/WeBankFinTech/Linkis-Doc/blob/master/en_US/Architecture_Documents/Orchestrator/Orchestrator_architecture_doc.md) for more details. 
+
+After the analysis and orchestration by Linkis Orchestrator, the computing task has been transformed into an executable Physical tree. Orchestrator will submit the Physical tree to Orchestrator's Execution module and enter the final execution stage.
+
+## 3. Execution Stage
+
+The execution stage is mainly divided into the following two steps, these two steps are the last two phases of capabilities provided by Linkis Orchestrator:
+
+![Flow chart of the execution stage](../Images/Architecture/Job_submission_preparation_and_execution_process/execution.png)
+
+The main process is as follows:
+
+- Execution: Analyze the dependencies of the Physical tree, and execute them sequentially from the leaf nodes according to the dependencies.
+- Reheater: Once the execution of a node in the Physical tree is completed, it will trigger a reheat. Reheating allows the Physical tree to be dynamically adjusted according to the real-time execution. For example: if it is detected that a leaf node fails to execute and it supports retry (if the failure is caused by throwing a ReTryExecption), the Physical tree will be automatically adjusted, and a retry parent node with exactly the same content is added to the leaf node.
+
+Let us go back to the Execution stage, where we focus on the execution logic of the ExecTask node that encapsulates the user computing task submitted to EngineConn.
+
+1. As mentioned earlier, the first step in the preparation phase is to obtain a usable EngineConn from LinkisManager. After ExecTask gets this EngineConn, it will submit the user's computing task to EngineConn through an RPC request.
+2. After EngineConn receives the computing task, it will asynchronously submit it to the underlying computing storage engine through the thread pool, and then immediately return an execution ID.
+3. After ExecTask gets this execution ID, it can then use this ID to asynchronously pull the execution information of the computing task (such as status, progress, log, result set, etc.).
+4. At the same time, EngineConn will monitor the execution of the underlying computing storage engine in real time through multiple registered Listeners. If the computing storage engine does not support registering Listeners, EngineConn will start a daemon thread for the computing task and periodically pull the execution status from the computing storage engine.
+5. EngineConn will report the execution status back to the microservice where Orchestrator is located in real time through RPC requests.
+6. After the Receiver of the microservice receives the execution status, it will broadcast it through the ListenerBus, and the Orchestrator Execution will consume the event and dynamically update the execution status of the Physical tree.
+7. The result set generated by the computing task will be written to storage media such as HDFS on the EngineConn side, and only the result set path is returned through RPC. Execution consumes the event and broadcasts the obtained result set path through ListenerBus, so that the Listener registered by Entrance with Orchestrator can consume the result set path and persist it to JobHistory.
+8. After the execution of the computing task on the EngineConn side is completed, through the same logic, the Execution will be triggered to update the state of the ExecTask node of the Physical tree, so that the Physical tree will continue to execute until the entire tree is completely executed. At this time, Execution will broadcast the completion status of the calculation task through ListenerBus.
+9. After the Listener that Entrance registered with the Orchestrator consumes the state event, it updates the job state in JobHistory, and the entire task execution is completed.
+
+----
+
+Finally, let's take a look at how the client side knows the state of the calculation task and obtains the calculation result in time, as shown in the following figure:
+
+![Results acquisition process](../Images/Architecture/Job_submission_preparation_and_execution_process/result_acquisition.png)
+
+The specific process is as follows:
+
+1. The client periodically polls to request Entrance to obtain the status of the computing task.
+2. Once the status flips to success, it sends a request for job information to JobHistory and gets all the result set paths.
+3. Initiate a query file content request to PublicService through the result set path, and obtain the content of the result set (see the sketch after this list).
+
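+A hedged sketch of this polling loop (the three helpers are placeholders standing in for the corresponding REST requests, not the actual Linkis SDK):
+
+```scala
+object ResultPollingSketch {
+  // Placeholders for: poll Entrance, query JobHistory, read via PublicService.
+  def queryStatus(taskId: String): String = "Succeed"
+  def queryResultSetPaths(taskId: String): Seq[String] = Seq("/tmp/result/_0.dolphin")
+  def readResultSet(path: String): String = s"content of $path"
+
+  def fetchResult(taskId: String): Seq[String] = {
+    // 1. Poll the task status until it reaches a terminal state.
+    var status = queryStatus(taskId)
+    while (status != "Succeed" && status != "Failed") {
+      Thread.sleep(1000)
+      status = queryStatus(taskId)
+    }
+    // 2 + 3. On success, get all result set paths, then read their contents.
+    if (status == "Succeed") queryResultSetPaths(taskId).map(readResultSet)
+    else Seq.empty
+  }
+}
+```
+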
+Since then, the entire process of job submission -> preparation -> execution has been completed.
+
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Microservice_Governance_Services/Gateway.md b/Linkis-Doc-master/en_US/Architecture_Documents/Microservice_Governance_Services/Gateway.md
new file mode 100644
index 0000000..02c1db2
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Architecture_Documents/Microservice_Governance_Services/Gateway.md
@@ -0,0 +1,34 @@
+## Gateway Architecture Design
+
+#### Brief
+The Gateway is the primary entry point for Linkis to accept client and external requests, such as receiving job execution requests, and then forwarding the execution requests to specific eligible Entrance services.
+The bottom layer of the entire architecture is implemented based on "SpringCloudGateway". The upper layer is superimposed with module designs related to Http request parsing, session permissions, label routing and WebSocket multiplex forwarding. The overall architecture can be seen as follows.
+### Architecture Diagram
+
+![Gateway diagram of overall architecture](../../Images/Architecture/Gateway/gateway_server_global.png)
+
+#### Architecture Introduction
+- gateway-core: Gateway's core interface definition module, mainly defines the "GatewayParser" and "GatewayRouter" interfaces, corresponding to request parsing and routing according to the request; at the same time, it also provides the permission verification tool class named "SecurityFilter".
+- spring-cloud-gateway: This module integrates all dependencies related to "SpringCloudGateway", process and forward requests of the HTTP and WebSocket protocol types respectively.
+- gateway-server-support: The driver module of Gateway, relies on the spring-cloud-gateway module to implement "GatewayParser" and "GatewayRouter" respectively, among which "DefaultLabelGatewayRouter" provides the function of label routing.
+- gateway-httpclient-support: Provides a generic Http client class for accessing Gateway services, based on which more client implementations can be built.
+- instance-label: External instance label module, providing a service interface named "InsLabelService" which is used to create routing labels and associate them with application instances.
+
+The detailed design involved is as follows:
+
+#### 1. Request Routing And Forwarding (With Label Information)
+First, after the dispatcher of "SpringCloudGateway", the request enters the filter list of the gateway, and then enters the two main logic of "GatewayAuthorizationFilter" and "SpringCloudGatewayWebsocketFilter". 
+The filter integrates "DefaultGatewayParser" and "DefaultGatewayRouter" classes. From Parser to Router, execute the corresponding parse and route methods. 
+"DefaultGatewayParser" and "DefaultGatewayRouter" classes also contain custom Parser and Router, which are executed in the order of priority.
+Finally, the service instance selected by the "DefaultGatewayRouter" is handed over to the upper layer for forwarding.
+Now, we take the job execution request forwarding with label information as an example, and draw the following flowchart:  
+![Gateway Request Routing](../../Images/Architecture/Gateway/gateway_server_dispatcher.png)
+
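+A hedged Scala sketch of the two core gateway-core interfaces named above (signatures are inferred from this description, not the exact definitions):
+
+```scala
+// Carries the parsed request, session and label information through the chain.
+trait GatewayContext
+
+case class ServiceInstance(applicationName: String, instance: String)
+
+trait GatewayParser {
+  // Extract the target service name and any routing labels from the request.
+  def parse(gatewayContext: GatewayContext): Unit
+}
+
+trait GatewayRouter {
+  // Pick the concrete service instance the request will be forwarded to,
+  // e.g. DefaultLabelGatewayRouter matches instances by routing label.
+  def route(gatewayContext: GatewayContext): ServiceInstance
+}
+```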
+
+#### 2. WebSocket Connection Forwarding Management
+By default, "Spring Cloud Gateway" only routes and forwards a WebSocket request once, and cannot perform dynamic switching. 
+But under Linkis's gateway architecture, each information interaction will be accompanied by a corresponding uri address to guide routing to different backend services.
+In addition to the "WebSocketService" which is responsible for connecting with the front-end and the client, 
+and the "WebSocketClient" which is responsible for connecting with the backend service, a series of "GatewayWebSocketSessionConnection" lists are cached in the middle.
+A "GatewayWebSocketSessionConnection" represents the connection between a session and multiple backend service instances.  
+![Gateway WebSocket Forwarding](../../Images/Architecture/Gateway/gatway_websocket.png)
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Microservice_Governance_Services/README.md b/Linkis-Doc-master/en_US/Architecture_Documents/Microservice_Governance_Services/README.md
new file mode 100644
index 0000000..9dc4f83
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Architecture_Documents/Microservice_Governance_Services/README.md
@@ -0,0 +1,32 @@
+## **Background**
+
+Microservice governance includes three main microservices: Gateway, Eureka and Open Feign.
+It is used to solve Linkis's service discovery and registration, unified gateway, request forwarding, inter-service communication, load balancing and other issues. 
+At the same time, Linkis 1.0 will also provide support for Nacos; the entire Linkis is a complete microservice architecture, and each business process requires multiple microservices to complete.
+
+## **Architecture diagram**
+
+![](../../Images/Architecture/linkis-microservice-gov-01.png)
+
+## **Architecture Introduction**
+
+1. Linkis Gateway  
+As the gateway entrance of Linkis, Linkis Gateway is mainly responsible for request forwarding, user access authentication and WebSocket communication. 
+The Gateway of Linkis 1.0 also added Label-based routing and forwarding capabilities. 
+A WebSocket routing forwarder is implemented based on Spring Cloud Gateway in Linkis; it is used to establish a WebSocket connection with the client.
+After the connection is established, it will automatically analyze the client's WebSocket request and determine which backend microservice the request should be forwarded to through the rules, 
+then the request is forwarded to the corresponding backend microservice instance.  
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[Linkis Gateway](Gateway.md)
+
+2. Linkis Eureka  
+Mainly responsible for service registration and discovery. Eureka consists of multiple instances (service instances). These service instances can be divided into two types: Eureka Server and Eureka Client. 
+For ease of understanding, we divide Eureka Client into Service Provider and Service Consumer. Eureka Server provides service registration and discovery. 
+The Service Provider registers its own service with Eureka, so that service consumers can find it.
+The Service Consumer obtains a list of registered services from Eureka, so that it can consume services.
+
+3. Linkis has implemented its own underlying RPC communication scheme based on Feign. As the underlying communication solution, Linkis RPC integrates the SDK into the microservices that need it. 
+A microservice can be both the request caller and the request receiver.
+As the request caller, the Receiver of the target microservice will be requested through the Sender.
+As the request receiver, it provides a Receiver to process the request sent by the Sender and complete a synchronous or asynchronous response (a sketch follows the diagram below).
+   
+   ![](../../Images/Architecture/linkis-microservice-gov-03.png)
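+
+   A hedged sketch of this Sender/Receiver pattern (method names approximate Linkis RPC; treat the signatures as assumptions):
+
+   ```scala
+   trait Sender {
+     def send(message: Any): Unit // one-way, asynchronous message
+     def ask(message: Any): Any   // synchronous request that waits for a reply
+   }
+
+   trait Receiver {
+     def receive(message: Any, sender: Sender): Unit        // handle async message
+     def receiveAndReply(message: Any, sender: Sender): Any // handle and reply
+   }
+   ```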
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/BML.md b/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/BML.md
new file mode 100644
index 0000000..69e671d
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/BML.md
@@ -0,0 +1,93 @@
+## Background
+
+BML (Material Library Service) is the material management system of Linkis. It is mainly used to store various file data of users, including user scripts, resource files, third-party Jar packages, etc., and can also store the class libraries that the engine needs at runtime.
+
+It has the following functions:
+
+1) Support for various types of files. Both text and binary files are supported. Users in the field of big data can store their script files and material compression packages in the system.
+
+2) The service is stateless and supports multi-instance deployment to achieve high service availability. When the system is deployed, multiple instances can be deployed; each instance provides services independently without interfering with the others, and all information is stored in the database for sharing.
+
+3) Various ways of use. A Rest interface and an SDK are provided; users can choose according to their needs.
+
+4) Files are appended to avoid too many small HDFS files. Too many small HDFS files degrade the overall performance of HDFS; we adopt a file-append approach to combine multiple versions of a resource file into one large file, effectively reducing the number of files on HDFS.
+
+5) Accurate permission control for the safe storage of user resource file content. Resource files often contain important content, and users may want them to be readable only by themselves.
+
+6) Provide life cycle management of file upload, update, download and other operational tasks.
+
+## Architecture diagram
+
+![BML Architecture Diagram](../../Images/Architecture/bml-02.png)
+
+## Schema description
+
+1. The Service layer includes resource management, uploading resources, downloading resources, sharing resources, and project resource management.
+
+Resource management is responsible for basic operations on resources such as adding, deleting, modifying and querying, for controlling access permissions, and for checking whether files are expired.
+
+2. File version control
+   Each BML resource file has version information. Each update of the same resource generates a new version; historical version query and download are also supported. BML uses the version information table to record the offset and size of each version of a resource file in HDFS storage, allowing multiple versions of data to be stored in one HDFS file.
+
+3. Resource file storage
+   HDFS files are mainly used as the actual data storage, which effectively ensures that material library files are not lost. Files are appended to avoid too many small HDFS files.
+
+### Core Process
+
+**upload files:**
+
+1. Determine the operation type of the file uploaded by the user: whether it is a first upload or an update. If it is a first upload, a new resource information record needs to be added. The system generates a globally unique resource_id and a resource_location for this resource. The first version A1 of resource A needs to be stored at the resource_location in the HDFS file system; after storing it, the first version is marked as V00001. If it is a [...]
+
+2. Upload the file stream to the specified HDFS file. If it is an update, append it to the end of the previous content.
+
+3. Add a new version record. Each upload generates a new version record. In addition to the metadata information of this version, the most important thing is to record the storage location of this version of the file, including the file path, start position and end position.
+
+**download file:**
+
+1. When users download resources, they need to specify two parameters: resource_id and version. If version is not specified, the latest version is downloaded by default.
+
+2. After the user passes the two parameters resource_id and version to the system, the system queries the resource_version table, finds the corresponding resource_location, start_byte and end\_byte to download, uses the skipByte method of stream processing to skip the first (start_byte - 1) bytes, and then reads up to end_byte. After the read succeeds, the stream information is returned to the user. (A sketch of this byte-range read follows the list below.)
+
+3. Insert a successful download record into resource_download_history.
+
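+The byte-range read described above can be illustrated with a short sketch. This is a minimal example assuming a plain `InputStream` opened on the resource file (in the real service this would be an HDFS input stream); the `startByte`/`endByte` values would come from the `resource_version` table:
+
+```java
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+
+public class RangedResourceReader {
+
+    /**
+     * Copies the bytes [startByte, endByte] of a resource file to the caller.
+     * Illustrative only: a real implementation would open a stream at the
+     * resource_location recorded for the requested version.
+     */
+    public static void readVersion(InputStream in, OutputStream out,
+                                   long startByte, long endByte) throws IOException {
+        long toSkip = startByte - 1;              // skip the versions stored before this one
+        while (toSkip > 0) {
+            long skipped = in.skip(toSkip);
+            if (skipped <= 0) throw new IOException("Unexpected end of stream");
+            toSkip -= skipped;
+        }
+        long remaining = endByte - startByte + 1; // bytes belonging to this version
+        byte[] buf = new byte[8192];
+        while (remaining > 0) {
+            int n = in.read(buf, 0, (int) Math.min(buf.length, remaining));
+            if (n < 0) throw new IOException("Unexpected end of stream");
+            out.write(buf, 0, n);
+            remaining -= n;
+        }
+    }
+}
+```
+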
+## Database Design
+
+1. Resource information table (resource)
+
+| Field name | Function | Remarks |
+|-------------------|------------------------------|----------------------------------|
+| resource_id | A string that uniquely identifies a resource globally | UUID can be used for identification |
+| resource_location | The location where resources are stored | For example, hdfs:///tmp/bdp/\${USERNAME}/ |
+| owner | The owner of the resource | e.g. zhangsan |
+| create_time | Record creation time | |
+| is_share | Whether to share | 0 means not to share, 1 means to share |
+| update\_time | Last update time of the resource | |
+| is\_expire | Whether the record resource expires | |
+| expire_time | Record resource expiration time | |
+
+2. Resource version information table (resource_version)
+
+| Field name | Function | Remarks |
+|-------------------|--------------------|----------|
+| resource_id | Uniquely identifies the resource | Joint primary key |
+| version | The version of the resource file | |
+| start_byte | Start byte count of resource file | |
+| end\_byte | End bytes of resource file | |
+| size | Resource file size | |
+| resource_location | Resource file placement location | |
+| start_time | Record upload start time | |
+| end\_time | End time of record upload | |
+| updater | Record update user | |
+
+3. Resource download history table (resource_download_history)
+
+| Field | Function | Remarks |
+|-------------|---------------------------|--------------------------------|
+| resource_id | Record the resource_id of the downloaded resource | |
+| version | Record the version of the downloaded resource | |
+| downloader | Record downloaded users | |
+| start\_time | Record download time | |
+| end\_time | Record end time | |
+| status | Whether the download succeeded | 0 means success, 1 means failure |
+| err\_msg | Failure reason | null means success; otherwise records the failure reason |
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Cache.md b/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Cache.md
new file mode 100644
index 0000000..71d83d3
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Cache.md
@@ -0,0 +1,95 @@
+## **CSCache Architecture**
+### **Issues to be resolved**
+
+### 1.1 Memory structure issues to be solved:
+
+1. Support splitting by ContextType: speed up storage and query performance
+
+2. Support splitting by different ContextIDs: needed to achieve metadata isolation between ContextIDs
+
+3. Support LRU: Recycle according to specific algorithm
+
+4. Support searching by keywords: Support indexing by keywords
+
+5. Support indexing: support indexing directly through ContextKey
+
+6. Support traversal: need to support traversal according to ContextID and ContextType
+
+### 1.2 Loading and parsing issues to be solved:
+
+1. Support parsing ContextValue into an in-memory data structure: parsing of the ContextKey and value is needed to find the corresponding keywords.
+
+2. Need to interface with the Persistence module to complete the loading and parsing of the ContextID content
+
+### 1.3 Metric and cleaning mechanism issues to be solved:
+
+1. When JVM memory is insufficient, cleaning can be based on memory usage and frequency of use
+
+2. Support statistics on the memory usage of each ContextID
+
+3. Support statistics on the frequency of use of each ContextID
+
+## **ContextCache Architecture**
+
+The architecture of ContextCache is shown in the following figure:
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-01.png)
+
+1. ContextService: provides the external interfaces, including add, delete, update and query;
+
+2. Cache: stores the context information, mapping ContextKey to ContextValue;
+
+3. Index: the established keyword index, which stores the mapping between the keywords of the context information and the ContextKeys;
+
+4. Parser: performs keyword parsing of the context information;
+
+5. LoadModule: loads information from the persistence layer when the ContextCache does not hold the corresponding ContextID information;
+
+6. AutoClear: performs on-demand cleaning of the ContextCache when JVM memory is insufficient;
+
+7. Listener: collects metric information of the ContextCache, such as memory usage and access counts.
+
+## **ContextCache storage structure design**
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-02.png)
+
+The storage structure of ContextCache is divided into three layers:
+
+**ContextCache:** stores the mapping between ContextID and ContextIDValue, and can evict ContextIDs according to the LRU algorithm;
+
+**ContextIDValue:** stores the CSKeyValueContext that holds all context information and indexes of a ContextID, and counts the memory usage and access records of the ContextID.
+
+**CSKeyValueContext:** contains the CSInvertedIndexSet, which stores keyword indexes categorized by type, and the CSKeyValueMapSet, which stores ContextKeys and ContextValues.
+
+CSInvertedIndexSet: categorizes and stores keyword indexes by CSType
+
+CSKeyValueMapSet: categorizes and stores context information by CSType
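+
+A minimal sketch of the first layer follows: an LRU mapping from ContextID to ContextIDValue built on `LinkedHashMap`. The class shape and capacity policy are illustrative assumptions; the real ContextCache also evicts based on JVM memory pressure, which is not shown here.
+
+```java
+import java.util.LinkedHashMap;
+import java.util.Map;
+
+// Illustrative stand-in for the ContextIDValue described above.
+class ContextIDValue { /* context information, indexes, usage metrics */ }
+
+public class ContextCacheLru extends LinkedHashMap<String, ContextIDValue> {
+    private final int maxEntries;
+
+    public ContextCacheLru(int maxEntries) {
+        super(16, 0.75f, true);   // accessOrder = true yields LRU iteration order
+        this.maxEntries = maxEntries;
+    }
+
+    @Override
+    protected boolean removeEldestEntry(Map.Entry<String, ContextIDValue> eldest) {
+        // The real cache would also consider memory usage before evicting.
+        return size() > maxEntries;
+    }
+}
+```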
+
+## **ContextCache UML Class Diagram Design**
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-03.png)
+
+## **ContextCache Timing Diagram**
+
+The following figure shows the overall process of using ContextID, KeyWord and ContextType to look up the corresponding ContextKeyValue in the ContextCache.
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-04.png)
+
+Note: the ContextIDValueGenerator pulls the Array[ContextKeyValue] of the ContextID from the persistence layer, and parses the ContextKeyValue keyword index and content through the ContextKeyValueParser.
+
+The other interface processes provided by ContextCacheService are similar and are not repeated here.
+
+## **KeyWord parsing logic**
+
+The concrete entity bean of a ContextValue needs to use the annotation \@keywordMethod on each get method that can serve as a keyword. For example, the getTableName method of Table must be annotated with \@keywordMethod.
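+
+The following sketch shows what such an annotated bean might look like. The `Table` bean and the annotation name come from the description above, but the annotation's definition here is an assumption for illustration (in Linkis it lives in the cs core module, as noted in the precautions below):
+
+```java
+import java.lang.annotation.ElementType;
+import java.lang.annotation.Retention;
+import java.lang.annotation.RetentionPolicy;
+import java.lang.annotation.Target;
+
+// Assumed definition of the keyword annotation.
+@Retention(RetentionPolicy.RUNTIME)
+@Target(ElementType.METHOD)
+@interface KeywordMethod {}
+
+class Table {
+    private final String tableName;
+
+    Table(String tableName) { this.tableName = tableName; }
+
+    // Marked so the parser indexes the table name as a keyword.
+    // Per the precautions below: no parameters, and the toString of the
+    // returned object must yield the keyword.
+    @KeywordMethod
+    public String getTableName() { return tableName; }
+}
+```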
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-05.png)
+
+When the ContextKeyValueParser parses a ContextKeyValue, it scans all methods of the passed-in object annotated with \@keywordMethod, calls each get method, obtains the returned object's toString value, parses it through user-selectable rules (separator-based and regular-expression-based), and stores the results in the keyword collection.
+
+Precautions:
+
+1. The annotation is defined in the core module of cs.
+
+2. The annotated get method cannot take parameters.
+
+3. The toString method of the get method's return object must return the keyword.
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Client.md b/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Client.md
new file mode 100644
index 0000000..058f9ba
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Client.md
@@ -0,0 +1,61 @@
+## **CSClient design ideas and implementation**
+
+
+CSClient is the client through which each microservice interacts with the CSServer group. It needs to meet the following functions.
+
+1. The ability of microservices to apply for a context object from cs-server
+
+2. The ability of microservices to register context information with cs-server
+
+3. The ability of microservices to update context information to cs-server
+
+4. The ability of microservices to obtain context information from cs-server
+
+5. Certain special microservices can sniff operations that have modified context information in cs-server
+
+6. CSClient can give clear instructions when the csserver cluster fails
+
+7. CSClient needs to provide the ability to copy all context information of csid1 into a new csid2 for scheduling execution
+
+> The overall approach is to send HTTP requests through the linkis-httpclient that comes with Linkis, sending requests and receiving responses by implementing various Action and Result entity classes.
+
+### 1. The ability to apply for context objects
+
+To apply for a context object: for example, when a user creates a new workflow on the front end, dss-server needs to apply for a context object from cs-server. When applying for a context object, the identification information (project name, workflow name) of the workflow is sent to the CSServer through CSClient (at this point the gateway forwards it to a random instance, because no csid information is carried yet); once the application context returns the correct [...]
+
+### 2. Ability to register contextual information
+
+> The ability to register context: for example, a user uploads a resource file on the front-end page, and the file content is uploaded to dss-server, which stores the content in BML and then needs to register the resourceid and version obtained from BML with cs-server. In this case, the registration capability of csclient is needed: pass in the csid and cskey,
+> and register them with the csvalue (resourceid and version).
+
+### 3. Ability to update registered context
+
+> The ability to update registered context information: for example, if a user uploads a resource file test.jar, csserver already holds the registered information. If the user updates the resource file while editing the workflow, cs-server needs to update this content. At this time, the update interface of csclient needs to be called.
+
+### 4. The ability to get context
+
+Context information registered to csserver needs to be read during variable replacement, resource file download, and when downstream nodes use information generated by upstream nodes. For example, when the engine side executes code and needs to download a BML resource, it interacts with csserver through csclient to get the resourceid and version of the file in BML and then downloads it.
+
+### 5. Certain special microservices can sniff operations that have modified context information in cs-server
+
+This operation is based on the following example. A widget node has a strong linkage with its upstream sql node. The user writes a sql in the sql node, and the metadata of the sql result set contains the fields a, b and c. The widget node behind is bound to this sql, and these three fields can be edited on the page. Then the user changes the sql statement and the metadata becomes the four fields a, b, c and d; at this point the user has to refresh manually. We hope that if the script is changed, [...]
+
+### 6. CSClient needs to provide a copy of all context information of csid1 as a new csid2 for scheduling execution
+
+Once a user publishes a project, he may want to tag all the information of the project, similar to git. The resource files and custom variables here will no longer change, but some dynamic information, such as the generated result sets, will still be updated under the csid. So csclient needs to provide an interface that copies all context information of csid1 for microservices to call.
+
+## **Implementation of ClientListener Module**
+
+For a client, it sometimes wants to know as soon as possible that a certain csid and cskey have changed in cs-server. For example, the csclient of visualis needs to be able to know that the preceding sql node has changed, so it needs to be notified. The server has a listener module, and the client also needs one: for example, if a client wants to monitor the changes of a certain cskey of a certain csid, it needs to register the cskey to the callbackEngine [...]
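+
+A rough sketch of the client-side registration and dispatch described above; all names are illustrative, not the actual CSClient API:
+
+```java
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.CopyOnWriteArrayList;
+
+// Callback invoked when a monitored cskey changes.
+interface ContextKeyListener {
+    void onKeyUpdated(String csId, String csKey, Object newValue);
+}
+
+class ClientListenerModule {
+    // (csId, csKey) -> listeners registered on the client side
+    private final Map<String, List<ContextKeyListener>> listeners = new ConcurrentHashMap<>();
+
+    public void register(String csId, String csKey, ContextKeyListener l) {
+        // The real client would also send an HTTP registration request to
+        // cs-server so its callbackEngine records the (csId, csKey) pair.
+        listeners.computeIfAbsent(csId + "/" + csKey, k -> new CopyOnWriteArrayList<>()).add(l);
+    }
+
+    // Called when a heartbeat response reports that a cskey changed.
+    public void dispatch(String csId, String csKey, Object newValue) {
+        listeners.getOrDefault(csId + "/" + csKey, List.of())
+                 .forEach(l -> l.onKeyUpdated(csId, csKey, newValue));
+    }
+}
+```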
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-01.png)
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-02.png)
+
+## **Implementation of GatewayRouter**
+
+
+The Gateway plug-in implements Context forwarding. The forwarding logic of the Gateway plug-in is carried out through the GatewayRouter and is divided into two cases. The first is applying for a context object: at this time the information carried by the CSClient does not contain a csid, so the judgment logic should go through the registration information of Eureka, and the first request sent will randomly enter a microservice instance.
+The second case is that a ContextID is carried. We need to parse the csid; the parsing method is to obtain the information of each instance by string cutting, and then use Eureka to determine through the instance information whether this microservice instance still exists. If it exists, the request is forwarded to this microservice instance.
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-03.png)
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_HighAvailable.md b/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_HighAvailable.md
new file mode 100644
index 0000000..76c85c3
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_HighAvailable.md
@@ -0,0 +1,86 @@
+## **CS HA Architecture Design**
+
+### 1. CS HA architecture summary
+
+#### (1) CS HA architecture diagram
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-01.png)
+
+#### (2) Problems to be solved
+
+- HA of Context instance objects
+
+- CSID generation request when the client creates a workflow
+
+- List of aliases of CS Servers
+
+- Unified CSID generation and parsing rules
+
+#### (3) Main design ideas
+
+① Load balancing
+
+When the client creates a new workflow, it randomly requests the HA module of one of the servers with equal probability to generate a new HAID. The HAID information includes the main server information (hereinafter referred to as the main instance) and a candidate instance, where the candidate instance is the instance with the lowest load among the remaining servers, plus a corresponding ContextID. The generated HAID is bound to the workflow and persisted to the database, and then all [...]
+
+② High availability
+
+In subsequent operations, when the client or gateway determines that the main instance is unavailable, the operation request is forwarded to the standby instance for processing, thus achieving high service availability. The HA module of the standby instance first verifies the validity of the request based on the HAID information.
+
+③ Alias mechanism
+
+An alias mechanism is adopted for the machines: the instance information contained in the HAID uses custom aliases, and an alias mapping queue is maintained in the backend. The client uses the HAID when interacting with the backend, while the backend components use the ContextID among themselves; when specific operations are performed, a dynamic proxy mechanism converts the HAID into a ContextID for processing.
+
+### 2. Module design
+
+#### (1) Module diagram
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-02.png)
+
+#### (2) Specific modules
+
+①ContextHAManager module
+
+Provides interfaces for the CS Server to generate CSIDs and HAIDs, and provides an alias conversion interface based on dynamic proxies;
+
+Call the persistence module interface to persist CSID information;
+
+②AbstractContextHAManager module
+
+The abstraction of ContextHAManager, allowing multiple ContextHAManager implementations;
+
+③InstanceAliasManager module
+
+Provides Instance-to-alias conversion interfaces for the RPC module, maintains the alias mapping queue, provides alias and CS Server instance queries, and provides an interface to verify whether a host is valid;
+
+④HAContextIDGenerator module
+
+Generates a new HAID, encapsulates it in the format agreed with the client, and returns it to the client. The HAID structure is as follows:
+
+\${length of first instance}\${length of second instance}{instance alias 1}{instance alias 2}{actual ID}, where the actual ID is set to the ContextID Key;
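+
+A sketch of how such an HAID could be assembled and parsed. The encoding of the two length fields is not specified above, so this example assumes fixed two-digit decimal lengths:
+
+```java
+public class HaidCodec {
+
+    // Assumes each alias length fits in two decimal digits (0-99).
+    public static String encode(String alias1, String alias2, String contextId) {
+        return String.format("%02d%02d%s%s%s",
+                alias1.length(), alias2.length(), alias1, alias2, contextId);
+    }
+
+    public static String[] decode(String haid) {
+        int len1 = Integer.parseInt(haid.substring(0, 2));
+        int len2 = Integer.parseInt(haid.substring(2, 4));
+        String alias1 = haid.substring(4, 4 + len1);
+        String alias2 = haid.substring(4 + len1, 4 + len1 + len2);
+        String contextId = haid.substring(4 + len1 + len2);
+        return new String[]{alias1, alias2, contextId};
+    }
+
+    public static void main(String[] args) {
+        String haid = encode("ins1", "ins2", "8742"); // -> "0404ins1ins28742"
+        String[] parts = decode(haid);                // ["ins1", "ins2", "8742"]
+        System.out.println(haid + " -> " + String.join(", ", parts));
+    }
+}
+```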
+
+⑤ContextHAChecker module
+
+Provides the HAID verification interface. Each received request verifies whether the ID format is valid and whether the current host is the primary or secondary instance: if it is the primary instance, the verification passes; if it is the secondary instance, the verification passes only if the primary instance is invalid.
+
+⑥BackupInstanceGenerator module
+
+Generate a backup instance and attach it to the CSID information;
+
+⑦MultiTenantBackupInstanceGenerator interface
+
+(Reserved interface, not implemented yet)
+
+### 3. UML Class Diagram
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-03.png)
+
+### 4. HA module operation sequence diagram
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-04.png)
+
+CSID generated for the first time:
+The client sends a request, and the Gateway forwards it to any server. The HA module generates the HAID, including the main instance, the backup instance and the CSID, and completes the binding of the workflow and the HAID.
+
+When the client sends a change request and the Gateway determines that the main instance is invalid, it forwards the request to the standby instance for processing. After the HA module on the standby instance verifies that the HAID is valid, it loads the context instance and processes the request.
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Listener.md b/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Listener.md
new file mode 100644
index 0000000..933d384
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Listener.md
@@ -0,0 +1,33 @@
+## **Listener Architecture**
+
+In DSS, when a node changes its metadata information, the context information of the entire workflow changes. We expect all nodes to perceive the change and automatically update their metadata. We use the listener mode to achieve this, and use a heartbeat mechanism to poll and maintain the metadata consistency of the context information.
+
+### **Client self-registration, CSKey registration and CSKey update process**
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-01.png)
+
+The main process is as follows:
+
+1. Registration operation: the clients client1, client2, client3 and client4 register themselves and the CSKeys they want to monitor with the csserver through HTTP requests. The Service obtains the callback engine instance through the external interface and registers the clients and their corresponding CSKeys.
+
+2. Update operation: if the ClientX node updates the CSKey content, the Service updates the CSKey cached in the ContextCache, and the ContextCache delivers the update operation to the ListenerBus. The ListenerBus notifies the specific listener to consume it (that is, the ContextKeyCallbackEngine updates the CSKeys corresponding to the Client). Consumed events are automatically removed.
+
+3. Heartbeat mechanism:
+
+All clients use heartbeat information to detect whether the values of the CSKeys in the ContextKeyCallbackEngine have changed.
+
+The ContextKeyCallbackEngine returns the updated CSKey values to all registered clients through the heartbeat mechanism. If a client's heartbeat times out, that client is removed.
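+
+A minimal client-side heartbeat loop matching this description; the 5-second interval, the fetch call and the key format are illustrative assumptions:
+
+```java
+import java.util.Map;
+import java.util.concurrent.Executors;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+import java.util.function.BiConsumer;
+
+class HeartbeatPoller {
+    private final ScheduledExecutorService scheduler =
+            Executors.newSingleThreadScheduledExecutor();
+
+    /**
+     * Polls for changed CSKeys and hands each (cskey, value) pair to the
+     * callback. In the real system this would be an HTTP heartbeat to
+     * ContextKeyCallbackEngine, which also uses it to drop timed-out clients.
+     */
+    void start(BiConsumer<String, Object> onKeyChanged) {
+        scheduler.scheduleAtFixedRate(() -> {
+            Map<String, Object> updates = fetchUpdatedKeys();
+            updates.forEach(onKeyChanged);
+        }, 0, 5, TimeUnit.SECONDS);
+    }
+
+    private Map<String, Object> fetchUpdatedKeys() {
+        return Map.of(); // placeholder for the real heartbeat request
+    }
+}
+```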
+
+### **Listener UML class diagram**
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-02.png)
+
+Interface: ListenerManager
+
+Externally: provides a ListenerBus for event delivery.
+
+Internally: provides a callback engine for specific event registration, access, update, and heartbeat processing logic.
+
+## **Listener callbackengine timing diagram**
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-03.png)
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Persistence.md b/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Persistence.md
new file mode 100644
index 0000000..b57c8c7
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Persistence.md
@@ -0,0 +1,8 @@
+## **CSPersistence Architecture**
+
+### Persistence UML diagram
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-persistence-01.png)
+
+
+The Persistence module mainly defines ContextService persistence-related operations. The entities mainly include CSID, ContextKeyValue, CSResource and CSTable.
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Search.md b/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Search.md
new file mode 100644
index 0000000..8dea6f2
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Search.md
@@ -0,0 +1,127 @@
+## **CSSearch Architecture**
+### **Overall architecture**
+
+As shown below:
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-01.png)
+
+1. ContextSearch: the query entry; it accepts query conditions defined in Map form and returns the corresponding results according to the conditions.
+
+2. Building module: each condition type corresponds to a Parser, which is responsible for converting a condition in Map form into a Condition object by calling the logic of ConditionBuilder. Conditions with complex logical relationships use the ConditionOptimizer to optimize query plans with cost-based algorithms.
+
+3. Execution module: filters the results that match the conditions out of the Cache. According to the query target, there are three execution modes: Ruler, Fetcher and Matcher. The specific logic is described later.
+
+4. Evaluation module: responsible for calculating the execution cost of conditions and for statistics on historical executions.
+
+### **Query Condition Definition (ContextSearchCondition)**
+
+A query condition specifies how to filter the part that meets the condition out of a ContextKeyValue collection. Query conditions can be composed into more complex query conditions through logical operations.
+
+1. Support ContextType, ContextScope, KeyWord matching
+
+    1. Corresponding to a Condition type
+
+    2. In Cache, these should have corresponding indexes
+
+2. Support contains/regex matching mode for key
+
+    1. ContainsContextSearchCondition: contains a string
+
+    2. RegexContextSearchCondition: match a regular expression
+
+3. Support the logical operations Or, And and Not
+
+    1. Unary operation UnaryContextSearchCondition:
+
+> Support logical operations of a single parameter, such as NotContextSearchCondition
+
+    2. Binary operation BinaryContextSearchCondition:
+
+> Support the logical operation of two parameters, defined as LeftCondition and RightCondition, such as OrContextSearchCondition and AndContextSearchCondition
+
+4. Each logical operation corresponds to an implementation class of the above subclasses
+
+5. The UML class diagram of this part is as follows:
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-02.png)
+
+### **Construction of query conditions**
+
+1. Support construction through ContextSearchConditionBuilder: during construction, if multiple ContextType, ContextScope, KeyWord, and contains/regex matches are declared at the same time, they are automatically connected by the And logical operation
+
+2. Support logical operations between Conditions that return a new Condition: And, Or and Not (considering the condition1.or(condition2) form, the top-level Condition interface needs to define the logical operation methods; a sketch follows this list)
+
+3. Support building from a Map through the ContextSearchParser corresponding to each underlying implementation class
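+
+The sketch below illustrates the combinator form from point 2, assuming a minimal `Condition` interface; the names are illustrative, not the actual Linkis classes:
+
+```java
+// Minimal stand-in for the real ContextKeyValue.
+record ContextKeyValue(String key) {}
+
+// Top-level condition interface with logical combinators, mirroring the
+// condition1.or(condition2) form mentioned above.
+interface Condition {
+    boolean matches(ContextKeyValue kv);
+
+    default Condition and(Condition other) { return kv -> matches(kv) && other.matches(kv); }
+    default Condition or(Condition other)  { return kv -> matches(kv) || other.matches(kv); }
+    default Condition not()                { return kv -> !matches(kv); }
+}
+
+class ConditionDemo {
+    public static void main(String[] args) {
+        Condition containsTmp = kv -> kv.key().contains("tmp");
+        Condition regexTable  = kv -> kv.key().matches("table_\\d+");
+        // contains/regex conditions combined with Or, then negated:
+        Condition combined = containsTmp.or(regexTable).not();
+        System.out.println(combined.matches(new ContextKeyValue("table_42"))); // false
+    }
+}
+```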
+
+### **Execution of query conditions**
+
+1. Three function modes of query conditions:
+
+    1. Ruler: filters eligible ContextKeyValue sub-arrays out of an Array
+
+    2. Matcher: determines whether a single ContextKeyValue meets the conditions
+
+    3. Fetcher: filters an Array of eligible ContextKeyValues out of the ContextCache
+
+2. Each bottom-level Condition has a corresponding Execution, which is responsible for maintaining the corresponding Ruler, Matcher and Fetcher.
+
+### **Query entry ContextSearch**
+
+Provides a search interface that receives a Map as a parameter and filters the corresponding data out of the Cache.
+
+1. Use a Parser to convert the condition in Map form into a Condition object
+
+2. Obtain cost information through the Optimizer, and determine the query order according to the cost information
+
+3. Execute the corresponding Ruler/Fetcher/Matcher logic through the corresponding Execution to obtain the search results
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-03.png)
+
+### **Query Optimization**
+
+1. OptimizedContextSearchCondition maintains the Cost and Statistics information of the condition:
+
+    1. Cost information: the CostCalculator is responsible for judging whether the cost of a certain Condition can be calculated; if it can, it returns the corresponding Cost object
+
+    2. Statistics information: start/end/execution time, number of input lines, number of output lines
+
+2. Implement a CostContextSearchOptimizer whose optimize method optimizes a Condition based on its cost and converts it into an OptimizedContextSearchCondition object. The specific logic is as follows:
+
+1. Disassemble a complex Condition into a tree structure based on the combination of logical operations. Each leaf node is a basic simple Condition; each non-leaf node is a logical operation.
+
+> Tree A as shown in the figure below is a complex condition composed of five simple conditions of ABCDE through various logical operations.
+
+![](./../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-04.png)
+<center>(Tree A)</center>
+
+2. The execution of these Conditions is actually depth-first, traversing the tree from left to right. Moreover, according to the exchange rules of logical operations, the left-right order of a node's children in the Condition tree can be swapped, so all possible trees in all possible execution orders can be enumerated.
+
+> Tree B as shown in the figure below is another possible sequence of tree A above, which is exactly the same as the execution result of tree A, except that the execution order of each part has been adjusted.
+
+![](./../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-05.png)
+<center>(Tree B)</center>
+
+3. For each tree, the cost is calculated from the leaf nodes and collected up to the root node, which gives the final cost of the tree; finally, the tree with the smallest cost is taken as the optimal execution order.
+
+> The rules for calculating node cost are as follows:
+
+1. For leaf nodes, each node has two attributes: Cost and Weight. Cost is the cost calculated by the CostCalculator. Weight is assigned according to the execution order of the nodes; the current default is 1 on the left and 0.5 on the right, and how to adjust it will be considered later (the reason for assigning weights is that in some cases the condition on the left alone can already determine whether the entire combinatorial logic matches, so the condition on the right does not [...]
+
+2. For non-leaf nodes, Cost = the sum of (Cost × Weight) of all child nodes; the weight assignment logic is the same as for leaf nodes.
+
+> Taking tree A and tree B as examples, the costs of the two trees are calculated separately, as shown in the figure below; the number in each node is Cost\|Weight. Assuming the costs of the five simple conditions A–E are 10, 100, 50, 10 and 100, it can be concluded that the cost of tree B is less than that of tree A, making it the better plan.
+
+
+<center class="half">
+    <img src="./../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-06.png" width="300"> <img src="./../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-07.png" width="300">
+</center>
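+
+The cost rules above can be captured in a few lines. This sketch assumes the stated defaults (left weight 1, right weight 0.5) and reuses the example costs 10, 100, 50, 10 and 100 for five leaf conditions; the two orderings shown are illustrative, not the exact trees A and B:
+
+```java
+abstract class Node {
+    abstract double cost();
+}
+
+class Leaf extends Node {
+    final double c;
+    Leaf(double c) { this.c = c; }
+    @Override double cost() { return c; }
+}
+
+class Op extends Node {
+    final Node left, right;
+    Op(Node left, Node right) { this.left = left; this.right = right; }
+    // Non-leaf cost = sum of child Cost x Weight (left 1.0, right 0.5).
+    @Override double cost() { return left.cost() * 1.0 + right.cost() * 0.5; }
+}
+
+class TreeCostDemo {
+    public static void main(String[] args) {
+        Node a = new Leaf(10), b = new Leaf(100), c = new Leaf(50),
+             d = new Leaf(10), e = new Leaf(100);
+        double t1 = new Op(new Op(a, b), new Op(new Op(c, d), e)).cost(); // 112.5
+        // A reordering allowed by the exchange rules, cheap conditions first:
+        double t2 = new Op(new Op(a, d), new Op(new Op(c, b), e)).cost(); // 90.0
+        System.out.println(t1 + " vs " + t2); // the cheaper ordering wins
+    }
+}
+```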
+
+4. Use CostCalculator to measure the cost of simple conditions:
+
+    1. Conditions acting on an index: the cost is determined by the distribution of the index values. For example, if the length of the Array that condition A obtains from the Cache is 100 while condition B's is 200, then the cost of condition A is less than that of B.
+
+    2. Conditions that need to be traversed:
+
+        1. An initial Cost is given according to the matching mode of the condition itself: for example, Regex is 100, Contains is 10, etc. (the specific values will be adjusted as appropriate during implementation)
+
+        2. Based on historical query efficiency (throughput per unit time), the real-time Cost is obtained by continuously adjusting the initial Cost.
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Service.md b/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Service.md
new file mode 100644
index 0000000..05c6168
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Service.md
@@ -0,0 +1,53 @@
+## **ContextService Architecture**
+
+### **Horizontal Division**
+
+Horizontally, it is divided into three modules: Restful, Scheduler and Service
+
+#### Restful Responsibilities:
+
+    Encapsulates the request as an HttpJob and submits it to the Scheduler
+
+#### Scheduler Responsibilities:
+
+    Finds the corresponding Service through the ServiceName of the HttpJob protocol to execute the job
+
+#### Service Responsibilities:
+
+    The module that actually executes the request logic; it encapsulates the ResponseProtocol and wakes up the waiting thread in Restful (a sketch of this round trip follows)
+
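+The Restful → Scheduler → Service round trip can be sketched with a future that the Service side completes to wake the waiting Restful thread. All names here are assumptions for illustration:
+
+```java
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.LinkedBlockingQueue;
+
+// Minimal stand-in for the http job protocol described above.
+record HttpJob(String serviceName, String payload, CompletableFuture<String> response) {}
+
+class MiniScheduler {
+    private final BlockingQueue<HttpJob> queue = new LinkedBlockingQueue<>();
+
+    void submit(HttpJob job) { queue.add(job); }
+
+    // A worker finds the service by ServiceName and executes the job;
+    // completing the future wakes the Restful thread blocked on get().
+    void runOnce() throws InterruptedException {
+        HttpJob job = queue.take();
+        job.response().complete("handled " + job.payload() + " by " + job.serviceName());
+    }
+}
+
+class RestfulDemo {
+    public static void main(String[] args) throws Exception {
+        MiniScheduler scheduler = new MiniScheduler();
+        HttpJob job = new HttpJob("contextService", "createContext", new CompletableFuture<>());
+        scheduler.submit(job);                    // Restful submits the job...
+        new Thread(() -> {
+            try { scheduler.runOnce(); } catch (InterruptedException ignored) { }
+        }).start();
+        System.out.println(job.response().get()); // ...and waits here until woken
+    }
+}
+```
+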
+### **Vertical Division**
+Vertically divided into 4 modules: Listener, History, ContextId, Context:
+
+#### Listener responsibilities:
+
+1. Responsible for the registration and binding of the client side (write to the database and register in the CallbackEngine)
+
+2. Heartbeat interface, return Array[ListenerCallback] through CallbackEngine
+
+#### History Responsibilities:
+Create and remove history, operate Persistence for DB persistence
+
+#### ContextId Responsibilities:
+Mainly interfaces with Persistence for ContextId creation, update, removal, etc.
+
+#### Context Responsibilities:
+
+1. For removal, reset and other methods, first operate Persistence for DB persistence, and update ContextCache
+
+2. Encapsulate the query condition and go to the ContextSearch module to obtain the corresponding ContextKeyValue data
+
+The steps for requesting access are as follows:
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-01.png)
+
+## **UML Class Diagram**
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-02.png)
+
+## **Scheduler thread model**
+
+It is necessary to ensure that Restful's thread pool does not get exhausted
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-03.png)
+
+The sequence diagram is as follows:
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-04.png)
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/README.md b/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/README.md
new file mode 100644
index 0000000..c6af94c
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/README.md
@@ -0,0 +1,123 @@
+## **Background**
+
+### **What is Context**
+
+All the necessary information needed to keep a certain operation going. For example: when reading three books at the same time, the page each book has been turned to is the context for continuing to read that book.
+
+### **Why do you need CS (Context Service)?**
+
+CS is used to solve the problem of data and information sharing across multiple systems in a data application development process.
+
+For example, system B needs to use a piece of data generated by system A. The usual practice is as follows:
+
+1. System B calls the data access interface developed by system A;
+
+2. System B reads the data written by system A into a shared storage.
+
+With CS, systems A and B only need to interact with the CS: they write the data and information that need to be shared into the CS and read the data and information they need from the CS, without having to develop and adapt to external systems. This greatly reduces the complexity and coupling of information sharing between systems and makes the boundaries of each system clearer.
+
+## **Product Range**
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-01.png)
+
+
+### Metadata context
+
+The metadata context defines the metadata specification.
+
+Metadata context relies on data middleware, and its main functions are as follows:
+
+1. Connects with the data middleware and obtains all user metadata information (including Hive table metadata, online database table metadata, and other NoSQL metadata such as HBase, Kafka, etc.)
+
+2. When any node needs to access metadata, whether existing metadata or metadata in the application template, it must go through the metadata context. The metadata context records all metadata information used by the application template.
+
+3. New metadata generated by each node must be registered with the metadata context.
+
+4. When the application template is extracted, the metadata context is abstracted for the application template (mainly, the library tables used are turned into \${db}.tables to avoid data permission problems) and all dependent metadata information is packaged.
+
+The metadata context is the basis of interactive workflows and of application templates. Imagine: when a Widget is defined, how does it know the dimensions of each indicator defined by DataWrangler? How does Qualitis verify the graph report generated by the Widget?
+
+### Data context
+
+The data context defines the data specification.
+
+The data context depends on data middleware and Linkis computing middleware. The main functions are as follows:
+
+1. Connects with the data middleware and obtains all user data information.
+
+2. Connects with the computing middleware and obtains the data storage information of all nodes.
+
+3. When any node needs to write temporary results, it must go through the data context, and the data context allocates the storage uniformly.
+
+4. When any node needs to access data, it must go through the data context.
+
+5. The data context distinguishes between dependent data and generated data. When the application template is extracted, all dependent data is abstracted and packaged for the application template.
+
+### Resource context
+
+The resource context defines the resource specification.
+
+The resource context mainly interacts with Linkis computing middleware. The main functions are as follows:
+
+1. User resource files (such as Jar, Zip files, properties files, etc.)
+
+2. User UDF
+
+3. User algorithm package
+
+4. User script
+
+### Environmental context
+
+The environmental context defines the environmental specification.
+
+The main functions are as follows:
+
+1. Operating System
+
+2. Software, such as Hadoop, Spark, etc.
+
+3. Package dependencies, such as Mysql-JDBC.
+
+### Object context
+
+The runtime context is all the context information retained when the application template (workflow) is defined and executed.
+
+It is used to assist in defining the workflow/application template, prompting and perfecting all necessary information when the workflow/application template is executed.
+
+The runtime workflow is mainly used by Linkis.
+
+
+## **CS Architecture Diagram**
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-02.png)
+
+## **Architecture Description:**
+
+### 1. Client
+The entrance of external access to CS, Client module provides HA function;
+[Enter Client Architecture Design](ContextService_Client.md)
+
+### 2. Service Module
+Provide a Restful interface to encapsulate and process CS requests submitted by the client;
+[Enter Service Architecture Design](ContextService_Service.md)
+
+### 3. ContextSearch
+The context query module provides rich and powerful query capabilities for the client to find the Key-Value pairs of the context;
+[Enter ContextSearch architecture design](ContextService_Search.md)
+
+### 4. Listener
+The CS listener module provides synchronous and asynchronous event consumption capabilities, and has the ability to notify the Client in real time once the Zookeeper-like Key-Value is updated;
+[Enter Listener architecture design](ContextService_Listener.md)
+
+### 5. ContextCache
+The context memory cache module provides the ability to quickly retrieve the context and the ability to monitor and clean up JVM memory usage;
+[Enter ContextCache architecture design](ContextService_Cache.md)
+
+### 6. HighAvailable
+Provide CS high availability capability;
+[Enter HighAvailable architecture design](ContextService_HighAvailable.md)
+
+### 7. Persistence
+The persistence function of CS;
+[Enter Persistence architecture design](ContextService_Persistence.md)
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/PublicService.md b/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/PublicService.md
new file mode 100644
index 0000000..6224be1
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/PublicService.md
@@ -0,0 +1,34 @@
+
+## **Background**
+
+PublicService is a comprehensive service composed of multiple sub-modules such as "configuration", "jobhistory", "udf" and "variable". Linkis 
+1.0 adds label management on the basis of version 0.9. With PublicService, parameters do not need to be set again for every job execution:
+many variables, functions and configurations can be reused once a user has set them, and of course they can also be shared with other users.
+
+## **Architecture diagram**
+
+![Diagram](../../Images/Architecture/linkis-publicService-01.png)
+
+## **Architecture Introduction**
+
+1. linkis-configuration: provides query and save operations for global settings and general settings, especially engine configuration parameters.
+
+2. linkis-jobhistory: dedicated to the storage and query of historical execution tasks. Users can obtain historical tasks through the interface provided by jobhistory, including logs, status and execution content.
+Historical tasks also support paged queries. Administrators can view all historical tasks, while ordinary users can only view their own.
+
+3. linkis-udf: provides the user function management capability in Linkis. Functions can be divided into shared functions, personal functions, system functions and engine functions.
+Once a user selects a function, it is automatically loaded when the engine starts, so that it can be referenced directly in code and reused across different scripts.
+
+4. linkis-variable: provides the global variable management capability in Linkis, storing and querying user-defined global variables.
+
+5. linkis-instance-label: provides two modules, label server and label client, for labeling Engines and EMs. It also provides node-based label addition, deletion, modification and query capabilities.
+The main functions are as follows:
+
+-   Provides resource management capabilities for certain specific labels to assist RM in more refined resource management.
+
+-   Provides labeling capabilities for users. The user label is automatically added for judgment when applying for an engine. 
+
+-   Provides a label analysis module, which can parse a user's request into a set of labels.
+
+-   Provides node label management, mainly the label CRUD capability of nodes and label resource management, which manages the resources of certain labels by recording the maximum, minimum and used resources of a label.
+
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/README.md b/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/README.md
new file mode 100644
index 0000000..c9ddf68
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/README.md
@@ -0,0 +1,91 @@
+PublicEnhancementService (PS) architecture design
+=====================================
+
+PublicEnhancementService (PS): Public enhancement service, a module that provides functions such as unified configuration management, context service, physical library, data source management, microservice management, and historical task query for other microservice modules.
+
+![](../../Images/Architecture/PublicEnhencementArchitecture.png)
+
+Introduction to the second-level module:
+==============
+
+BML material library
+---------
+
+It is the Linkis material management system, mainly used to store various user file data, including user scripts, resource files, third-party Jar packages, etc.; it can also store the class libraries that the engine needs at runtime.
+
+| Core Class | Core Function |
+|-----------------|------------------------------------|
+| UploadService | Provide resource upload service |
+| DownloadService | Provide resource download service |
+| ResourceManager | Provides a unified management entry for uploading and downloading resources |
+| VersionManager | Provides resource version marking and version management functions |
+| ProjectManager | Provides project-level resource management and control capabilities |
+
+Unified configuration management
+-------------------------
+
+Configuration provides a "user-engine-application" three-level configuration management solution, which provides users with the function of configuring custom engine parameters under various access applications.
+
+| Core Class | Core Function |
+|----------------------|--------------------------------|
+| CategoryService | Provides management services for application and engine catalogs |
+| ConfigurationService | Provides a unified management service for user configuration |
+
+ContextService context service
+------------------------
+
+ContextService is used to solve the problem of data and information sharing across multiple systems in a data application development process.
+
+| Core Class | Core Function |
+|---------------------|------------------------------------------|
+| ContextCacheService | Provides a cache service for context information |
+| ContextClient | Provides the ability for other microservices to interact with the CSServer group |
+| ContextHAManager | Provide high-availability capabilities for ContextService |
+| ListenerManager | The ability to provide a message bus |
+| ContextSearch | Provides query entry |
+| ContextService | Implements the overall execution logic of the context service |
+
+Datasource data source management
+--------------------
+
+Datasource provides the ability to connect to different data sources for other microservices.
+
+| Core Class | Core Function |
+|-------------------|--------------------------|
+| datasource-server | Provide the ability to connect to different data sources |
+
+InstanceLabel microservice management
+-----------------------
+
+InstanceLabel provides registration and labeling functions for other microservices connected to Linkis.
+
+| Core Class | Core Function |
+|-----------------|--------------------------------|
+| InsLabelService | Provides microservice registration and label management functions |
+
+Jobhistory historical task management
+----------------------
+
+Jobhistory provides users with query, progress and log display functions for Linkis historical tasks, and provides a unified historical task view for administrators.
+
+| Core Class | Core Function |
+|------------------------|----------------------|
+| JobHistoryQueryService | Provide historical task query service |
+
+Variable user-defined variable management
+--------------------------
+
+Variable provides users with functions related to the storage and use of custom variables.
+
+| Core Class | Core Function |
+|-----------------|-------------------------------------|
+| VariableService | Provides functions related to the storage and use of custom variables |
+
+UDF user-defined function management
+---------------------
+
+UDF provides users with the function of custom functions, which can be introduced by users when writing code.
+
+| Core Class | Core Function |
+|------------|------------------------|
+| UDFService | Provide user-defined function service |
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/README.md b/Linkis-Doc-master/en_US/Architecture_Documents/README.md
new file mode 100644
index 0000000..7f5acde
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Architecture_Documents/README.md
@@ -0,0 +1,18 @@
+## 1. Document Structure
+
+Linkis 1.0 divides all microservices into three categories: public enhancement services, computing governance services, and microservice governance services. The following figure shows the architecture of Linkis 1.0.
+
+![Linkis1.0 Architecture Figure](./../Images/Architecture/Linkis1.0-architecture.png)
+
+The specific responsibilities of each category are as follows:
+
+1. Public enhancement services are the material library service, context service, data source service and public services that Linkis 0.X already provides.
+2. Microservice governance services are the Spring Cloud Gateway, Eureka and Open Feign already provided by Linkis 0.X, and Linkis 1.0 will also provide support for Nacos.
+3. Computing governance services are the core focus of Linkis 1.0, comprehensively upgrading Linkis's ability to control user tasks across the three stages of submission, preparation and execution.
+
+The following is a directory listing of Linkis1.0 architecture documents:
+
+1. For the characteristics of the Linkis1.0 architecture, please read [The difference between Linkis1.0 and Linkis0.x](DifferenceBetween1.0&0.x.md).
+2. Linkis1.0 public enhancement service related documents, please read [Public Enhancement Service](Public_Enhancement_Services/README.md).
+3. Linkis1.0 microservice governance related documents, please read [Microservice Governance](Microservice_Governance_Services/README.md).
+4. Linkis1.0 computing governance service related documents, please read [Computation Governance Service](Computation_Governance_Services/README.md).
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Deployment_Documents/Cluster_Deployment.md b/Linkis-Doc-master/en_US/Deployment_Documents/Cluster_Deployment.md
new file mode 100644
index 0000000..57f3118
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Deployment_Documents/Cluster_Deployment.md
@@ -0,0 +1,98 @@
+Introduction to Distributed Deployment Scheme
+==================
+
+Linkis's stand-alone deployment is simple, but it cannot be used in a production environment, because too many processes on the same server would put the server under too much pressure. The choice of deployment plan depends on the company's user scale, usage habits, and the number of simultaneous cluster users. Generally speaking, we choose the deployment method based on the number of simultaneous users of Linkis and the users' preferences among the execution engines.
+
+1.Multi-node deployment method reference
+------------------------------------------
+
+Linkis1.0 still maintains the SpringCloud-based microservice architecture, in which each microservice supports multiple-active deployment. Of course, different microservices play different roles in the system; some are called frequently and may run under high load. **On the machine where EngineConnManager is installed, the memory load will be relatively high because users' engine processes are started there, while the load of other types [...]
+
+Total resources used by EngineConnManager = total memory + total number of cores =
+number of concurrent online users \* (sum of memory occupied by all engine types) \* maximum concurrency per user + number of concurrent online users \*
+(sum of cores occupied by all engine conn types) \* maximum concurrency per user
+
+For example, if only the Spark, Hive and Python engines are used, the maximum concurrency of a single user is 1, and 50 users are online at the same time, with a Spark driver memory of 1G, Hive
+client memory of 1G and Python client memory of 1G, and each engine using 1 core, then the total is 50 \* (1+1+1)G \*
+1 + 50 \* (1+1+1) cores \* 1 = 150G of memory + 150 CPU cores.
+
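+The same estimate in code form, using the example's numbers:
+
+```java
+public class EcmResourceEstimate {
+    public static void main(String[] args) {
+        int concurrentUsers = 50;
+        int maxConcurrencyPerUser = 1;
+        int engineTypes = 3;        // spark, hive, python
+        int memoryPerEngineGb = 1;  // Spark driver, Hive client, Python client: 1G each
+        int coresPerEngine = 1;
+
+        int memoryGb = concurrentUsers * engineTypes * memoryPerEngineGb * maxConcurrencyPerUser;
+        int cores    = concurrentUsers * engineTypes * coresPerEngine    * maxConcurrencyPerUser;
+        System.out.println(memoryGb + "G memory + " + cores + " CPU cores"); // 150G + 150 cores
+    }
+}
+```
+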
+During distributed deployment, the memory occupied by each microservice itself can be estimated at 2G. With a large number of users, it is recommended to increase the memory of ps-publicservice to 6G and to reserve 10G of memory as a buffer.
+The following configurations assume that **each user starts two engines at the same time**. **For machines with 64G of memory**, the reference configurations are as follows:
+
+- 10-50 people online at the same time
+
+> **Server configuration recommendation**: 4 servers, named S1, S2, S3, S4
+
+| Service | Host name | Remark |
+|---------------|-----------|------------------|
+| cg-engineconnmanager | S1, S2 | Each machine is deployed separately |
+| Other services | S3, S4 | Eureka high availability deployment |
+
+- 50-100 people online at the same time
+
+> **Server configuration recommendation**: 6 servers, named S1, S2, S3, S4, S5, S6
+
+| Service | Host name | Remark |
+|----------------------|-----------|------------------|
+| cg-engineconnmanager | S1-S4 | Each machine is deployed separately |
+| Other services | S5, S6 | Eureka high availability deployment |
+
+- 100-300 people online at the same time
+
+> **Server configuration recommendation**: 12 servers, named S1, S2...S12
+
+| Service | Host name | Remark |
+|----------------------|-----------|------------------|
+| cg-engineconnmanager | S1-S10 | Each machine is deployed separately |
+| Other services | S11, S12 | Eureka high availability deployment |
+
+- 300-500 people online at the same time
+
+> **Server configuration recommendation**: 20 servers, named S1, S2...S20
+
+| Service | Host name | Remark |
+|----------------------|-----------|-----------------|
+| cg-engineconnmanager | S1-S18 | Each machine is deployed separately |
+| Other services | S19, S20 | Eureka high-availability deployment; individual microservices can be scaled out if the request volume reaches tens of thousands, and the current active-active deployment can support thousands of concurrent users |
+
+- More than 500 users at the same time (estimated based on 800 people online at the same time)
+
+> **Server configuration recommendation**: 34 servers, named S1, S2...S34
+
+| Service | Host name | Remark |
+|----------------------|-----------|------------------------------|
+| cg-engineconnmanager | S1-S32 | Each machine is deployed separately |
+| Other services | S33, S34 | Eureka high-availability deployment; individual microservices can be scaled out if the request volume reaches tens of thousands, and the current active-active deployment can support thousands of concurrent users |
+
+2.Linkis microservices distributed deployment configuration parameters
+---------------------------------
+
+In Linkis1.0, we have optimized and consolidated the startup parameters. Some important startup parameters of each microservice, such as the microservice IP, port, and registry address, are loaded through the conf/linkis-env.sh file, and the way to modify them has changed slightly. Take the active-active deployment on the machines **server1 and server2** as an example; the two Eureka instances must register with each other.
+
+On the server1 machine, you need to change the value in **conf/linkis-env.sh**
+
+```
+EUREKA_URL=http://$EUREKA_INSTALL_IP:$EUREKA_PORT/eureka/
+```
+
+to:
+
+```
+EUREKA_URL=http://$EUREKA_INSTALL_IP:$EUREKA_PORT/eureka/,http://server2:port/eureka/
+```
+
+In the same way, on the server2 machine, you need to change the value in **conf/linkis-env.sh**
+
+```
+EUREKA_URL=http://$EUREKA_INSTALL_IP:$EUREKA_PORT/eureka/
+```
+
+to:
+
+```
+EUREKA_URL=http://$EUREKA_INSTALL_IP:$EUREKA_PORT/eureka/,http://server1:port/eureka/
+```
+
+After the modification, start the microservices and open the Eureka registration page in a browser; you can see that the microservices have been successfully registered with Eureka, and the DS
+Replicas section will also display the neighboring replica nodes of the cluster.
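+
+Optionally, you can verify the mutual registration from the command line. A quick sanity check (host and port are placeholders for your own Eureka instances; /eureka/apps is the standard Netflix Eureka REST endpoint):
+
+```bash
+# List the services each Eureka instance knows about; both should return the same set.
+curl -s http://server1:port/eureka/apps | grep '<name>'
+curl -s http://server2:port/eureka/apps | grep '<name>'
+```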
+
diff --git a/Linkis-Doc-master/en_US/Deployment_Documents/EngineConnPlugin_installation_document.md b/Linkis-Doc-master/en_US/Deployment_Documents/EngineConnPlugin_installation_document.md
new file mode 100644
index 0000000..990f55b
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Deployment_Documents/EngineConnPlugin_installation_document.md
@@ -0,0 +1,82 @@
+EngineConnPlugin installation document
+===============================
+
+This article mainly introduces the use of Linkis EngineConnPlugins, covering their compilation and installation.
+
+## 1. Compilation and packaging of EngineConnPlugins
+
+Since Linkis1.0, engines are managed by EngineConnManager, and EngineConnPlugins (ECP) can take effect in real time.
+To make it convenient for EngineConnManager to load the corresponding EngineConnPlugin by label, the plugin needs to be packaged according to the following directory structure (taking Hive as an example):
+```
+hive: the engine home directory, which must be the name of the engine
+└── dist # Dependencies and configuration required for engine startup; each engine version needs its own version directory under this directory
+    └── v1.2.1 # Must be 'v' followed by the engine version number, e.g. '1.2.1'
+        └── conf # Configuration file directory required by the engine
+        └── lib # Dependency packages required by the EngineConnPlugin
+└── plugin # EngineConnPlugin directory; used by the engine management service to package the engine startup command and resource application
+    └── 1.2.1 # Engine version
+        └── linkis-engineplugin-hive-1.0.0-RC1.jar # Engine module package (only the engine's own package needs to be placed here)
+```
+If you are adding a new engine, you can refer to hive's assembly configuration method, source code directory: linkis-engineconn-plugins/engineconn-plugins/hive/src/main/assembly/distribution.xml
+## 2. Engine Installation
+### 2.1 Plugin package installation
+1.First, confirm the engine's dist directory: wds.linkis.engineconn.home (obtain the value of this parameter from ${LINKIS_HOME}/conf/linkis.properties). This parameter is used by EngineConnPluginServer to read the configuration files and third-party jar packages that the engine depends on. If wds.linkis.engineconn.dist.load.enable=true is set, the engines in this directory will be automatically read and loaded into the Linkis BML (material library).
+
+2.Second, confirm the engine jar package directory:
+wds.linkis.engineconn.plugin.loader.store.path, which is used by EngineConnPluginServer to read the engine's actual implementation jars.
+
+It is highly recommended to set **wds.linkis.engineconn.home** and **wds.linkis.engineconn.plugin.loader.store.path** to the same directory, so that you can directly unzip the engine ZIP package exported by Maven into this directory, for example, placing it in the ${LINKIS_HOME}/lib/linkis-engineconn-plugins directory.
+
+```
+${LINKIS_HOME}/lib/linkis-engineconn-plugins:
+└── hive
+    └── dist
+    └── plugin
+└── spark
+    └── dist
+    └── plugin
+```
+
+If the two parameters do not point to the same directory, you need to place the dist and plugin directories separately, as shown in the following example:
+
+```
+## dist directory
+${LINKIS_HOME}/lib/linkis-engineconn-plugins/dist:
+└── hive
+    └── dist
+└── spark
+    └── dist
+## plugin directory
+${LINKIS_HOME}/lib/linkis-engineconn-plugins/plugin:
+└── hive
+    └── plugin
+└── spark
+    └── plugin
+```
+### 2.2 Configuration modification of management console (optional)
+
+The configuration of the Linkis1.0 management console is managed by engine label. If the new engine has configuration parameters, you need to insert the corresponding configuration parameters into the following tables:
+
+```
+linkis_configuration_config_key: Insert the keys and default values of the engine's configuration parameters
+linkis_manager_label: Insert the engine label, such as hive-1.2.1
+linkis_configuration_category: Insert the category relationship of the engine
+linkis_configuration_config_value: Insert the configuration that the engine needs to display
+```
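+
+For instance, registering a configuration key could look like the hedged sketch below. The column names are illustrative only; check db/linkis_dml.sql of your Linkis version for the real schema before running anything:
+
+```bash
+# Hypothetical example: add one config key for a new engine version via the mysql client.
+mysql -h$MYSQL_HOST -P$MYSQL_PORT -u$MYSQL_USER -p$MYSQL_PASSWORD $MYSQL_DB -e \
+  "INSERT INTO linkis_configuration_config_key (\`key\`, description, engine_conn_type) \
+   VALUES ('wds.linkis.example.key', 'example description', 'hive-2.3.3');"
+```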
+
+If it is an existing engine and a new version is added, you can modify the version of the corresponding engine in the linkis_configuration_dml.sql file and execute it.
+
+### 2.3 Engine refresh
+
+1.	The engine supports real-time refresh. After the engine package is placed in the corresponding directory, Linkis1.0 provides a way to load the engine without restarting the server: just send a request to the linkis-engineconn-plugin-server service through its RESTful interface, i.e. the actual deployed ip+port of the service. The request URL is http://ip:port/api/rest_j/v1/rpc/receiveAndReply, the request method is POST, and the request body is {"method":"/enginePlugin/engin [...]
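+
+A hedged curl sketch of this request (ip and port are the actual deployment address of the service; the JSON body shown is the commonly used refreshAll payload, stated here as an assumption since the exact body is truncated above):
+
+```bash
+curl -X POST -H "Content-Type: application/json" \
+  -d '{"method":"/enginePlugin/engineConn/refreshAll"}' \
+  http://ip:port/api/rest_j/v1/rpc/receiveAndReply
+```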
+
+2.	Restart refresh: the engine directory can be forcibly refreshed by restarting the service
+
+```
+# cd to the sbin directory and restart linkis-engineconn-plugin-server
+cd /Linkis1.0.0/sbin
+# Execute the linkis-daemon script
+sh linkis-daemon.sh restart linkis-engine-plugin-server
+```
+
+3.Check whether the engine refresh succeeded: if you encounter problems during the refresh process and need to confirm whether it succeeded, you can check whether the last_update_time of the linkis_engine_conn_plugin_bml_resources table in the database is the time when the refresh was triggered.
diff --git "a/Linkis-Doc-master/en_US/Deployment_Documents/Images/\345\210\206\345\270\203\345\274\217\351\203\250\347\275\262\345\276\256\346\234\215\345\212\241.png" "b/Linkis-Doc-master/en_US/Deployment_Documents/Images/\345\210\206\345\270\203\345\274\217\351\203\250\347\275\262\345\276\256\346\234\215\345\212\241.png"
new file mode 100644
index 0000000..8cd86c5
Binary files /dev/null and "b/Linkis-Doc-master/en_US/Deployment_Documents/Images/\345\210\206\345\270\203\345\274\217\351\203\250\347\275\262\345\276\256\346\234\215\345\212\241.png" differ
diff --git a/Linkis-Doc-master/en_US/Deployment_Documents/Installation_Hierarchical_Structure.md b/Linkis-Doc-master/en_US/Deployment_Documents/Installation_Hierarchical_Structure.md
new file mode 100644
index 0000000..3873f0a
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Deployment_Documents/Installation_Hierarchical_Structure.md
@@ -0,0 +1,198 @@
+Installation directory structure
+============
+
+The directory structure of Linkis1.0 is very different from that of the 0.X version. In 0.X, each microservice had its own independent root directory. The main advantage of that structure is that it is easy to distinguish microservices and manage them individually, but it has some obvious problems:
+
+1.	The microservice directories are too complicated, making it inconvenient to switch between them for management
+2.	There is no unified startup script, which makes starting and stopping microservices troublesome
+3.	There are a large number of duplicated service configurations, and the same configuration often needs to be modified in many places
+4.	There are a large number of duplicated lib dependencies, which increases the size of the installation package and the risk of dependency conflicts
+
+Therefore, in Linkis 1.0, we have greatly optimized and adjusted the installation directory structure: reducing the number of microservice directories, removing duplicated jar dependencies, and reusing configuration files and microservice management scripts as much as possible. This is mainly reflected in the following aspects:
+
+1.The bin folder is no longer provided for each microservice; it is now shared by all microservices.
+> The bin folder has become the installation directory, mainly used to install Linkis1.0 and check the environment status. The new sbin directory provides one-click start and stop for all of Linkis, as well as independent start and stop of individual microservices by changing parameters.
+
+2.A separate conf directory is no longer provided for each microservice; it is now shared by all microservices.
+> The conf folder contains two kinds of content. On the one hand, there is configuration information shared by all microservices, which users can customize according to their own environment; on the other hand, there is configuration specific to each microservice, which users normally do not need to change.
+
+3.The lib folder is no longer provided for each microservice; it is now shared by all microservices.
+> The lib folder likewise contains two kinds of content: the common dependencies required by all microservices, and the special dependencies required by each microservice.
+
+4.The log directory is no longer provided for each microservice; it is now shared by all microservices.
+> The log directory contains the log files of all microservices.
+
+The simplified directory structure of Linkis1.0 is as follows.
+
+````
+├── bin ──installation directory
+│ ├── checkEnv.sh ── Environmental variable detection
+│ ├── checkServices.sh ── Microservice status check
+│ ├── common.sh ── Some public shell functions
+│ ├── install-io.sh ── Used for dependency replacement during installation
+│ └── install.sh ── Main script of Linkis installation
+├── conf ──configuration directory
+│ ├── application-eureka.yml 
+│ ├── application-linkis.yml    ──Microservice general yml
+│ ├── linkis-cg-engineconnmanager-io.properties
+│ ├── linkis-cg-engineconnmanager.properties
+│ ├── linkis-cg-engineplugin.properties
+│ ├── linkis-cg-entrance.properties
+│ ├── linkis-cg-linkismanager.properties
+│ ├── linkis-computation-governance
+│ │   └── linkis-client
+│ │       └── linkis-cli
+│ │           ├── linkis-cli.properties
+│ │           └── log4j2.xml
+│ ├── linkis-env.sh   ──linkis environment properties
+│ ├── linkis-et-validator.properties
+│ ├── linkis-mg-gateway.properties
+│ ├── linkis.properties  ──linkis global properties
+│ ├── linkis-ps-bml.properties
+│ ├── linkis-ps-cs.properties
+│ ├── linkis-ps-datasource.properties
+│ ├── linkis-ps-publicservice.properties
+│ ├── log4j2.xml
+│ ├── proxy.properties(Optional)
+│ └── token.properties(Optional)
+├── db ──database DML and DDL file directory
+│ ├── linkis\_ddl.sql ──Database table definition SQL
+│ ├── linkis\_dml.sql ──Database table initialization SQL
+│ └── module ──Contains DML and DDL files of each microservice
+├── lib ──lib directory
+│ ├── linkis-commons ──Common dependency package
+│ ├── linkis-computation-governance ──The lib directory of the computing governance module
+│ ├── linkis-engineconn-plugins ──lib directory of all EngineConnPlugins
+│ ├── linkis-public-enhancements ──lib directory of public enhancement services
+│ └── linkis-spring-cloud-services ──SpringCloud lib directory
+├── logs ──log directory
+│ ├── linkis-cg-engineconnmanager-gc.log
+│ ├── linkis-cg-engineconnmanager.log
+│ ├── linkis-cg-engineconnmanager.out
+│ ├── linkis-cg-engineplugin-gc.log
+│ ├── linkis-cg-engineplugin.log
+│ ├── linkis-cg-engineplugin.out
+│ ├── linkis-cg-entrance-gc.log
+│ ├── linkis-cg-entrance.log
+│ ├── linkis-cg-entrance.out
+│ ├── linkis-cg-linkismanager-gc.log
+│ ├── linkis-cg-linkismanager.log
+│ ├── linkis-cg-linkismanager.out
+│ ├── linkis-et-validator-gc.log
+│ ├── linkis-et-validator.log
+│ ├── linkis-et-validator.out
+│ ├── linkis-mg-eureka-gc.log
+│ ├── linkis-mg-eureka.log
+│ ├── linkis-mg-eureka.out
+│ ├── linkis-mg-gateway-gc.log
+│ ├── linkis-mg-gateway.log
+│ ├── linkis-mg-gateway.out
+│ ├── linkis-ps-bml-gc.log
+│ ├── linkis-ps-bml.log
+│ ├── linkis-ps-bml.out
+│ ├── linkis-ps-cs-gc.log
+│ ├── linkis-ps-cs.log
+│ ├── linkis-ps-cs.out
+│ ├── linkis-ps-datasource-gc.log
+│ ├── linkis-ps-datasource.log
+│ ├── linkis-ps-datasource.out
+│ ├── linkis-ps-publicservice-gc.log
+│ ├── linkis-ps-publicservice.log
+│ └── linkis-ps-publicservice.out
+├── pid ──Process ID of all microservices
+│ ├── linkis\_cg-engineconnmanager.pid ──EngineConnManager microservice
+│ ├── linkis\_cg-engineconnplugin.pid ──EngineConnPlugin microservice
+│ ├── linkis\_cg-entrance.pid ──Engine entrance microservice
+│ ├── linkis\_cg-linkismanager.pid ──linkis manager microservice
+│ ├── linkis\_mg-eureka.pid ──eureka microservice
+│ ├── linkis\_mg-gateway.pid ──gateway microservice
+│ ├── linkis\_ps-bml.pid ──material library microservice
+│ ├── linkis\_ps-cs.pid ──Context microservice
+│ ├── linkis\_ps-datasource.pid ──Data source microservice
+│ └── linkis\_ps-publicservice.pid ──public microservice
+└── sbin ──microservice start and stop script directory
+    ├── ext ──Start and stop script directory of each microservice
+    ├── linkis-daemon.sh ── Quick start and stop, restart a single microservice script
+    ├── linkis-start-all.sh ── Start all microservice scripts with one click
+    └── linkis-stop-all.sh ── Stop all microservice scripts with one click
+````
+
+# Configuration item modification
+
+After executing install.sh in the bin directory to complete the Linkis installation, you need to modify the configuration items. All configuration items are located in the conf directory. Normally, you need to modify the three configuration files db.sh, linkis.properties, and linkis-env.sh. For project installation and configuration, please refer to the article "Linkis1.0 Installation".
+
+# Microservice start and stop
+
+After modifying the configuration items, you can start the microservice in the sbin directory. The names of all microservices are as follows:
+
+````
+├── linkis-cg-engineconnmanager  ──engine management service
+├── linkis-cg-engineplugin  ──EngineConnPlugin management service
+├── linkis-cg-entrance  ──computing governance entrance service
+├── linkis-cg-linkismanager  ──computing governance management service
+├── linkis-mg-eureka  ──microservice registry service
+├── linkis-mg-gateway  ──Linkis gateway service
+├── linkis-ps-bml  ──material library service
+├── linkis-ps-cs  ──context service
+├── linkis-ps-datasource  ──data source service
+└── linkis-ps-publicservice  ──public service
+````
+**Microservice abbreviation**:
+
+| Abbreviation | Full English Name | Full Chinese Name |
+|------|-------------------------|------------|
+| cg | Computation Governance | Computing Governance |
+| mg | Microservice Governance | Microservice Governance |
+| ps | Public Enhancement Service | Public Enhancement Service |
+
+In the past, to start or stop a single microservice, you had to enter that microservice's bin directory and execute its start/stop script. When there are many microservices, starting and stopping becomes troublesome, with a lot of extra directory-switching operations. Linkis1.0 places all scripts related to starting and stopping microservices in the sbin directory, so only a single entry script needs to be executed.
+
+**Under the Linkis/sbin directory**:
+
+1.Start all microservices at once:
+
+````
+sh linkis-start-all.sh
+````
+
+2.Shut down all microservices at once
+
+````
+sh linkis-stop-all.sh
+````
+
+3.Start a single microservice (the linkis prefix needs to be removed from the service name, e.g. mg-eureka)
+````
+sh linkis-daemon.sh start service-name
+````
+For example: 
+````
+sh linkis-daemon.sh start mg-eureka
+````
+
+4.Shut down a single microservice
+````
+sh linkis-daemon.sh stop service-name
+````
+For example: 
+````
+sh linkis-daemon.sh stop mg-eureka
+````
+
+5.Restart a single microservice
+````
+sh linkis-daemon.sh restart service-name
+````
+For example: 
+````
+sh linkis-daemon.sh restart mg-eureka
+````
+
+6.View the status of a single microservice
+````
+sh linkis-daemon.sh status service-name
+````
+For example: 
+````
+sh linkis-daemon.sh status mg-eureka
+````
diff --git a/Linkis-Doc-master/en_US/Deployment_Documents/Quick_Deploy_Linkis1.0.md b/Linkis-Doc-master/en_US/Deployment_Documents/Quick_Deploy_Linkis1.0.md
new file mode 100644
index 0000000..b74dbd9
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Deployment_Documents/Quick_Deploy_Linkis1.0.md
@@ -0,0 +1,246 @@
+# Linkis1.0 Deployment document
+
+## Notes
+
+If you are new to Linkis, you can ignore this chapter. However, if you are already a Linkis user, we recommend you read the following article before installing or upgrading: [Brief introduction of the difference between Linkis1.0 and Linkis0.X](https://github.com/WeBankFinTech/Linkis-Doc/blob/master/en_US/Architecture_Documents/DifferenceBetween1.0%260.x.md).
+
+Please note: apart from the four EngineConnPlugins included in the Linkis1.0 installation package by default (Python/Shell/Hive/Spark), you can manually install other types of engines, such as JDBC, depending on your own needs. For details, please refer to the EngineConnPlugin installation documents.
+
+Engines that Linkis1.0 has adapted by default are listed below:
+
+| Engine Type   | Adaptation Situation   | Included in official installation package |
+| ------------- | ---------------------- | ----------------------------------------- |
+| Python        | Adapted in 1.0         | Included                                  |
+| JDBC          | Adapted in 1.0         | **Not Included**                          |
+| Shell         | Adapted in 1.0         | Included                                  |
+| Hive          | Adapted in 1.0         | Included                                  |
+| Spark         | Adapted in 1.0         | Included                                  |
+| Pipeline      | Adapted in 1.0         | **Not Included**                          |
+| Presto        | **Not adapted in 1.0** | **Not Included**                          |
+| ElasticSearch | **Not adapted in 1.0** | **Not Included**                          |
+| Impala        | **Not adapted in 1.0** | **Not Included**                          |
+| MLSQL         | **Not adapted in 1.0** | **Not Included**                          |
+| TiSpark       | **Not adapted in 1.0** | **Not Included**                          |
+
+## 1. Determine your installation environment 
+
+The following is the dependency information for each engine.
+
+| Engine Type | Dependency                  | Special Instructions                                         |
+| ----------- | --------------------------- | ------------------------------------------------------------ |
+| Python      | Python Environment          | If the paths of logs and result sets are configured as hdfs://, then the HDFS environment is needed. |
+| JDBC        | No dependency               | If the paths of logs and result sets are configured as hdfs://, then the HDFS environment is needed. |
+| Shell       | No dependency               | If the paths of logs and result sets are configured as hdfs://, then the HDFS environment is needed. |
+| Hive        | Hadoop and Hive Environment |                                                              |
+| Spark       | Hadoop/Hive/Spark           |                                                              |
+                                                         
+**Requirement: at least 3G of memory is required to install Linkis.**
+                                                         
+The default JVM heap memory of each microservice is 512M, and the heap memory of all microservices can be adjusted uniformly by modifying `SERVER_HEAP_SIZE`. If your machine has limited resources, we suggest reducing this parameter to 128M, as follows:
+
+```bash
+    vim ${LINKIS_HOME}/config/linkis-env.sh
+```
+
+```bash
+    # java application default jvm memory.
+    export SERVER_HEAP_SIZE="128M"
+```
+
+----
+
+## 2. Linkis environment preparation
+
+### a. Fundamental software installation
+
+The following software must be installed:
+
+- MySQL (5.5+)
+- JDK (1.8.0_141 or higher)
+
+### b. Create user
+
+For example: **The deploy user is hadoop**.
+
+1. Create a deploy user on the machine for installation.
+
+```bash
+    sudo useradd hadoop  
+```
+
+2. Since the services of Linkis use sudo -u {linux-user} to switch engines to execute jobs, the deploy user needs sudo permission without having to enter a password.
+
+```bash
+    vi /etc/sudoers
+```
+
+```text
+    hadoop  ALL=(ALL)       NOPASSWD: ALL
+```
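+
+To verify the sudoers entry, run the following as the deploy user; it should print the target user's name without prompting for a password ("alice" is a hypothetical target user, replace it with a real one):
+
+```bash
+# -n makes sudo fail instead of prompting if the NOPASSWD rule is not in effect.
+sudo -n -u alice whoami
+```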
+
+3. **Set the following global environment variables on each installation node so that Linkis can use Hadoop, Hive and Spark.**
+
+   Modify the .bashrc of the deploy user; the command is as follows:
+
+```bash     
+    vim /home/hadoop/.bashrc ## Take the deploy user hadoop as an example.
+```
+
+The following is an example of setting environment variables:
+
+```bash
+    #JDK
+    export JAVA_HOME=/nemo/jdk1.8.0_141
+
+    ## If you do not use Hive, Spark or other engines and do not rely on Hadoop as well, then there is no need to modify the following environment variables.
+    #HADOOP  
+    export HADOOP_HOME=/appcom/Install/hadoop
+    export HADOOP_CONF_DIR=/appcom/config/hadoop-config
+    #Hive
+    export HIVE_HOME=/appcom/Install/hive
+    export HIVE_CONF_DIR=/appcom/config/hive-config
+    #Spark
+    export SPARK_HOME=/appcom/Install/spark
+    export SPARK_CONF_DIR=/appcom/config/spark-config/spark-submit
+    export PYSPARK_ALLOW_INSECURE_GATEWAY=1  # This parameter must be added for Pyspark
+```
+
+4. **If you want to equip your Pyspark and Python with drawing functions, you need to install the drawing module on each installation node**. The command is as follows:
+
+```bash
+    python -m pip install matplotlib
+```
+
+### c. Preparing installation package
+
+Download the latest installation package from the Linkis release. ([Click here to enter the download page](https://github.com/WeBankFinTech/Linkis/releases))
+
+Decompress the installation package to the installation directory and modify the configuration of the decompressed file.
+
+```bash   
+    tar -xvf  wedatasphere-linkis-x.x.x-combined-package-dist.tar.gz
+```
+
+### d. Basic configuration modification (not relying on HDFS)
+
+```bash
+    vi config/linkis-env.sh
+```
+
+```properties
+
+    #SSH_PORT=22        # Specify the SSH port. No need to configure it if the stand-alone version is installed
+    deployUser=hadoop      # Specify the deploy user
+    LINKIS_INSTALL_HOME=/appcom/Install/Linkis    # Specify the installation directory.
+    WORKSPACE_USER_ROOT_PATH=file:///tmp/hadoop    # Specify the user root directory. Generally used to store the user's script and log files; it is the user's workspace.
+    RESULT_SET_ROOT_PATH=file:///tmp/linkis   # The result set file path, used to store the result set files of jobs.
+    ENGINECONN_ROOT_PATH=/appcom/tmp # The installation path of ECP: a local directory where the deploy user has write permission.
+    ENTRANCE_CONFIG_LOG_PATH=file:///tmp/linkis/  # Entrance's log path
+
+    ## LDAP configuration. Linkis only supports deploy user login by default; you need to configure the following parameters to support multi-user login.
+    #LDAP_URL=ldap://localhost:1389/ 
+    #LDAP_BASEDN=dc=webank,dc=com
+```
+
+### e. Basic configuration modification (relying on HDFS/Hive/Spark)
+
+```bash
+     vi config/linkis-env.sh
+```
+
+```properties
+    SSH_PORT=22       # Specify the SSH port. No need to configure it if the stand-alone version is installed
+    deployUser=hadoop      # Specify the deploy user
+    WORKSPACE_USER_ROOT_PATH=file:///tmp/hadoop     # Specify the user root directory. Generally used to store the user's script and log files; it is the user's workspace.
+    RESULT_SET_ROOT_PATH=hdfs:///tmp/linkis   # The result set file path, used to store the result set files of jobs.
+    ENGINECONN_ROOT_PATH=/appcom/tmp # The installation path of ECP: a local directory where the deploy user has write permission.
+    ENTRANCE_CONFIG_LOG_PATH=hdfs:///tmp/linkis/  # Entrance's log path
+
+    # 1.0 supports multiple Yarn clusters, therefore YARN_RESTFUL_URL must be configured
+    YARN_RESTFUL_URL=http://127.0.0.1:8088  # URL of Yarn's ResourceManager
+
+    # If you want to use it with Scriptis, then for the CDH version of Hive you need to set the following parameters. (For the community version of Hive, you can leave out the following configuration.)
+    HIVE_META_URL=jdbc://...   #URL of Hive metadata database
+    HIVE_META_USER=   # username of the Hive metadata database 
+    HIVE_META_PASSWORD=    # password of the Hive metadata database
+    
+    # set the conf directory of hadoop/hive/spark
+    HADOOP_CONF_DIR=/appcom/config/hadoop-config  #hadoop's conf directory
+    HIVE_CONF_DIR=/appcom/config/hive-config   #hive's conf directory
+    SPARK_CONF_DIR=/appcom/config/spark-config #spark's conf directory
+
+    ## LDAP configuration. Linkis only supports deploy user login by default; you need to configure the following parameters to support multi-user login.
+    #LDAP_URL=ldap://localhost:1389/ 
+    #LDAP_BASEDN=dc=webank,dc=com
+    
+    ## If your Spark version is not 2.4.3, you need to modify the following parameter:
+    #SPARK_VERSION=3.1.1
+
+    ## If your Hive version is not 1.2.1, you need to modify the following parameter:
+    #HIVE_VERSION=2.3.3
+```
+
+### f. Modify the database configuration
+
+```bash   
+    vi config/db.sh 
+```
+
+```properties    
+
+    # set the connection information of the database
+    # including ip address, database's name, username and port
+    # Mainly used to store the user's customized variables, configuration parameters, UDFs, and small functions, and to provide the underlying storage of the JobHistory.
+    MYSQL_HOST=
+    MYSQL_PORT=
+    MYSQL_DB=
+    MYSQL_USER=
+    MYSQL_PASSWORD=
+```
+
+## 3. Installation and Startup
+
+### 1. Execute the installation script:
+
+```bash
+    sh bin/install.sh
+```
+
+### 2. Installation steps
+
+- The install.sh script will ask you whether to initialize the database and import the metadata. 
+
+It is possible that a user repeatedly runs the install.sh script and clears all data in the databases as a result. Therefore, each time install.sh is executed, the user will be asked whether they need to initialize the database and import the metadata.
+
+Please select yes on the **first installation**.
+
+**Please note: If you are upgrading the existing environment of Linkis from 0.X to 1.0, please do not choose yes directly,  refer to Linkis1.0 Upgrade Guide first.**
+
+### 3. Check whether the installation was successful
+
+You can check whether the installation is successful or not by viewing the logs printed on the console. 
+
+If there is an error message, check the specific reason for that error or refer to FAQ for help.
+
+### 4. Linkis quick startup
+
+(1). Start services
+
+Run the following commands on the installation directory to start all services.
+
+```bash  
+  sh sbin/linkis-start-all.sh
+```
+
+(2). Check whether the services started successfully
+
+You can check the startup status of the services on the Eureka page. Here is how to check:
+
+Open http://${EUREKA_INSTALL_IP}:${EUREKA_PORT} on the browser and check if services have registered successfully. 
+
+If you have not specified EUREKA_INSTALL_IP and EUREKA_PORT in linkis-env.sh, then the HTTP address is http://127.0.0.1:20303
+
+As shown in the figure below, if all of the following microservices are registered on the Eureka page, it means that they have started successfully and are able to work.
+
+![Linkis1.0_Eureka](../Images/deployment/Linkis1.0_combined_eureka.png)
+
diff --git a/Linkis-Doc-master/en_US/Development_Documents/Contributing.md b/Linkis-Doc-master/en_US/Development_Documents/Contributing.md
new file mode 100644
index 0000000..28ea896
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Development_Documents/Contributing.md
@@ -0,0 +1,195 @@
+# Contributing
+
+Thank you very much for contributing to the Linkis project! Before participating in the contribution, please read the following guidelines carefully.
+
+## 1. Contribution category
+
+### 1.1 Bug feedback and fix
+
+We suggest that, whether it is bug feedback or a fix, you first create an issue describing the bug's status in detail, so as to help the community find and review issues and code through the issue records. Bug feedback issues usually need to include a complete description of the
+**bug** information and a reproducible scenario, so that the community can quickly locate the cause of the bug and fix it. Open issues that contain the #bug label all need to be fixed.
+
+### 1.2 Functional communication, implementation and refactoring
+
+In the communication process, please elaborate the details, mechanisms and using scenarios of the new function(or refactoring). This can promote the function(or refactoring) to be implemented better and faster.
+If you plan to implement a major feature (or refactoring), be sure to communicate with the team through an **Issue** or other channels first, so that everyone can move forward in the most efficient way. An open issue containing the #feature tag means a new feature needs to be implemented, and an open issue containing the #Enhancement tag means refactoring or improvement is needed.
+
+
+### 1.3 Issue Q&A
+
+Helping to answer usage questions in issues is a very valuable way to contribute to the Linkis community. There will always be new users coming in, and while helping them you can also show your expertise.
+
+### 1.4 Documentation improvements
+
+Linkis user manual documents are maintained in the Linkis-Doc project on GitHub. You can edit the markdown files in that project and improve the documentation by submitting a PR.
+
+## 2. Contribution process
+
+### 2.1 Branch structure
+
+The Linkis source code may contain some temporary branches, but only the following three branches are really meaningful:
+
+```
+master: The source code of the last stable release, which may occasionally have several hotfix submissions
+branch-0.10.0: The latest stable version
+dev-1.0.0: Main development branch
+```
+
+### 2.2 Development Guidelines
+
+The Linkis front-end and back-end code share the same repository but are developed separately. Before embarking on development, please fork a copy of the Linkis project into your own GitHub repositories, and base your development on your own fork.
+
+We recommend cloning the dev-1.0.0 branch for development, so that merge conflicts when submitting a PR to the main Linkis project will be much smaller:
+
+```
+git clone https://github.com/yourname/Linkis.git --branch dev-1.0.0
+```
+
+#### 2.2.1 Backend
+
+The user configuration is under /config/ in the project root directory, and the project startup script and upgrade patch script are under /bin/.
+The back-end code and core configuration are in the server/ directory, and logs are under /log/ in the project root directory.
+The project root directory mentioned here refers to the directory configured in the LINKIS_HOME environment variable, which needs to be configured during IDE development.
+For example, IDEA loads environment variables with the following priority, from high to low: environment variables configured in Run/Debug Configurations
+—> system environment variables cached by the IDE.
+
+**2.2.1.1** Directory structure
+
+1. Script
+
+```
+├── assembly-package/bin # script directory
+ ├── install.sh # One-click deployment script
+ ├── checkEnv.sh # Environment check script
+ └── common.sh # Common script functions
+```
+```
+├── sbin # script directory
+ ├── linkis-daemon.sh # Start/stop and status-detection script for a single service
+ ├── linkis-start-all.sh # One-click start script
+ ├── linkis-stop-all.sh # One-click stop script
+ └── ext # Separate service script directory
+    ├── linkis-xxx.sh # The startup script of a service
+    ├── linkis-xxx.sh
+    ├── ...
+```
+
+2. Configuration
+
+```
+├── assembly-package/config # User configuration directory
+ ├── linkis-env.sh # Configuration variable settings for one-click deployment
+ ├── db.sh # One-click deployment database configuration
+```
+
+3. Code directory structure
+
+See the Linkis code directory structure for details.
+
+4. Log directory
+
+```
+├── logs # log root directory
+```
+**2.2.1.2** Environment variables
+
+Configure the system environment variable or IDE environment variable LINKIS_HOME; using the IDE environment variable is recommended.
+**2.2.1.3** Database
+
+1. Create the Linkis system database by yourself;
+2. Modify the corresponding database information in conf/db.sh and execute bin/install.sh, or import db/linkis_*.sql directly in the database client.
+
+**2.2.1.4** Configuration file
+
+Modify the application-linkis.yml file in the conf directory and the properties file corresponding to each microservice name to configure related properties.
+
+**2.2.1.5** Packaging
+
+1. To package the whole project, you need to modify the version in /assembly/src/main/assembly/assembly.xml in the root directory, and then execute the following command in the root directory: mvn clean package;
+2. To package a single module, simply run mvn clean package directly in that module.
+### 2.3 Pull Request Guidelines
+
+If you don’t yet know how to initiate a PR to an open source project, please refer to the following description:
+
+```
+Whether it is bug fixes or new feature development, please submit a PR to the dev-1.0.0 branch.
+PR and commit names follow the principle of <type>(<scope>): <subject>. For details, please refer to Ruan Yifeng's article [Commit message and Change log writing guide](http://www.ruanyifeng.com/blog/2016/01/commit_message_change_log.html).
+If the PR contains new features, the document update should be included in this PR.
+If this PR is not ready to merge, please add the [WIP] prefix to the head of the name (WIP = work-in-progress).
+All submissions to the dev-1.0.0 branch need to go through at least one review before they can be merged
+```
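+
+For example, hypothetical PR titles following this convention might look like:
+
+```
+fix(entrance): prevent NPE when the job result path is empty
+feat(engineconn-plugins): add a JDBC engine plugin
+docs(deployment): clarify Eureka active-active configuration
+```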
+### 2.4 Review Standard
+
+Before contributing code, you can find out what kinds of submissions are popular in review. Simply put, if a submission brings as many gains as possible and as few side effects or risks as possible, it will be reviewed and merged first. Submissions with high risk and low value are almost impossible to merge, and may be rejected without even a chance of review.
+
+**2.4.1** Gain
+
+```
+Fix the main cause of the bug
+Add or fix a feature or problem that a large number of users urgently need
+Simple and effective
+Easy to test, with test cases
+Reduce complexity and amount of code
+Issues that have been discussed by the community and identified as needing improvement
+```
+
+**2.4.2** Side effects and risks
+
+```
+Only fix the surface phenomenon of the bug
+Introduce new features with high complexity
+Add complexity to meet niche needs
+Change stable existing API or semantics
+Cause other functions to not operate normally
+Add a lot of dependencies
+Change the dependency version at will
+Submit a large amount of code or changes at once
+```
+**2.4.3** Reviewer notes
+
+```
+Please use a constructive tone to write comments
+If changes are needed from the submitter, please clearly state all the content that needs to be modified to complete the Pull Request
+If a PR is found to have introduced new problems after being merged, the Reviewer needs to contact the PR author and communicate with them to resolve the problem; if the PR author cannot be contacted, the Reviewer needs to revert the PR
+```
+## 3. Advanced contribution
+
+### 3.1 About Committers (Collaborators)
+
+**3.1.1** How to become a **committer**
+
+If you have had a valuable PR merged into the Linkis code, you can apply to become a Committer of the Linkis project by contacting the core development team through the official WeChat group; the core development team and other Committers will vote together to decide whether to accept you. If you get enough votes, you will become a Committer of the Linkis project.
+
+**3.1.2** Committer rights
+
+```
+You can join the official developer WeChat group, participate in discussions, and make development plans
+You can manage issues, including closing them and adding tags
+You can create and manage project branches, except the master and dev-1.0.0 branches
+You can review PRs submitted to the dev-1.0.0 branch
+You can apply to become a Committee member
+```
+### 3.2 About Committee
+
+**3.2.1** How to become a **Committee** member
+
+
+If you are a Committer of the Linkis project and all your contributions have been recognized by the other Committee members, you can apply to become a member of the Linkis Committee. The other Committee members will vote together to decide whether to allow you to join, and if unanimously approved, you will become a member of the Linkis Committee.
+
+**3.2.2** Committee members' rights
+
+```
+You can merge PRs submitted by other Committers and contributors to the dev-1.0.0 branch
+```
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/API.md b/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/API.md
new file mode 100644
index 0000000..f91f8ba
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/API.md
@@ -0,0 +1,143 @@
+ > When contributors contribute new RESTful interfaces to Linkis, they are required to follow the interface specifications below when developing the interfaces.
+
+
+
+## 1. HTTP or WebSocket ?
+
+
+
+Linkis currently provides two interfaces: HTTP and WebSocket.
+
+
+
+Advantages of WebSocket over HTTP:
+
+
+
+- Less stress on the server
+
+- More timely information push
+
+- Interactivity is more friendly
+
+
+
+Correspondingly, WebSocket has the following disadvantages:
+
+
+
+- The WebSocket may be disconnected during use
+
+- Higher technical requirements on the front end
+
+- It is generally required to have a front-end degradation handling mechanism
+
+
+
+**We generally strongly recommend that contributors avoid providing interfaces over WebSocket unless it is truly necessary;**
+
+
+
+**If you think it is necessary to use WebSocket and are willing to contribute the developed functions to Linkis, we suggest you communicate with us before the development, thank you!**
+
+
+
+## 2. URL specification
+
+
+
+```
+
+/api/rest_j/v1/{applicationName}/.+
+
+/api/rest_s/v1/{applicationName}/.+
+
+```
+
+
+
+**Convention**:
+
+
+
+- rest_j indicates that the interface complies with the Jersey specification
+
+- rest_s indicates that the interface complies with the SpringMVC REST specification
+
+- v1 is the version number of the service. **The version number will be updated with the Linkis version.**
+
+- {applicationName} is the name of the micro-service
+
+
+
+## 3. Interface request format
+
+
+
+```json
+{
+    "method": "/api/rest_j/v1/entrance/execute",
+    "data": {},
+    "WebsocketTag": "37fcbd8b762d465a0c870684a0261c6e" // WebSocket requests require this parameter; HTTP requests can ignore it
+}
+```
+
+
+
+**Convention**:
+
+
+
+- method: The requested RESTful API URL.
+
+- data: The specific data requested.
+
+- WebSocketTag: The unique identity of a WebSocket request. This parameter is also returned by the back end for the front end to identify.
+
+
+
+## 4. Interface response format
+
+
+
+```json
+{"method": "/api/rest_j/v1/project/create", "status": 0, "message": "creating success!", "data": {}}
+```
+
+
+
+**Convention**:
+
+
+
+- method: Returns the requested RESTful API URL, mainly for the WebSocket mode.
+
+- status: Returns status information, where: -1 means not logged in, 0 means success, 1 means error, 2 means failed validation, and 3 means no access to the interface.
+
+- data: Returns the specific data.
+
+- message: Returns the prompt message of the request. If status is not 0, message returns an error message, in which case data may contain a stack trace field with the specific stack information.
+
+
+
+In addition, different status values correspond to different HTTP status codes; under normal circumstances:
+
+
+
+- When status is 0, the HTTP status code is 200
+
+- When the status is -1, the HTTP status code is 401
+
+- When status is 1, the HTTP status code is 400
+
+- When status is 2, the HTTP status code is 412
+
+- When status is 3, the HTTP status code is 403
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/Concurrent.md b/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/Concurrent.md
new file mode 100644
index 0000000..8adf0d0
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/Concurrent.md
@@ -0,0 +1,17 @@
+1. [**Compulsory**] Ensure that obtaining a singleton object is thread-safe. Operations inside singletons should also be kept thread-safe.
+
+
+
+2. [**Compulsory**] Thread resources must be provided through a thread pool; explicitly creating threads in the application is not allowed (a minimal sketch follows this list).
+
+
+
+3. SimpleDateFormat is a thread-unsafe class. It is recommended to use the DateUtils utility class instead.
+
+
+
+4. [**Compulsory**] Under high concurrency, synchronous calls should consider the performance cost of locking. If you can use lock-free data structures, don't use locks; if you can lock a block, don't lock the whole method body; if you can use an object lock, don't use a class lock.
+
+
+
+5. [**Compulsory**] Use ThreadLocal as little as possible. Every time you use a ThreadLocal holding an object that needs to be closed, remember to close it to release the resource.
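+
+A minimal Java sketch of rule 2 (illustrative only, not Linkis code; pool sizes are arbitrary):
+
+```java
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.ThreadPoolExecutor;
+import java.util.concurrent.TimeUnit;
+
+public class PoolExample {
+    // Provide threads through an explicit, bounded pool instead of new Thread(...).
+    private static final ExecutorService POOL = new ThreadPoolExecutor(
+            4, 8, 60L, TimeUnit.SECONDS,        // core size, max size, keep-alive
+            new LinkedBlockingQueue<>(1024));   // bounded queue guards against OOM
+
+    public static void main(String[] args) {
+        POOL.submit(() -> System.out.println("task executed"));
+        POOL.shutdown();
+    }
+}
+```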
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/Exception_Catch.md b/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/Exception_Catch.md
new file mode 100644
index 0000000..b1a0030
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/Exception_Catch.md
@@ -0,0 +1,9 @@
+1. [**Mandatory**] For each small module's exceptions, a dedicated exception class should be defined, to facilitate the subsequent generation of error codes for users. It is not allowed to throw RuntimeException or Exception directly.
+
+2. Do not try-catch a large section of code; that is irresponsible. Please distinguish between stable code and unstable code when catching: stable code is code that will not go wrong in any case. When catching around unstable code, distinguish the exception types as much as possible, and then handle each accordingly.
+
+3. [**Mandatory**] The purpose of catching an exception is to handle it. Don't throw it away without handling it. If you don't want to handle it, please throw the exception to its caller. Note: Do not use e.printStackTrace() under any circumstances! The outermost business users must deal with exceptions and turn them into content that users can understand.
+
+4. The finally block must close resource objects and stream objects, wrapping the close itself in a try-catch if it may throw (see the sketch after this list).
+
+5. [**Mandatory**] Prevent NullPointerException. The return value of a method is allowed to be null, and it is not mandatory to return an empty collection or empty object, but a comment must be added to fully explain under what circumstances null is returned. RPC and SpringCloud Feign calls all require non-null checks.
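+
+A minimal Java sketch of rule 4 (one common compliant pattern; try-with-resources achieves the same effect):
+
+```java
+import java.io.FileInputStream;
+import java.io.IOException;
+import java.io.InputStream;
+
+public class CloseExample {
+    public static void read(String path) throws IOException {
+        InputStream in = null;
+        try {
+            in = new FileInputStream(path);
+            // ... read from the stream ...
+        } finally {
+            // The finally block closes the stream, and the close call itself is
+            // guarded so that it cannot mask an exception thrown by the try block.
+            if (in != null) {
+                try {
+                    in.close();
+                } catch (IOException e) {
+                    // closing failed; log it instead of masking the original error
+                }
+            }
+        }
+    }
+}
+```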
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/Exception_Throws.md b/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/Exception_Throws.md
new file mode 100644
index 0000000..ac8ed72
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/Exception_Throws.md
@@ -0,0 +1,52 @@
+## How to define a new exception?
+
+
+
+- Customized exceptions must inherit one of LinkisRetryException, WarnException, ErrorException, or FatalException
+
+
+
+- Customized exceptions must contain error codes and error descriptions. If necessary, the IP address and process port where the exception occurred can also be encapsulated in the exception
+
+
+
+- Be careful with WarnException! If an exception thrown as a WarnException is caught in a RESTful or RPC Receiver, it does not return a failure to the front end or the sender, but only returns a warning message!
+
+
+
+- WarnException has an exception level of 1, ErrorException has an exception level of 2, FatalException has an exception level of 3, and LinkisRetryException has an exception level of 4
+
+
+
+| exception class| service |  error code  | error description|
+|:----  |:---   |:---   |:---   |
+| LinkisException | common | None | top level parent class inherited from the Exception, does not allow direct inheritance |
+| LinkisRuntimeException | common | None | top level parent class, inherited from RuntimeException, does not allow direct inheritance |
+| WarnException | common | None | secondary level parent classes, inherit from LinkisRuntimeException. Warn level exception, must inherit this class directly or indirectly |
+| ErrorException | common | None | secondary level parent classes, inherited from LinkisException. Error exception, must inherit this class directly or indirectly |
+| FatalException | common | None | secondary level parent classes, inherited from LinkisException. Fatal level exception, must inherit this class directly or indirectly |
+| LinkisRetryException | common | None | secondary level parent classes, inherited from LinkisException. Retryable exceptions, must inherit this class directly or indirectly |
+
+
+
+## Module exception specification
+
+
+
+linkis-commons:10000-11000
+
+linkis-computation-governance:11000-12000
+
+linkis-engineconn-plugins:12000-13000
+
+linkis-orchestrator:13000-14000
+
+linkis-public-enhancements:14000-15000
+
+linkis-spring-cloud-service:15100-15500
+
+linkis-extensions:15500-16000
+
+linkis-tuning:16100-16200
+
+linkis-user-control:16200-16300
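+
+As a hedged illustration of the rules above, a module-specific exception might look like the sketch below. The class name and error code are hypothetical, and ErrorException's actual constructor signature should be checked in linkis-commons before copying this:
+
+```java
+// Hypothetical module exception: inherits ErrorException and carries an error
+// code from its module's range (e.g. linkis-computation-governance: 11000-12000).
+public class EntranceErrorException extends ErrorException {
+    public EntranceErrorException(int errCode, String desc) {
+        super(errCode, desc);
+    }
+}
+```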
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/Log.md b/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/Log.md
new file mode 100644
index 0000000..34801bd
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/Log.md
@@ -0,0 +1,13 @@
+1.	[**Convention**] Linkis chooses SLF4J and Log4J2 as the log printing framework, removing the logback from the Spring-Cloud package. Since SLF4J will randomly select a logging framework to bind to, bridging packages such as SLF4J-LOG4J must be excluded after introducing new Maven packages in the future, otherwise log printing will be a problem. However, if the newly introduced Maven package depends on a package such as Log4J, do not exclude it, otherwise the code may fail at runtime.
+
+2.	[**Configuration**] The log4j2 configuration file defaults to log4j2.xml and needs to be placed in the classpath. If it needs to be combined with SpringCloud, "logging: config: classpath:log4j2-spring.xml" (the location of the configuration file) can be added to application.yml.
+
+3.	[**Compulsory**] The APIs of the logging systems (Log4j2, Log4j, Logback) cannot be used directly in a class. For Scala code, inheriting the Logging trait is required; for Java, use LoggerFactory.getLogger(getClass).
+
+4.	[**Development Convention**] Since EngineConn is started by EngineConnManager via the command line, we specify the path of the log configuration file on the command line and also modify the log configuration during code execution, in particular redirecting the EngineConn log to the system's standard out. So, by convention, the log configuration file of EngineConn is defined in the EnginePlugin and named log4j2-engineConn.xml (this is the conventional name and cannot be changed).
+
+5.	[**Compulsory**] Strictly differentiate log levels. Fatal-level logs should be thrown and the process exited using System.exit(-1) when the SpringCloud application is initialized. Error-level exceptions are those that developers must care about and handle; do not use them casually. The WARN level is for user-action exception logs and logs used to troubleshoot bugs later. INFO is for key process logs. Debug is a development-mode level; write as few debug logs as possible.
+
+6.	[**Compulsory**] Requirements: every module must have INFO-level logs; every key process must have INFO-level logs. Daemon threads must have WARN-level logs for cleaning up resources, etc.
+
+7.	[**Compulsory**] Exception information should include two kinds of information: the context (scene) information and the exception stack information. If it is not handled locally, throw it upward with keywords. Example: logger.error(parameters/objects.toString + "_" + e.getMessage(), e);
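+
+A minimal Java sketch of rules 3 and 7 (illustrative only, not Linkis code):
+
+```java
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class LogExample {
+    // Rule 3: obtain the logger through SLF4J, not through a backend's API directly.
+    private static final Logger logger = LoggerFactory.getLogger(LogExample.class);
+
+    public void run(String jobId) {
+        logger.info("job {} started", jobId);   // INFO: key process log
+        try {
+            // ... business logic ...
+        } catch (Exception e) {
+            // Rule 7: context information plus the exception stack.
+            logger.error("job " + jobId + "_" + e.getMessage(), e);
+        }
+    }
+}
+```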
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/Path_Usage.md b/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/Path_Usage.md
new file mode 100644
index 0000000..b9c17d3
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/Path_Usage.md
@@ -0,0 +1,15 @@
+Please note: Linkis provides a unified Storage module, so you must follow the Linkis path specification when using the path or configuring the path in the configuration file.
+
+
+
+1. [**Compulsory**] When using a file path, whether it is local, HDFS, or HTTP, the scheme information must be included. Among them:
+
+    - The scheme header for local files is: file:///;
+
+    - The scheme header for HDFS is: hdfs:///;
+
+    - The scheme header for HTTP is: http:///.
+
+
+
+2. There should be no special characters in the path. Try to use combinations of English letters, underscores, and numbers.
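+
+For example, scheme-qualified paths as they appear in configuration elsewhere in this documentation (the values are illustrative):
+
+```properties
+# Rule 1: every path carries its scheme header.
+WORKSPACE_USER_ROOT_PATH=file:///tmp/hadoop
+RESULT_SET_ROOT_PATH=hdfs:///tmp/linkis
+```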
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/README.md b/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/README.md
new file mode 100644
index 0000000..bde3f2d
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/README.md
@@ -0,0 +1,9 @@
+In order to standardize Linkis's community development environment, improve the output quality of subsequent development iterations of Linkis, and standardize the entire development and design process of Linkis, it is strongly recommended that Contributors follow the following development specifications:
+- [Exception Handling Specification](./Exception_Catch.md)
+- [Throwing exception specification](./Exception_Throws.md)
+- [Interface Specification](./API.md)
+- [Log constraint specification](./Log.md)
+- [Concurrency Specification](./Concurrent.md)
+- [Path Specification](./Path_Usage.md)
+
+**Note**: The development specifications of the initial version of Linkis1.0 are relatively brief, and will continue to be supplemented and improved with the iteration of Linkis. Contributors are welcome to provide their own opinions and comments.
diff --git a/Linkis-Doc-master/en_US/Development_Documents/Linkis_Compilation_Document.md b/Linkis-Doc-master/en_US/Development_Documents/Linkis_Compilation_Document.md
new file mode 100644
index 0000000..ee8b1c6
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Development_Documents/Linkis_Compilation_Document.md
@@ -0,0 +1,135 @@
+# Linkis compilation document
+
+## Directory
+
+- 1. How to compile the whole project of Linkis.
+- 2. How to compile a module.
+- 3. How to compile an engine.
+- 4. How to modify the version of Hadoop, Hive and Spark that Linkis depends on.
+
+## 1. Compile the whole project
+
+Environment requirements: The version of JDK must be **higher than JDK8**, both **Oracle/Sun** and **OpenJDK** are supported.
+
+After cloning the project from github, please use maven to compile the project. 
+
+**Please note**: We recommend using Hadoop-2.7.2, Hive-1.2.1, Spark-2.4.3, and Scala-2.11.8 to compile Linkis.
+
+If you want to use other versions of Hadoop, Hive, and Spark, please refer to: How to modify the version of Hadoop, Hive and Spark that Linkis depends on.
+
+(1) **If you are compiling the Linkis on your local machine for the first time, you must execute the following commands on the root directory beforehand:**
+
+```bash
+    cd wedatasphere-linkis-x.x.x
+    mvn -N  install
+```
+
+(2) Execute the following commands on the root directory:
+
+```bash
+    cd wedatasphere-linkis-x.x.x
+    mvn clean install
+```
+
+(3) Obtain installation package from the directory 'assembly-> target':
+
+```bash
+    ls wedatasphere-linkis-x.x.x/assembly/target/wedatasphere-linkis-x.x.x-dist.tar.gz
+```
+
+## 2. Compile a module
+
+After cloning project from github, please use maven to compile the project. 
+
+(1) **If you are compiling the Linkis on your local machine for the first time, you must execute the following commands on the root directory beforehand:**
+
+```bash
+    cd wedatasphere-linkis-x.x.x
+    mvn -N  install
+```
+
+(2) Switch to the corresponding module to compile it. An example of compiling the Entrance module is shown below.
+
+```bash   
+    cd wedatasphere-linkis-x.x.x/linkis-computation-governance/linkis-entrance
+    mvn clean install
+```
+
+(3) Obtain compiled installation package from 'target' directory in the corresponding module.
+
+```
+    ls wedatasphere-linkis-x.x.x/linkis-computation-governance/linkis-entrance/target/linkis-entrance.x.x.x.jar
+```
+
+## 3. Compile an engine
+
+An example of compiling the Spark engine is shown below:
+
+(1) **If you are compiling the Linkis on your local machine for the first time, you must execute the following commands on the root directory beforehand:**
+
+```bash
+    cd wedatasphere-linkis-x.x.x
+    mvn -N  install
+```
+
+(2) Switch to the directory where the Spark engine is located and use the following commands to compile:
+
+```bash   
+    cd wedatasphere-linkis-x.x.x/linkis-engineconn-plugins/engineconn-plugins/spark
+    mvn clean install
+```
+
+(3) Obtain the compiled installation package from the 'target' directory of the corresponding module.
+
+```
+    ls wedatasphere-linkis-x.x.x/linkis-engineconn-plugins/engineconn-plugins/spark/target/linkis-engineplugin-spark-x.x.x.zip
+```
+
+How to install the Spark engine separately? Please refer to the Linkis EngineConnPlugin installation document.
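+
+A minimal sketch of placing the compiled plug-in (assuming a standard Linkis deployment under `${LINKIS_HOME}`; the directory name is an assumption here, and the authoritative steps are in the EngineConnPlugin installation document):
+
+```bash
+# copy the freshly built engine plug-in into the deployed plug-in directory (hypothetical path)
+cp linkis-engineconn-plugins/engineconn-plugins/spark/target/linkis-engineplugin-spark-x.x.x.zip \
+   ${LINKIS_HOME}/lib/linkis-engineconn-plugins/
+```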
+
+## 4. How to modify the version of Hadoop, Hive and Spark that Linkis depends on
+
+Please note: Since Hadoop is a fundamental service in the big data area, Linkis must rely on it for compilation, while computing and storage engines such as Spark and Hive are optional. If you have no requirement for a certain engine, there is no need to set its engine version or compile its EngineConnPlugin.
+
+The way to modify the version of Hadoop is different from that of Spark, Hive and other computation engines. Please see instructions below:
+
+#### How to modify the version of Hadoop that Linkis relies on?
+
+Enter the root directory of Linkis and manually modify the Hadoop version in pom.xml.
+
+```bash
+    cd wedatasphere-linkis-x.x.x
+    vim pom.xml
+```
+
+```xml
+    <properties>
+        <hadoop.version>2.7.2</hadoop.version> <!-- Modify the Hadoop version here -->
+        <scala.version>2.11.8</scala.version>
+        <jdk.compile.version>1.8</jdk.compile.version>
+    </properties>
+```
+
+#### How to modify the version of Spark and Hive that Linkis relies on?
+
+Here is an example of modifying the Spark version. Enter the directory where the Spark engine is located and manually modify the Spark version in pom.xml.
+
+```bash
+    cd wedatasphere-linkis-x.x.x/linkis-engineconn-plugins/engineconn-plugins/spark
+    vim pom.xml
+```
+
+```xml
+    <properties>
+        <spark.version>2.4.3</spark.version> <!-- Modify the Spark version here -->
+    </properties>
+```
+
+Modifying the version of other engines is similar to that of Spark: enter the directory where the engine is located and manually modify the version in pom.xml.
+
+Then, please refer to How to compile an engine.
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Development_Documents/Linkis_Compile_and_Package.md b/Linkis-Doc-master/en_US/Development_Documents/Linkis_Compile_and_Package.md
new file mode 100644
index 0000000..52928bf
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Development_Documents/Linkis_Compile_and_Package.md
@@ -0,0 +1,155 @@
+# Linkis Compilation Document
+
+## Directory
+
+- [1. Fully compile Linkis](#1-Fully-compile-Linkis)
+
+- [2. Build a single module](#2-Build-a-single-module)
+
+- [3. Build an engine](#3-Build-an-engine)
+
+- [4. How to Modify Linkis dependency versions of Hadoop, Hive, Spark](#4-How-to-Modify-Linkis-dependency-versions-of-Hadoop,-Hive,-Spark)
+
+## 1. Fully compile Linkis
+
+**Environment requirements:** The version of JDK must be higher than **JDK8**; both **Oracle/Sun** and **OpenJDK** are supported.
+
+After getting the project code from Git, compile the project installation package using Maven.
+
+**Notice** : The official recommended versions for compiling Linkis are hadoop-2.7.2, hive-1.2.1, spark-2.4.3, and Scala-2.11.8.
+
+If you want to compile Linkis with another version of Hadoop, Hive or Spark, please refer to: [How to Modify Linkis dependency versions of Hadoop, Hive, Spark](#4-How-to-Modify-Linkis-dependency-versions-of-Hadoop,-Hive,-Spark)
+
+(1) **If you compile it locally for the first time, you must execute the following command** in the source package root directory of Linkis:
+
+```bash
+cd wedatasphere-linkis-x.x.x
+mvn -N  install
+```
+
+(2) Execute the following command in the source package root directory of Linkis:
+
+```bash
+cd wedatasphere-linkis-x.x.x
+mvn clean install
+```
+
+(3) Get the installation package from the project's 'assembly/target' directory:
+
+```bash
+ls wedatasphere-linkis-x.x.x/assembly/target/wedatasphere-linkis-x.x.x-dist.tar.gz
+```
+
+## 2. Build a single module
+
+After getting the project code from Git, use Maven to package the project installation package.
+
+(1) **If you use it locally for the first time, you must execute the following command** in the source package root directory of Linkis:
+
+```bash
+cd wedatasphere-linkis-x.x.x
+mvn -N  install
+```
+
+(2) Go to the corresponding module for compilation. For example, if you want to recompile Entrance, the commands are as follows:
+
+```bash
+cd wedatasphere-linkis-x.x.x/linkis-computation-governance/linkis-entrance
+mvn clean install
+```
+
+(3) Get the installation package. The compiled package will be found in the 'target' directory of the corresponding module:
+
+```
+ls wedatasphere-linkis-x.x.x/linkis-computation-governance/linkis-entrance/target/linkis-entrance.x.x.x.jar
+```
+
+## 3. Build an engine
+
+Here is an example of building the Spark engine of Linkis:
+
+(1) **If you use it locally for the first time, you must execute the following command** in the source package root directory of Linkis:
+
+```bash
+cd wedatasphere-linkis-x.x.x
+mvn -N  install
+```
+
+(2) Jump to the directory where the Spark engine is located for compilation and packaging. The command is as follows:
+
+```bash
+cd wedatasphere-linkis-x.x.x/linkis-engineconn-plugins/engineconn-plugins/spark
+mvn clean install
+```
+
+(3) Get the installation package. The compiled package will be found in the 'target' directory of the corresponding module:
+
+```
+ls  wedatasphere-linkis-x.x.x/linkis-engineconn-plugins/engineconn-plugins/spark/target/linkis-engineplugin-spark-x.x.x.zip
+```
+
+How do I install the Spark engine separately? Please refer to the EngineConnPlugin installation document in the Deployment_Documents directory.
+
+## 4. How to Modify Linkis dependency versions of Hadoop, Hive, Spark
+
+Please note: Hadoop is a fundamental big data service, so Linkis must rely on Hadoop for compilation, while computing engines such as Spark and Hive are optional.
+If you do not want to use an engine, you do not need to set the engine's version or compile its engine plug-in.
+
+Specifically, the version of Hadoop is modified in a different way than Spark, Hive, and other computing engines, as described below:
+
+#### How do I modify the version of Hadoop that Linkis relies on?
+
+Enter the source package root directory of Linkis, and manually modify the Hadoop version information of the pom.xml file, as follows:
+
+```bash
+cd wedatasphere-linkis-x.x.x
+vim pom.xml
+```
+
+```xml
+<properties>
+    <hadoop.version>2.7.2</hadoop.version> <!-- Change the version of Hadoop here -->
+    <scala.version>2.11.8</scala.version>
+    <jdk.compile.version>1.8</jdk.compile.version>
+</properties>
+```
+
+**Please note: If your Hadoop version is Hadoop 3, you need to modify the pom file of linkis-hadoop-common.**
+In Hadoop 2.x the HDFS-related classes are in the hadoop-hdfs module, but in Hadoop 3.x the corresponding classes were moved to the hadoop-hdfs-client module, so you need to modify this file:
+
+```
+File: Linkis/linkis-commons/linkis-hadoop-common/pom.xml
+Modify the dependency hadoop-hdfs to hadoop-hdfs-client:
+  <dependency>
+            <groupId>org.apache.hadoop</groupId>
+            <artifactId>hadoop-hdfs</artifactId> <!-- Replace this line with <artifactId>hadoop-hdfs-client</artifactId> -->
+            <version>${hadoop.version}</version>
+  </dependency>
+```
+
+#### How to modify the Spark and Hive versions that Linkis relies on?
+
+Here's an example of changing the version of Spark. Go to the directory where the Spark engine is located and manually modify the Spark version information of the pom.xml file as follows:
+
+```bash
+cd wedatasphere-linkis-x.x.x/linkis-engineconn-plugins/engineconn-plugins/spark
+vim pom.xml
+```
+
+```xml
+<properties>
+    <spark.version>2.4.3</spark.version> <!-- Change the Spark version number here -->
+</properties>
+```
+
+Modifying the version of another engine is similar to changing the Spark version: go to the directory where the engine is located and manually change the engine version information in the pom.xml file.
+
+Then refer to [Build an engine](#3-Build-an-engine).
diff --git a/Linkis-Doc-master/en_US/Development_Documents/Linkis_DEBUG.md b/Linkis-Doc-master/en_US/Development_Documents/Linkis_DEBUG.md
new file mode 100644
index 0000000..34e1a88
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Development_Documents/Linkis_DEBUG.md
@@ -0,0 +1,141 @@
+## 1 Preface
+&nbsp; &nbsp; &nbsp; &nbsp; Every Linkis microservice supports debugging; most of them support local debugging, while some only support remote debugging.
+
+1. Services that support local debugging
+- linkis-mg-eureka: the Main class for debugging is `com.webank.wedatasphere.linkis.eureka.SpringCloudEurekaApplication`
+- Other Linkis microservices have their own Main classes, as shown below
+linkis-cg-manager: `com.webank.wedatasphere.linkis.manager.am.LinkisManagerApplication`
+linkis-ps-bml: `com.webank.wedatasphere.linkis.bml.LinkisBMLApplication`
+linkis-ps-cs: `com.webank.wedatasphere.linkis.cs.server.LinkisCSApplication`
+linkis-cg-engineconnmanager: `com.webank.wedatasphere.linkis.ecm.server.LinkisECMApplication`
+linkis-cg-engineplugin: `com.webank.wedatasphere.linkis.engineplugin.server.LinkisEngineConnPluginServer`
+linkis-cg-entrance: `com.webank.wedatasphere.linkis.entrance.LinkisEntranceApplication`
+linkis-ps-publicservice: `com.webank.wedatasphere.linkis.jobhistory.LinkisPublicServiceApp`
+linkis-ps-datasource: `com.webank.wedatasphere.linkis.metadata.LinkisDataSourceApplication`
+linkis-mg-gateway: `com.webank.wedatasphere.linkis.gateway.springcloud.LinkisGatewayApplication`
+
+2. Services that only support remote debugging:
+The Engine (EngineConn) services started by the EngineConnManager (ECM) only support remote debugging.
+
+## 2. Local debugging service steps
+&nbsp; &nbsp; &nbsp; &nbsp; All services of Linkis and DSS rely on Eureka, so you need to start the Eureka service first; you can also reuse an Eureka instance that is already running. Once Eureka is started, you can start the other services.
+
+2.1 Eureka service start
+1. If you do not want the default port 20303, you can modify the port configuration:
+
+```yml
+File path: conf/application-eureka.yml
+Port to be modified in the config file:
+
+server:
+    port: 8080 # Port to set up
+```
+
+2. Then add a debug configuration in IDEA
+
+You can do this by clicking Run or by clicking Add Configuration as shown in the image below.
+
+![01](../Images/Tunning_and_Troubleshooting/debug-01.png)
+
+3. Then click Add Application and modify the information
+
+- Set the debug name first: Eureka, for example
+- Then set the Main class:
+`com.webank.wedatasphere.linkis.eureka.SpringCloudEurekaApplication`
+- Finally, set the Class Path for the service. For Eureka, the classPath module is linkis-eureka
+
+![02](../Images/Tunning_and_Troubleshooting/debug-02.png)
+
+4. Click the Debug button to start the Eureka service, and access the Eureka page at http://localhost:8080/
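+
+A quick way to confirm that Eureka is up (a sketch; adjust the port if you changed it above) is to query its REST endpoint:
+
+```bash
+# lists the applications currently registered with Eureka
+curl http://localhost:8080/eureka/apps
+```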
+
+![03](../Images/Tunning_and_Troubleshooting/debug-03.png)
+
+2.2 Other services
+
+1. The Eureka configuration of the corresponding service needs to be modified in the following file:
+
+```
+    conf/application-linkis.yml
+```
+Change the corresponding Eureka address to the Eureka service that has been started:
+
+```
+    eureka:
+      client:
+        serviceUrl:
+          defaultZone: http://localhost:8080/eureka/
+```
+
+2. Modify the configuration related to Linkis. The general configuration file is conf/linkis.properties, and the module-specific configuration of each service is in the properties file in the conf directory whose name begins with the module name.
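+
+For example, the database connection in conf/linkis.properties usually needs to point at your own MySQL instance (a hedged sketch; the values below are placeholders):
+
+```
+wds.linkis.server.mybatis.datasource.url=jdbc:mysql://127.0.0.1:3306/linkis?characterEncoding=UTF-8
+wds.linkis.server.mybatis.datasource.username=linkis_user
+wds.linkis.server.mybatis.datasource.password=linkis_password
+```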
+
+3. Then add the debugging service
+
+The Main Class of each service is its own Main class, as listed in the preface.
+The Class Path of each service is the corresponding module:
+
+```
+linkis-cg-manager: linkis-application-manager
+linkis-ps-bml: linkis-bml
+linkis-ps-cs: linkis-cs-server
+linkis-cg-engineconnmanager: linkis-engineconn-manager-server
+linkis-cg-engineplugin: linkis-engineconn-plugin-server
+linkis-cg-entrance: linkis-entrance
+linkis-ps-publicservice: linkis-jobhistory
+linkis-ps-datasource: linkis-metadata
+linkis-mg-gateway: linkis-spring-cloud-gateway
+```
+
+And check the option to include dependencies with "Provided" scope:
+
+![06](../Images/Tunning_and_Troubleshooting/debug-06.png)
+
+4. Then start the service and you can see that the service is registered on the Eureka page:
+
+![05](../Images/Tunning_and_Troubleshooting/debug-05.png)
+
+Note: linkis-ps-publicservice needs the public-module module added to its POM:
+
+```
+<dependency>
+    <groupId>com.webank.wedatasphere.linkis</groupId>
+    <artifactId>public-module</artifactId>
+    <version>${linkis.version}</version>
+</dependency>
+```
+
+## 3. Steps of remote debugging service
+&nbsp; &nbsp; &nbsp; &nbsp; Each service supports remote debugging, but you need to turn it on ahead of time. There are two types of remote debugging, one is the remote debugging of Linkis common service, and the other is the remote debugging of EngineConn, which are described as follows:
+
+1. Remote debugging of common service:
+
+A. First, modify the startup script file of the corresponding service under the sbin/ext directory, and add a debug port:
+
+```
+export SERVER_JAVA_OPTS=" -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=10092"
+```
+
+The added option is `-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=10092`; the port may conflict with an existing one and can be changed to any available port.
+
+B. Create a new remote debug configuration in IDEA. Select Remote first, then add the host and port of the service, and then select the module to debug
+
+![07](../Images/Tunning_and_Troubleshooting/debug-07.png)
+
+C. Then click the Debug button to start remote debugging
+
+![08](../Images/Tunning_and_Troubleshooting/debug-08.png)
+
+2. Remote debugging of engineConn:
+
+A. Add the following configuration items to the linkis-engineconn.properties file corresponding to EngineConn
+```
+wds.linkis.engineconn.debug.enable=true
+```
+
+This configuration item will randomly assign a debug port when engineConn starts.
+
+B. In the first line of the engineConn log, the actual assigned port is printed.
+```
+      Listening for transport dt_socket at address: 26072
+```
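+
+Before attaching, you can verify that the port is actually listening (a hedged check on the EngineConn host; replace the port with the one printed in your log):
+
+```bash
+# show the listening TCP socket opened by the JDWP agent
+netstat -tnlp | grep 26072
+```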
+
+C. Create a new remote debug in IDEA. The steps have been described in the previous section and will not be repeated here.
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Development_Documents/New_EngineConn_Development.md b/Linkis-Doc-master/en_US/Development_Documents/New_EngineConn_Development.md
new file mode 100644
index 0000000..d45eedd
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Development_Documents/New_EngineConn_Development.md
@@ -0,0 +1,77 @@
+## How To Quickly Implement A New Engine
+
+Implementing a new engine means implementing a new EngineConnPlugin (ECP), i.e. an engine plugin. The specific steps are as follows:
+
+1. Create a new Maven module and introduce the Maven dependency of ECP:
+```
+<dependency>
+<groupId>com.webank.wedatasphere.linkis</groupId>
+<artifactId>linkis-engineconn-plugin-core</artifactId>
+<version>${linkis.version}</version>
+</dependency>
+```
+2. Implement the main interfaces of ECP:
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;a) EngineConnPlugin: when starting an "EngineConn", Linkis first finds the corresponding "EngineConnPlugin" class and uses it as the entry point to obtain the implementations of the other core interfaces; it is the main interface that must be implemented.
+    
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;b)EngineConnFactory, which implements the logic of how to start an engine connector and how to start an engine executor, is an interface that must be implemented.
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;b.a Implement the "createEngineConn" method: return an "EngineConn" object, where "getEngine" returns an object that encapsulates the connection information with the underlying engine, and also contains Engine type information.
+    
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;b.b For engines that only support a single computing scenario, inherit "SingleExecutorEngineConnFactory" class and implement "createExecutor" method which returns the corresponding Executor.
+    
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;b.c For engines that support multiple computing scenarios, you need to inherit "MultiExecutorEngineConnFactory" and implement an ExecutorFactory for each computing type. "EngineConnPlugin" will obtain all ExecutorFactory through reflection and return the corresponding Executor according to the actual situation.
+    
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;c)EngineConnResourceFactory, it is used to limit the resources required to start an engine. Before the engine starts, it will use this as the basis to apply for resources from the "Linkis Manager". Not required, "GenericEngineResourceFactory" can be used by default.
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;d)EngineLaunchBuilder, it is used to encapsulate the necessary information that "EngineConnManager" can parse into the startup command. Not necessary, you can directly inherit "JavaProcessEngineConnLaunchBuilder".
+
+3. Implement Executor. As the executor of a real computing scene, Executor is the actual execution unit of computing logic. It also abstracts various specific capabilities of the engine and provides services such as locking, accessing status and obtaining logs. According to actual needs, Linkis provides the following derived Executor base classes by default. The class names and main functions are as follows:
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;a) SensibleExecutor: 
+       
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; i. Executor has multiple states, allowing Executor to switch states.
+         
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;ii. After the Executor switches the state, operations such as notifications are allowed. 
+         
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;b) YarnExecutor: refers to the Yarn-type engine, which can obtain the "applicationId", "applicationURL" and queue.
+       
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;c) ResourceExecutor: refers to the engine's ability to dynamically change resources; it cooperates with the "requestExpectedResource" method to apply to RM for new resources each time it wants to change resources, and with the "resourceUpdate" method to report to RM each time the engine's actual resource usage changes.
+       
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;d) AccessibleExecutor: is a very important Executor base class. If the user's Executor inherits the base class, it means that the Engine can be accessed. Here we need to distinguish between "SensibleExecutor"'s "state" method and "AccessibleExecutor"'s "getEngineStatus" method. "state" method is used to get the engine status, and "getEngineStatus" is used to get the basic indicator metric data such as engine status, load and concurrency.
+       
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;e) At the same time, if AccessibleExecutor is inherited, it will trigger the Engine process to instantiate multiple "EngineReceiver" methods. "EngineReceiver" is used to process RPC requests from Entrance, EM and "LinkisMaster", marking the engine as an accessible engine. If users have special RPC requirements, they can communicate with "AccessibleExecutor" by implementing the "RPCService" interface.
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;f) ExecutableExecutor: it is a resident Executor base class. The resident Executor includes: Streaming applications in the production center, steps specified to run in independent mode after submission to "Schedulis", business applications of business users, etc.
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;g) StreamingExecutor: inherited from "ExecutableExecutor", it needs the ability to diagnose, do checkpoints, collect job information and monitor alarms.
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;h) ComputationExecutor: a commonly used interactive engine Executor, which handles interactive execution tasks and has interactive capabilities such as status query and task killing.
+
+             
+## Actual Case         
+The following takes the Hive engine as a case to illustrate the implementation of each interface. The figure below shows all the core classes needed to implement a Hive engine.
+
+Hive engine is an interactive engine, so when implementing Executor, it inherits "ComputationExecutor" and introduces the following maven dependencies: 
+
+``` 
+<dependency>
+<groupId>com.webank.wedatasphere.linkis</groupId>
+<artifactId>linkis-computation-engineconn</artifactId>
+<version>${linkis.version}</version>
+</dependency>
+```
+             
+As a subclass of "ComputationExecutor", "HiveEngineConnExecutor" implements the "executeLine" method. This method receives one line of an execution statement and, after calling the Hive interface to execute it, returns different "ExecuteResponse" objects to indicate success or failure. In this method, result sets, logs and progress are transmitted through the interfaces provided by "engineExecutorContext".
+
+The Hive engine only needs an Executor that executes HQL, so it is a single-executor engine. Therefore, "HiveEngineConnFactory" inherits "SingleExecutorEngineConnFactory" and implements the following two interfaces:
+a) createEngineConn: creates an object that contains "UserGroupInformation", "SessionState" and "HiveConf" as an encapsulation of the connection information with the underlying engine, sets it on the EngineConn object, and returns it.
+b) createExecutor: creates a "HiveEngineConnExecutor" executor object based on the current engine connection information.
+
+The Hive engine is an ordinary Java process, so when implementing "EngineConnLaunchBuilder", it directly inherits "JavaProcessEngineConnLaunchBuilder". Settings such as memory size, Java parameters and classpath can be adjusted through configuration; please refer to the "EnvConfiguration" class for details.
+
+The Hive engine uses "LoadInstanceResource" resources, so there is no need to implement "EngineResourceFactory"; the default "GenericEngineResourceFactory" is used directly, and the amount of resources is adjusted through configuration. Refer to the "EngineConnPluginConf" class for details.
+
+Implement "HiveEngineConnPlugin" and provide methods for creating the above implementation classes.
+
+
diff --git a/Linkis-Doc-master/en_US/Engine_Usage_Documentations/Hive_User_Manual.md b/Linkis-Doc-master/en_US/Engine_Usage_Documentations/Hive_User_Manual.md
new file mode 100644
index 0000000..8262706
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Engine_Usage_Documentations/Hive_User_Manual.md
@@ -0,0 +1,81 @@
+# Hive engine usage documentation
+
+This article mainly introduces the configuration, deployment and use of Hive engine in Linkis1.0.
+
+## 1. Environment configuration before Hive engine use
+
+If you want to use the hive engine on your server, you need to ensure that the following environment variables have been set correctly and that the user who started the engine has these environment variables.
+
+It is strongly recommended that you check these environment variables of the executing user before executing hive tasks; a quick check is sketched below Table 1-1.
+
+| Environment variable name | Environment variable content | Remarks |
+|-----------------|----------------|------|
+| JAVA_HOME | JDK installation path | Required |
+| HADOOP_HOME | Hadoop installation path | Required |
+| HADOOP_CONF_DIR | Hadoop configuration path | Required |
+| HIVE_CONF_DIR | Hive configuration path | Required |
+
+Table 1-1 Environmental configuration list
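+
+A minimal sketch of such a check (assuming a POSIX shell on the engine host):
+
+```bash
+# print the variables the Hive engine expects; empty output indicates a missing setting
+for v in JAVA_HOME HADOOP_HOME HADOOP_CONF_DIR HIVE_CONF_DIR; do
+    echo "$v=$(printenv $v)"
+done
+```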
+
+## 2. Hive engine configuration and deployment
+
+### 2.1 Hive version selection and compilation
+
+Both hive1.x and hive2.x are supported; the default is Hive on MapReduce. If you want to change to Hive on Tez, you need to make some changes in accordance with this PR:
+
+<https://github.com/WeBankFinTech/Linkis/pull/541>
+
+The hive version supported by default is 1.2.1. If you want to modify the hive version, for example to 2.3.3, you can find the linkis-engineplugin-hive module, change the \<hive.version\> tag to 2.3.3, and then compile this module separately, as sketched below.
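+
+For example (a sketch assuming the hive module follows the same source layout as the Spark engine shown in the Linkis compilation document):
+
+```bash
+cd wedatasphere-linkis-x.x.x/linkis-engineconn-plugins/engineconn-plugins/hive
+mvn clean install
+```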
+
+### 2.2 hive engineConn deployment and loading
+
+If you have compiled your hive engine plug-in, you need to put the new plug-in in the specified location before it can be loaded. You can refer to the following article for details:
+
+https://github.com/WeBankFinTech/Linkis/wiki/EngineConnPlugin%E5%BC%95%E6%93%8E%E6%8F%92%E4%BB%B6%E5%AE%89%E8%A3%85%E6%96%87%E6%A1%A3
+
+### 2.3 Hive engine tags
+
+Linkis1.0 selects engines through tags, so we need to insert tag data into our database. The way of inserting is shown below:
+
+https://github.com/WeBankFinTech/Linkis/wiki/EngineConnPlugin%E5%BC%95%E6%93%8E%E6%8F%92%E4%BB%B6%E5%AE%89%E8%A3%85%E6%96%87%E6%A1%A3#22-%E7%AE%A1%E7%90%86%E5%8F%B0configuration%E9%85%8D%E7%BD%AE%E4%BF%AE%E6%94%B9%E5%8F%AF%E9%80%89
+
+## 3. Use of hive engine
+
+### Preparation for operation, queue setting
+
+Hive's MapReduce tasks require Yarn resources, so you need to set up the queue at the beginning.
+
+![](../Images/EngineUsage/queue-set.png)
+
+Figure 3-1 Queue settings
+
+### 3.1 How to use Scriptis
+
+Using Scriptis is the simplest way. You can directly enter Scriptis, right-click a directory, create a new hive script and write HiveQL code.
+
+The hive engine works by instantiating Hive's Driver instance, which then submits the task, obtains the result set and displays it.
+
+![](../Images/EngineUsage/hive-run.png)
+
+Figure 3-2 Screenshot of the execution effect of hivesql
+
+### 3.2 How to use workflow
+
+The DSS workflow also has a hive node. You can drag the node into a workflow, double-click it to enter and edit the code, and then execute it as part of the workflow.
+
+![](../Images/EngineUsage/workflow.png)
+
+Figure 3-5 The node where the workflow executes hive
+
+### 3.3 How to use Linkis Client
+
+Linkis also provides a client method to call hive tasks through the SDK provided by LinkisClient. Both Java and Scala invocation methods are provided; for specific usage, refer to <https://github.com/WeBankFinTech/Linkis/wiki/Linkis1.0%E7%94%A8%E6%88%B7%E4%BD%BF%E7%94%A8%E6%96%87%E6%A1%A3>.
+
+## 4. Hive engine user settings
+
+In addition to the above engine configuration, users can also make custom settings, including the memory size of the hive Driver process, etc.
+
+![](../Images/EngineUsage/hive-config.png)
+
+Figure 4-1 User-defined configuration management console of hive
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Engine_Usage_Documentations/JDBC_User_Manual.md b/Linkis-Doc-master/en_US/Engine_Usage_Documentations/JDBC_User_Manual.md
new file mode 100644
index 0000000..35f3d7b
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Engine_Usage_Documentations/JDBC_User_Manual.md
@@ -0,0 +1,53 @@
+# JDBC engine usage documentation
+
+This article mainly introduces the configuration, deployment and use of JDBC engine in Linkis1.0.
+
+## 1. Environment configuration before using the JDBC engine
+
+If you want to use the JDBC engine on your server, you need to prepare the JDBC connection information, such as the connection address, user name and password of the MySQL database, etc.
+
+## 2. JDBC engine configuration and deployment
+
+### 2.1 JDBC version selection and compilation
+
+The JDBC engine does not need to be compiled by the user, and the compiled JDBC engine plug-in package can be used directly. Drivers that have been provided include MySQL, PostgreSQL, etc.
+
+### 2.2 JDBC engineConn deployment and loading
+
+Here you can use the default loading method; just install it according to the standard procedure and it can be used normally.
+
+### 2.3 JDBC engine tags
+
+Here you can use the default dml.sql to insert the tag data, and the engine can then be used normally.
+
+## 3. The use of JDBC engine
+
+### Preparation for operation
+
+You need to configure JDBC connection information, including connection address information and user name and password.
+
+![](../Images/EngineUsage/jdbc-conf.png)
+
+Figure 3-1 JDBC configuration information
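+
+Typical connection parameters (a hedged sketch; the exact key names depend on your Linkis version and should be treated as assumptions, and the values are placeholders):
+
+```
+wds.linkis.jdbc.connect.url=jdbc:mysql://127.0.0.1:3306/test
+wds.linkis.jdbc.username=db_user
+wds.linkis.jdbc.password=db_password
+```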
+
+### 3.1 How to use Scriptis
+
+The way to use Scriptis is the simplest. You can go directly to Scriptis, right-click the directory and create a new JDBC script, write JDBC code and click Execute.
+
+The execution principle of JDBC is to load the JDBC Driver, submit the SQL to the database server for execution, then obtain the result set and return it.
+
+![](../Images/EngineUsage/jdbc-run.png)
+
+Figure 3-2 Screenshot of the execution effect of JDBC
+
+### 3.2 How to use workflow
+
+The DSS workflow also has a JDBC node. You can drag the node into a workflow, double-click it to enter and edit the code, and then execute it as part of the workflow.
+
+### 3.3 How to use Linkis Client
+
+Linkis also provides a client method to call JDBC tasks through the SDK provided by LinkisClient. Both Java and Scala invocation methods are provided; for specific usage, refer to <https://github.com/WeBankFinTech/Linkis/wiki/Linkis1.0%E7%94%A8%E6%88%B7%E4%BD%BF%E7%94%A8%E6%96%87%E6%A1%A3>.
+
+## 4. JDBC engine user settings
+
+JDBC user settings are mainly the JDBC connection information, but it is recommended that users encrypt and manage the password and other sensitive information.
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Engine_Usage_Documentations/Python_User_Manual.md b/Linkis-Doc-master/en_US/Engine_Usage_Documentations/Python_User_Manual.md
new file mode 100644
index 0000000..64724e9
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Engine_Usage_Documentations/Python_User_Manual.md
@@ -0,0 +1,61 @@
+# Python engine usage documentation
+
+This article mainly introduces the configuration, deployment and use of the Python engine in Linkis1.0.
+
+## 1. Environment configuration before using Python engine
+
+If you want to use the python engine on your server, you need to ensure that the python executable is on the user's PATH and has execution permission.
+
+| Environment variable name | Environment variable content | Remarks |
+|------------|-----------------|--------------------------------|
+| python | python execution environment | Anaconda's python executor is recommended |
+
+Table 1-1 Environmental configuration list
+
+## 2. Python engine configuration and deployment
+
+### 2.1 Python version selection and compilation
+
+Python supports both python2 and python3; you can simply change the configuration to switch the Python version, without recompiling the python engine.
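+
+A hypothetical example of the switch (the exact configuration key name may differ between Linkis versions; treat it as an assumption):
+
+```
+python.version=python3
+```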
+
+### 2.2 python engineConn deployment and loading
+
+Here you can use the default loading method, and the engine can be used normally.
+
+### 2.3 tags of python engine
+
+Here you can use the default dml.sql to insert the tag data, and the engine can then be used normally.
+
+## 3. Use of Python engine
+
+### Preparation for operation
+
+Before submitting python tasks on linkis, you only need to make sure that the python executable is on your user's PATH.
+
+### 3.1 How to use Scriptis
+
+The way to use Scriptis is the simplest. You can directly enter Scriptis, right-click the directory and create a new python script, write python code and click Execute.
+
+The execution logic of python is to start a Python gateway through Py4j, and then the Python engine submits the code to the python executor for execution.
+
+![](../Images/EngineUsage/python-run.png)
+
+Figure 3-1 Screenshot of the execution effect of python
+
+### 3.2 How to use workflow
+
+The DSS workflow also has a python node. You can drag the node into a workflow, double-click it to enter and edit the code, and then execute it as part of the workflow.
+
+### 3.3 How to use Linkis Client
+
+Linkis also provides a client method to call python tasks through the SDK provided by LinkisClient. Both Java and Scala invocation methods are provided; for specific usage, refer to <https://github.com/WeBankFinTech/Linkis/wiki/Linkis1.0%E7%94%A8%E6%88%B7%E4%BD%BF%E7%94%A8%E6%96%87%E6%A1%A3>.
+
+## 4. Python engine user settings
+
+In addition to the above engine configuration, users can also make custom settings, such as the version of python and some modules that python needs to load.
+
+![](../Images/EngineUsage/jdbc-conf.png)
+
+Figure 4-1 User-defined configuration management console of python
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Engine_Usage_Documentations/README.md b/Linkis-Doc-master/en_US/Engine_Usage_Documentations/README.md
new file mode 100644
index 0000000..cb9e5ef
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Engine_Usage_Documentations/README.md
@@ -0,0 +1,25 @@
+## 1 Overview
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Linkis, as a powerful computing middleware, can easily interface with different computing engines. By shielding the usage details of different computing engines, it provides a unified use interface, which greatly reduces the operation and maintenance cost of deploying and applying Linkis's big data platform. At present, Linkis has docked several mainstream computing engines, which basically cover the data requirements in production, in order t [...]
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;An engine is a component that provides users with data processing and analysis capabilities. Currently, the engines connected to Linkis include mainstream big data computing engines such as Spark, Hive and Presto, as well as engines for processing data in scripts such as python and Shell. DataSphereStudio is a one-stop data operation platform docked with Linkis. Users can conveniently use the engine supported by Li [...]
+
+| Engine | Whether to support Scriptis | Whether to support workflow |
+| ---- | ---- | ---- |
+| Spark | Support | Support |
+| Hive | Support | Support |
+| Presto | Support | Support |
+| ElasticSearch | Support | Support |
+| Python | Support | Support |
+| Shell | Support | Support |
+| JDBC | Support | Support |
+| MySQL | Support | Support |
+
+## 2. Document structure
+You can refer to the following documents for the engines that have already been connected.
+- [Spark Engine Usage Document](./../Engine_Usage_Documentations/Spark_User_Manual.md)
+- [Hive Engine Usage Document](./../Engine_Usage_Documentations/Hive_User_Manual.md)
+- [Presto Engine Usage Document](./../Engine_Usage_Documentations/Presto_User_Manual.md)
+- [ElasticSearch Engine Usage Document](./../Engine_Usage_Documentations/ElasticSearch_User_Manual.md)
+- [Python Engine Usage Document](./../Engine_Usage_Documentations/Python_User_Manual.md)
+- [Shell Engine Usage Document](./../Engine_Usage_Documentations/Shell_User_Manual.md)
+- [JDBC Engine Usage Document](./../Engine_Usage_Documentations/JDBC_User_Manual.md)
+- [MLSQL Engine Usage Document](./../Engine_Usage_Documentations/MLSQL_User_Manual.md)
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Engine_Usage_Documentations/Shell_User_Manual.md b/Linkis-Doc-master/en_US/Engine_Usage_Documentations/Shell_User_Manual.md
new file mode 100644
index 0000000..292d2c4
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Engine_Usage_Documentations/Shell_User_Manual.md
@@ -0,0 +1,55 @@
+# Shell engine usage document
+
+This article mainly introduces the configuration, deployment and use of the Shell engine in Linkis1.0.
+## 1. The environment configuration before using the Shell engine
+
+If you want to use the shell engine on your server, you need to ensure that the bash executable is on the user's PATH and has execution permission.
+
+| Environment variable name | Environment variable content | Remarks             |
+|---------------------------|------------------------------|---------------------|
+| sh                        | bash execution environment   | bash is recommended |
+
+Table 1-1 Environmental configuration list
+
+## 2. Shell engine configuration and deployment
+
+### 2.1 Shell version selection and compilation
+
+The shell engine does not need to be compiled by the user, and the compiled shell engine plug-in package can be used directly.
+### 2.2 shell engineConn deployment and loading
+
+Here you can use the default loading method, and the engine can be used normally.
+
+### 2.3 Labels of the shell engine
+
+Here you can use the default dml.sql to insert the tag data, and the engine can then be used normally.
+
+## 3. Use of Shell Engine
+
+### Preparation for operation
+
+Before submitting shell tasks on linkis, you only need to ensure that the shell executable is on your user's $PATH.
+
+### 3.1 How to use Scriptis
+
+The use of Scriptis is the simplest. You can directly enter Scriptis, right-click the directory and create a new shell script, write shell code and click Execute.
+
+The execution principle of the shell engine is that it starts a system process through Java's built-in ProcessBuilder, redirects the process output to the engine, and writes it to the log.
+
+![](../Images/EngineUsage/shell-run.png)
+
+Figure 3-1 Screenshot of shell execution effect
+
+### 3.2 How to use workflow
+
+The DSS workflow also has a shell node. You can drag the node into a workflow, double-click it to enter and edit the code, and then execute it as part of the workflow.
+
+One point needs attention in shell execution: if the script contains multiple lines, the success of the workflow node is determined by the last command. For example, even if the first two lines fail, as long as the shell return value of the last line is 0, the node is reported as successful; see the illustration below.
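+
+A small illustration of this behaviour (a sketch, not tied to any particular workflow):
+
+```bash
+grep pattern /nonexistent/file   # fails with a non-zero exit code
+echo "finish"                    # exits 0, so the node is reported as successful
+```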
+
+### 3.3 How to use Linkis Client
+
+Linkis also provides a client method to call shell tasks through the SDK provided by LinkisClient. Both Java and Scala invocation methods are provided; for specific usage, refer to <https://github.com/WeBankFinTech/Linkis/wiki/Linkis1.0%E7%94%A8%E6%88%B7%E4%BD%BF%E7%94%A8%E6%96%87%E6%A1%A3>.
+
+## 4. Shell engine user settings
+
+The shell engine can generally set the maximum memory of the engine JVM.
diff --git a/Linkis-Doc-master/en_US/Engine_Usage_Documentations/Spark_User_Manual.md b/Linkis-Doc-master/en_US/Engine_Usage_Documentations/Spark_User_Manual.md
new file mode 100644
index 0000000..9932184
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Engine_Usage_Documentations/Spark_User_Manual.md
@@ -0,0 +1,91 @@
+# Spark engine usage documentation
+
+This article mainly introduces the configuration, deployment and use of spark engine in Linkis1.0.
+
+## 1. Environment configuration before using Spark engine
+
+If you want to use the spark engine on your server, you need to ensure that the following environment variables have been set correctly and that the user who started the engine has these environment variables.
+
+It is strongly recommended that you check these environment variables of the executing user before executing spark tasks; a quick check is sketched below Table 1-1.
+
+| Environment variable name | Environment variable content | Remarks |
+|---------------------------|------------------------------|------|
+| JAVA_HOME | JDK installation path | Required |
+| HADOOP_HOME | Hadoop installation path | Required |
+| HADOOP_CONF_DIR | Hadoop configuration path | Required |
+| HIVE_CONF_DIR | Hive configuration path | Required |
+| SPARK_HOME | Spark installation path | Required |
+| SPARK_CONF_DIR | Spark configuration path | Required |
+| python | python | Anaconda's python is recommended as the default python |
+
+Table 1-1 Environmental configuration list
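+
+Before submitting, a quick sanity check of the Spark installation can help (a hedged sketch, assuming a POSIX shell on the engine host):
+
+```bash
+# verify SPARK_HOME points at a working installation
+echo "SPARK_HOME=$SPARK_HOME"
+"$SPARK_HOME/bin/spark-submit" --version
+```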
+
+## 2. Configuration and deployment of Spark engine
+
+### 2.1 Selection and compilation of spark version
+
+In theory, Linkis1.0 supports all versions of spark2.x and above. Spark 2.4.3 is the default supported version. If you want to use another spark version, such as spark2.1.0, you only need to modify the version of the spark plug-in and then compile it. Specifically, you can find the linkis-engineplugin-spark module, change the \<spark.version\> tag to 2.1.0, and then compile this module separately, as sketched below.
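+
+For example (the module path as used in the Linkis compilation document):
+
+```bash
+cd wedatasphere-linkis-x.x.x/linkis-engineconn-plugins/engineconn-plugins/spark
+mvn clean install
+```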
+
+### 2.2 spark engineConn deployment and loading
+
+If you have compiled your spark engine plug-in, you need to put the new plug-in in the specified location before it can be loaded. You can refer to the following article for details:
+
+https://github.com/WeBankFinTech/Linkis/wiki/EngineConnPlugin%E5%BC%95%E6%93%8E%E6%8F%92%E4%BB%B6%E5%AE%89%E8%A3%85%E6%96%87%E6%A1%A3
+
+### 2.3 tags of spark engine
+
+Linkis1.0 selects engines through tags, so we need to insert tag data into our database. The way of inserting is shown below:
+
+https://github.com/WeBankFinTech/Linkis/wiki/EngineConnPlugin%E5%BC%95%E6%93%8E%E6%8F%92%E4%BB%B6%E5%AE%89%E8%A3%85%E6%96%87%E6%A1%A3#22-%E7%AE%A1%E7%90%86%E5%8F%B0configuration%E9%85%8D%E7%BD%AE%E4%BF%AE%E6%94%B9%E5%8F%AF%E9%80%89
+
+## 3. Use of spark engine
+
+### Preparation for operation, queue setting
+
+Because spark execution requires queue resources, the user must set up a queue that they have permission to use before executing.
+
+![](../Images/EngineUsage/queue-set.png)
+
+Figure 3-1 Queue settings
+
+### 3.1 How to use Scriptis
+
+The use of Scriptis is the simplest. You can directly enter Scriptis and create a new sql, scala or pyspark script for execution.
+
+The sql method is the simplest. You can create a new sql script, write it and execute it; the progress will be displayed during execution. If the user does not yet have a spark engine, executing the sql will start a spark session (this may take some time). After the SparkSession is initialized, the sql starts to execute.
+
+![](../Images/EngineUsage/sparksql-run.png)
+
+Figure 3-2 Screenshot of the execution effect of sparksql
+
+For spark-scala tasks, we have initialized sqlContext and other variables, and users can directly use this sqlContext to execute sql.
+
+![](../Images/EngineUsage/scala-run.png)
+
+Figure 3-3 Execution effect diagram of spark-scala
+
+Similarly, in the way of pyspark, we have also initialized the SparkSession, and users can directly use spark.sql to execute SQL.
+
+![](../Images/EngineUsage/pyspakr-run.png)
+
+Figure 3-4 pyspark execution mode
+
+### 3.2 How to use workflow
+
+The DSS workflow also has three spark nodes. You can drag workflow nodes such as sql, scala or pyspark nodes into a workflow, double-click them to enter and edit the code, and then execute them as part of the workflow.
+
+![](../Images/EngineUsage/workflow.png)
+
+Figure 3-5 The node where the workflow executes spark
+
+### 3.3 How to use Linkis Client
+
+Linkis also provides a client method to call spark tasks through the SDK provided by LinkisClient. Both Java and Scala invocation methods are provided; for specific usage, refer to <https://github.com/WeBankFinTech/Linkis/wiki/Linkis1.0%E7%94%A8%E6%88%B7%E4%BD%BF%E7%94%A8%E6%96%87%E6%A1%A3>.
+
+## 4. Spark engine user settings
+
+In addition to the above engine configuration, users can also make custom settings, such as the number of spark session executors and the memory of the executors. These parameters are for users to set their own spark parameters more freely, and other spark parameters can also be modified, such as the python version of pyspark.
+
+![](../Images/EngineUsage/spark-conf.png)
+
+Figure 4-1 Spark user-defined configuration management console
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Add_an_EngineConn/add_an_EngineConn_flow_chart.png b/Linkis-Doc-master/en_US/Images/Architecture/Add_an_EngineConn/add_an_EngineConn_flow_chart.png
new file mode 100644
index 0000000..2e71b42
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Add_an_EngineConn/add_an_EngineConn_flow_chart.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/EngineConn/engineconn-01.png b/Linkis-Doc-master/en_US/Images/Architecture/EngineConn/engineconn-01.png
new file mode 100644
index 0000000..d95da89
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/EngineConn/engineconn-01.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Gateway/gateway_server_dispatcher.png b/Linkis-Doc-master/en_US/Images/Architecture/Gateway/gateway_server_dispatcher.png
new file mode 100644
index 0000000..9cdc918
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Gateway/gateway_server_dispatcher.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Gateway/gateway_server_global.png b/Linkis-Doc-master/en_US/Images/Architecture/Gateway/gateway_server_global.png
new file mode 100644
index 0000000..584574e
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Gateway/gateway_server_global.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Gateway/gatway_websocket.png b/Linkis-Doc-master/en_US/Images/Architecture/Gateway/gatway_websocket.png
new file mode 100644
index 0000000..fcac318
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Gateway/gatway_websocket.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Job_submission_preparation_and_execution_process/execution.png b/Linkis-Doc-master/en_US/Images/Architecture/Job_submission_preparation_and_execution_process/execution.png
new file mode 100644
index 0000000..1abc43b
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Job_submission_preparation_and_execution_process/execution.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Job_submission_preparation_and_execution_process/orchestrate.png b/Linkis-Doc-master/en_US/Images/Architecture/Job_submission_preparation_and_execution_process/orchestrate.png
new file mode 100644
index 0000000..9de0a5d
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Job_submission_preparation_and_execution_process/orchestrate.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Job_submission_preparation_and_execution_process/overall.png b/Linkis-Doc-master/en_US/Images/Architecture/Job_submission_preparation_and_execution_process/overall.png
new file mode 100644
index 0000000..68b5e19
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Job_submission_preparation_and_execution_process/overall.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Job_submission_preparation_and_execution_process/physical_tree.png b/Linkis-Doc-master/en_US/Images/Architecture/Job_submission_preparation_and_execution_process/physical_tree.png
new file mode 100644
index 0000000..7998704
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Job_submission_preparation_and_execution_process/physical_tree.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Job_submission_preparation_and_execution_process/result_acquisition.png b/Linkis-Doc-master/en_US/Images/Architecture/Job_submission_preparation_and_execution_process/result_acquisition.png
new file mode 100644
index 0000000..c2dd9f3
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Job_submission_preparation_and_execution_process/result_acquisition.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Job_submission_preparation_and_execution_process/submission.png b/Linkis-Doc-master/en_US/Images/Architecture/Job_submission_preparation_and_execution_process/submission.png
new file mode 100644
index 0000000..f6bd9a9
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Job_submission_preparation_and_execution_process/submission.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/LabelManager/label_manager_builder.png b/Linkis-Doc-master/en_US/Images/Architecture/LabelManager/label_manager_builder.png
new file mode 100644
index 0000000..4896981
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/LabelManager/label_manager_builder.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/LabelManager/label_manager_global.png b/Linkis-Doc-master/en_US/Images/Architecture/LabelManager/label_manager_global.png
new file mode 100644
index 0000000..ca4151a
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/LabelManager/label_manager_global.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/LabelManager/label_manager_scorer.png b/Linkis-Doc-master/en_US/Images/Architecture/LabelManager/label_manager_scorer.png
new file mode 100644
index 0000000..7213b0b
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/LabelManager/label_manager_scorer.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Linkis0.X-NewEngine-architecture.png b/Linkis-Doc-master/en_US/Images/Architecture/Linkis0.X-NewEngine-architecture.png
new file mode 100644
index 0000000..57c83b3
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Linkis0.X-NewEngine-architecture.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Linkis0.X-services-list.png b/Linkis-Doc-master/en_US/Images/Architecture/Linkis0.X-services-list.png
new file mode 100644
index 0000000..c669abf
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Linkis0.X-services-list.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Linkis1.0-EngineConn-architecture.png b/Linkis-Doc-master/en_US/Images/Architecture/Linkis1.0-EngineConn-architecture.png
new file mode 100644
index 0000000..d95da89
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Linkis1.0-EngineConn-architecture.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Linkis1.0-NewEngine-architecture.png b/Linkis-Doc-master/en_US/Images/Architecture/Linkis1.0-NewEngine-architecture.png
new file mode 100644
index 0000000..b1d60bf
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Linkis1.0-NewEngine-architecture.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Linkis1.0-architecture.png b/Linkis-Doc-master/en_US/Images/Architecture/Linkis1.0-architecture.png
new file mode 100644
index 0000000..825672b
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Linkis1.0-architecture.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Linkis1.0-newEngine-initialization.png b/Linkis-Doc-master/en_US/Images/Architecture/Linkis1.0-newEngine-initialization.png
new file mode 100644
index 0000000..003b38e
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Linkis1.0-newEngine-initialization.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Linkis1.0-services-list.png b/Linkis-Doc-master/en_US/Images/Architecture/Linkis1.0-services-list.png
new file mode 100644
index 0000000..f768545
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Linkis1.0-services-list.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/PublicEnhencementArchitecture.png b/Linkis-Doc-master/en_US/Images/Architecture/PublicEnhencementArchitecture.png
new file mode 100644
index 0000000..bcf72a5
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/PublicEnhencementArchitecture.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-01.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-01.png
new file mode 100644
index 0000000..f61c49a
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-01.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-02.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-02.png
new file mode 100644
index 0000000..a2e1022
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-02.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-01.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-01.png
new file mode 100644
index 0000000..5f4272f
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-01.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-02.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-02.png
new file mode 100644
index 0000000..9bb177a
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-02.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-03.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-03.png
new file mode 100644
index 0000000..00d1f4a
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-03.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-04.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-04.png
new file mode 100644
index 0000000..439c8e2
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-04.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-05.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-05.png
new file mode 100644
index 0000000..081d514
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-05.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-01.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-01.png
new file mode 100644
index 0000000..e343579
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-01.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-02.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-02.png
new file mode 100644
index 0000000..012eb65
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-02.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-03.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-03.png
new file mode 100644
index 0000000..c3a43b9
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-03.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-01.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-01.png
new file mode 100644
index 0000000..719599a
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-01.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-02.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-02.png
new file mode 100644
index 0000000..2277a70
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-02.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-03.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-03.png
new file mode 100644
index 0000000..df58d96
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-03.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-04.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-04.png
new file mode 100644
index 0000000..1e13445
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-04.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-01.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-01.png
new file mode 100644
index 0000000..7e410fb
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-01.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-02.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-02.png
new file mode 100644
index 0000000..097b7f1
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-02.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-03.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-03.png
new file mode 100644
index 0000000..7a4d462
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-03.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-persistence-01.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-persistence-01.png
new file mode 100644
index 0000000..fdd6623
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-persistence-01.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-01.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-01.png
new file mode 100644
index 0000000..b366462
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-01.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-02.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-02.png
new file mode 100644
index 0000000..2a1e403
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-02.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-03.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-03.png
new file mode 100644
index 0000000..32336eb
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-03.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-04.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-04.png
new file mode 100644
index 0000000..fdb60fc
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-04.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-05.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-05.png
new file mode 100644
index 0000000..45dcc43
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-05.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-06.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-06.png
new file mode 100644
index 0000000..2175704
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-06.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-07.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-07.png
new file mode 100644
index 0000000..9d357af
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-07.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-01.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-01.png
new file mode 100644
index 0000000..b08efd3
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-01.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-02.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-02.png
new file mode 100644
index 0000000..13ca37e
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-02.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-03.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-03.png
new file mode 100644
index 0000000..36a4d96
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-03.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-04.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-04.png
new file mode 100644
index 0000000..0a5ae1d
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-04.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/bml-02.png b/Linkis-Doc-master/en_US/Images/Architecture/bml-02.png
new file mode 100644
index 0000000..fed79f7
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/bml-02.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/linkis-engineConnPlugin-01.png b/Linkis-Doc-master/en_US/Images/Architecture/linkis-engineConnPlugin-01.png
new file mode 100644
index 0000000..2d2d134
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/linkis-engineConnPlugin-01.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/linkis-intro-01.png b/Linkis-Doc-master/en_US/Images/Architecture/linkis-intro-01.png
new file mode 100644
index 0000000..60b575d
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/linkis-intro-01.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/linkis-intro-02.png b/Linkis-Doc-master/en_US/Images/Architecture/linkis-intro-02.png
new file mode 100644
index 0000000..a31e681
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/linkis-intro-02.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/linkis-microservice-gov-01.png b/Linkis-Doc-master/en_US/Images/Architecture/linkis-microservice-gov-01.png
new file mode 100644
index 0000000..ac46424
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/linkis-microservice-gov-01.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/linkis-microservice-gov-03.png b/Linkis-Doc-master/en_US/Images/Architecture/linkis-microservice-gov-03.png
new file mode 100644
index 0000000..b53c8e1
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/linkis-microservice-gov-03.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/linkis-publicService-01.png b/Linkis-Doc-master/en_US/Images/Architecture/linkis-publicService-01.png
new file mode 100644
index 0000000..d503573
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/linkis-publicService-01.png differ
diff --git a/Linkis-Doc-master/en_US/Images/EngineUsage/hive-config.png b/Linkis-Doc-master/en_US/Images/EngineUsage/hive-config.png
new file mode 100644
index 0000000..9b3df01
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/EngineUsage/hive-config.png differ
diff --git a/Linkis-Doc-master/en_US/Images/EngineUsage/hive-run.png b/Linkis-Doc-master/en_US/Images/EngineUsage/hive-run.png
new file mode 100644
index 0000000..287b1ab
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/EngineUsage/hive-run.png differ
diff --git a/Linkis-Doc-master/en_US/Images/EngineUsage/jdbc-conf.png b/Linkis-Doc-master/en_US/Images/EngineUsage/jdbc-conf.png
new file mode 100644
index 0000000..39397d3
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/EngineUsage/jdbc-conf.png differ
diff --git a/Linkis-Doc-master/en_US/Images/EngineUsage/jdbc-run.png b/Linkis-Doc-master/en_US/Images/EngineUsage/jdbc-run.png
new file mode 100644
index 0000000..fe51598
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/EngineUsage/jdbc-run.png differ
diff --git a/Linkis-Doc-master/en_US/Images/EngineUsage/pyspakr-run.png b/Linkis-Doc-master/en_US/Images/EngineUsage/pyspakr-run.png
new file mode 100644
index 0000000..c80c85b
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/EngineUsage/pyspakr-run.png differ
diff --git a/Linkis-Doc-master/en_US/Images/EngineUsage/python-config.png b/Linkis-Doc-master/en_US/Images/EngineUsage/python-config.png
new file mode 100644
index 0000000..2bf1791
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/EngineUsage/python-config.png differ
diff --git a/Linkis-Doc-master/en_US/Images/EngineUsage/python-run.png b/Linkis-Doc-master/en_US/Images/EngineUsage/python-run.png
new file mode 100644
index 0000000..65467af
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/EngineUsage/python-run.png differ
diff --git a/Linkis-Doc-master/en_US/Images/EngineUsage/queue-set.png b/Linkis-Doc-master/en_US/Images/EngineUsage/queue-set.png
new file mode 100644
index 0000000..735a670
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/EngineUsage/queue-set.png differ
diff --git a/Linkis-Doc-master/en_US/Images/EngineUsage/scala-run.png b/Linkis-Doc-master/en_US/Images/EngineUsage/scala-run.png
new file mode 100644
index 0000000..7c01aad
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/EngineUsage/scala-run.png differ
diff --git a/Linkis-Doc-master/en_US/Images/EngineUsage/shell-run.png b/Linkis-Doc-master/en_US/Images/EngineUsage/shell-run.png
new file mode 100644
index 0000000..734bdb2
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/EngineUsage/shell-run.png differ
diff --git a/Linkis-Doc-master/en_US/Images/EngineUsage/spark-conf.png b/Linkis-Doc-master/en_US/Images/EngineUsage/spark-conf.png
new file mode 100644
index 0000000..353dbd6
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/EngineUsage/spark-conf.png differ
diff --git a/Linkis-Doc-master/en_US/Images/EngineUsage/sparksql-run.png b/Linkis-Doc-master/en_US/Images/EngineUsage/sparksql-run.png
new file mode 100644
index 0000000..f0b1d1b
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/EngineUsage/sparksql-run.png differ
diff --git a/Linkis-Doc-master/en_US/Images/EngineUsage/workflow.png b/Linkis-Doc-master/en_US/Images/EngineUsage/workflow.png
new file mode 100644
index 0000000..3a5919f
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/EngineUsage/workflow.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Linkis_1.0_architecture.png b/Linkis-Doc-master/en_US/Images/Linkis_1.0_architecture.png
new file mode 100644
index 0000000..9b6cc90
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Linkis_1.0_architecture.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/Q&A.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/Q&A.png
new file mode 100644
index 0000000..121d7f3
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/Q&A.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/code-fix-01.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/code-fix-01.png
new file mode 100644
index 0000000..27bdddb
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/code-fix-01.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/db-config-01.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/db-config-01.png
new file mode 100644
index 0000000..fa1f1c8
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/db-config-01.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/db-config-02.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/db-config-02.png
new file mode 100644
index 0000000..c2f8443
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/db-config-02.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-01.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-01.png
new file mode 100644
index 0000000..9834b3d
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-01.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-02.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-02.png
new file mode 100644
index 0000000..c7621b5
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-02.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-03.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-03.png
new file mode 100644
index 0000000..16788c3
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-03.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-04.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-04.png
new file mode 100644
index 0000000..cb944ee
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-04.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-05.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-05.png
new file mode 100644
index 0000000..2c5972c
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-05.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-06.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-06.png
new file mode 100644
index 0000000..a64cec6
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-06.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-07.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-07.png
new file mode 100644
index 0000000..935d5bc
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-07.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-08.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-08.png
new file mode 100644
index 0000000..d2a3328
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-08.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/hive-config-01.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/hive-config-01.png
new file mode 100644
index 0000000..6bd0edb
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/hive-config-01.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-01.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-01.png
new file mode 100644
index 0000000..01090d1
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-01.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-02.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-02.png
new file mode 100644
index 0000000..0f68f12
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-02.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-03.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-03.png
new file mode 100644
index 0000000..8fb4464
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-03.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-04.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-04.png
new file mode 100644
index 0000000..5635a20
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-04.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-05.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-05.png
new file mode 100644
index 0000000..c341a9d
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-05.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-06.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-06.png
new file mode 100644
index 0000000..b0624ef
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-06.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-07.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-07.png
new file mode 100644
index 0000000..402f0c9
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-07.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-08.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-08.png
new file mode 100644
index 0000000..27c1824
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-08.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-09.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-09.png
new file mode 100644
index 0000000..5b27b4b
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-09.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-10.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-10.png
new file mode 100644
index 0000000..7c361e7
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-10.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/page-show-01.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/page-show-01.png
new file mode 100644
index 0000000..d953cb6
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/page-show-01.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/page-show-02.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/page-show-02.png
new file mode 100644
index 0000000..af273bb
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/page-show-02.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/page-show-03.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/page-show-03.png
new file mode 100644
index 0000000..c36bb30
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/page-show-03.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/searching_keywords.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/searching_keywords.png
new file mode 100644
index 0000000..cada716
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/searching_keywords.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/shell-error-01.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/shell-error-01.png
new file mode 100644
index 0000000..910150e
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/shell-error-01.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/shell-error-02.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/shell-error-02.png
new file mode 100644
index 0000000..71d5e7e
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/shell-error-02.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/shell-error-03.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/shell-error-03.png
new file mode 100644
index 0000000..4bb9cfe
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/shell-error-03.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/shell-error-04.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/shell-error-04.png
new file mode 100644
index 0000000..c2df857
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/shell-error-04.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/shell-error-05.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/shell-error-05.png
new file mode 100644
index 0000000..3635584
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/shell-error-05.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-01.png b/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-01.png
new file mode 100644
index 0000000..9834b3d
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-01.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-02.png b/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-02.png
new file mode 100644
index 0000000..c7621b5
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-02.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-03.png b/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-03.png
new file mode 100644
index 0000000..16788c3
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-03.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-04.png b/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-04.png
new file mode 100644
index 0000000..cb944ee
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-04.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-05.png b/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-05.png
new file mode 100644
index 0000000..2c5972c
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-05.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-06.png b/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-06.png
new file mode 100644
index 0000000..a64cec6
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-06.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-07.png b/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-07.png
new file mode 100644
index 0000000..935d5bc
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-07.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-08.png b/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-08.png
new file mode 100644
index 0000000..d2a3328
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-08.png differ
diff --git a/Linkis-Doc-master/en_US/Images/deployment/Linkis1.0_combined_eureka.png b/Linkis-Doc-master/en_US/Images/deployment/Linkis1.0_combined_eureka.png
new file mode 100644
index 0000000..809dbee
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/deployment/Linkis1.0_combined_eureka.png differ
diff --git a/Linkis-Doc-master/en_US/Images/wedatasphere_contact_01.png b/Linkis-Doc-master/en_US/Images/wedatasphere_contact_01.png
new file mode 100644
index 0000000..5a3d80e
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/wedatasphere_contact_01.png differ
diff --git a/Linkis-Doc-master/en_US/Images/wedatasphere_stack_Linkis.png b/Linkis-Doc-master/en_US/Images/wedatasphere_stack_Linkis.png
new file mode 100644
index 0000000..36060b9
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/wedatasphere_stack_Linkis.png differ
diff --git a/Linkis-Doc-master/en_US/Tuning_and_Troubleshooting/Configuration.md b/Linkis-Doc-master/en_US/Tuning_and_Troubleshooting/Configuration.md
new file mode 100644
index 0000000..c4652ea
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Tuning_and_Troubleshooting/Configuration.md
@@ -0,0 +1,217 @@
+# Linkis1.0 Configurations
+
+> The configuration of Linkis1.0 is simplified compared with Linkis0.x. A public configuration file, linkis.properties, is provided in the conf directory so that common parameters no longer need to be configured in multiple microservices separately. This document lists the Linkis1.0 parameters module by module.
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Please note: this article only lists the Linkis configuration parameters that affect operating performance or depend on the environment. Parameters that users do not need to care about have been omitted; interested users can browse the source code.
+
+### 1. General configuration
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;General parameters can be set in the global linkis.properties: set them once and they take effect for every microservice.
+
+#### 1.1 Global configurations
+
+| Parameter name | Default value | Description |
+| ------------------------- | ------- | ------------------------------------------------------------ |
+| wds.linkis.encoding | utf-8 | Linkis default encoding format |
+| wds.linkis.date.pattern | yyyy-MM-dd'T'HH:mm:ssZ | Default date format |
+| wds.linkis.test.mode | false | Whether to enable debug mode; if set to true, all microservices support password-free login and every EngineConn opens a remote debugging port |
+| wds.linkis.test.user | None | The default login user for password-free login when wds.linkis.test.mode=true |
+| wds.linkis.home | /appcom/Install/LinkisInstall | Linkis installation directory; if not set, the value of LINKIS_HOME is used |
+| wds.linkis.httpclient.default.connect.timeOut | 50000 | Linkis HttpClient default connection timeout |
+
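+A minimal sketch of turning on debug mode in the shared conf/linkis.properties (the user name hadoop is only an illustration; do this in test environments only):
+
+```
+# enable password-free login for all microservices (test environments only)
+wds.linkis.test.mode=true
+# default account used for password-free login when test mode is on
+wds.linkis.test.user=hadoop
+```
+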
+#### 1.2 LDAP configurations
+
+| Parameter name | Default value | Description |
+| ------------------------- | ------- | ------------------------------------------------------------ |
+| wds.linkis.ldap.proxy.url | None | LDAP URL address |
+| wds.linkis.ldap.proxy.baseDN | None | LDAP baseDN address |
+| wds.linkis.ldap.proxy.userNameFormat | None | |
+
+#### 1.3 Hadoop configuration parameters
+
+| Parameter name | Default value | Description |
+| ------------------------- | ------- | ------------------------------------------------------------ |
+| wds.linkis.hadoop.root.user | hadoop | HDFS super user |
+| wds.linkis.filesystem.hdfs.root.path | None | User's HDFS default root path |
+| wds.linkis.keytab.enable | false | Whether to enable kerberos |
+| wds.linkis.keytab.file | /appcom/keytab | Kerberos keytab path, effective only when wds.linkis.keytab.enable=true |
+| wds.linkis.keytab.host.enabled | false | |
+| wds.linkis.keytab.host | 127.0.0.1 | |
+| hadoop.config.dir | None | If not configured, it will be read from the environment variable HADOOP_CONF_DIR |
+| wds.linkis.hadoop.external.conf.dir.prefix | /appcom/config/external-conf/hadoop | hadoop additional configuration |
+
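+A sketch of a kerberized setup, assuming the keytab file lives under /appcom/keytab and the Hadoop client configuration is at /appcom/config/hadoop-config:
+
+```
+# read the Hadoop client configuration from an explicit directory
+hadoop.config.dir=/appcom/config/hadoop-config
+# enable kerberos authentication and point Linkis at the keytab path
+wds.linkis.keytab.enable=true
+wds.linkis.keytab.file=/appcom/keytab
+```
+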
+#### 1.4 Linkis RPC configuration parameters
+
+| Parameter name | Default value | Description |
+| ------------------------- | ------- | ------------------------------------------------------------ |
+| wds.linkis.rpc.broadcast.thread.num | 10 | Linkis RPC broadcast thread number (**recommended default value**) |
+| wds.linkis.ms.rpc.sync.timeout | 60000 | Default processing timeout of the Linkis RPC Receiver |
+| wds.linkis.rpc.eureka.client.refresh.interval | 1s | Refresh interval of the Eureka client's microservice list (**recommended default value**) |
+| wds.linkis.rpc.eureka.client.refresh.wait.time.max | 1m | Maximum waiting time for a refresh (**recommended default value**) |
+| wds.linkis.rpc.receiver.asyn.consumer.thread.max | 10 | Maximum number of Receiver Consumer threads (**if there are many online users, it is recommended to increase this parameter appropriately**) |
+| wds.linkis.rpc.receiver.asyn.consumer.freeTime.max | 2m | Receiver Consumer maximum idle time |
+| wds.linkis.rpc.receiver.asyn.queue.size.max | 1000 | Maximum number of buffers in the receiver consumption queue (**if there are many online users, it is recommended to increase this parameter appropriately**) |
+| wds.linkis.rpc.sender.asyn.consumer.thread.max | 5 | Maximum number of Sender Consumer threads |
+| wds.linkis.rpc.sender.asyn.consumer.freeTime.max | 2m | Sender Consumer maximum idle time |
+| wds.linkis.rpc.sender.asyn.queue.size.max | 300 | Maximum number of buffers in the sender consumption queue |
+
+### 2. Computation Governance configuration parameters
+
+#### 2.1 Entrance configuration parameters
+
+| Parameter name | Default value | Description |
+| ------------------------- | ------- | ------------------------------------------------------------ |
+| wds.linkis.spark.engine.version | 2.4.3 | The default Spark version used when the user submits a script without specifying a version |
+| wds.linkis.hive.engine.version | 1.2.1 | The default Hive version used when the user submits a script without a specified version |
+| wds.linkis.python.engine.version | python2 | The default Python version used when the user submits a script without specifying a version |
+| wds.linkis.jdbc.engine.version | 4 | The default JDBC version used when the user submits the script without specifying the version |
+| wds.linkis.shell.engine.version | 1 | The default shell version used when the user submits a script without specifying a version |
+| wds.linkis.appconn.engine.version | v1 | The default AppConn version used when the user submits a script without a specified version |
+| wds.linkis.entrance.scheduler.maxParallelismUsers | 1000 | Maximum number of concurrent users supported by Entrance |
+| wds.linkis.entrance.job.persist.wait.max | 5m | Maximum time for Entrance to wait for JobHistory to persist a Job |
+| wds.linkis.entrance.config.log.path | None | If not configured, the value of wds.linkis.filesystem.hdfs.root.path is used by default |
+| wds.linkis.default.requestApplication.name | IDE | The default submission system when the submission system is not specified |
+| wds.linkis.default.runType | sql | The default script type when the script type is not specified |
+| wds.linkis.warn.log.exclude | org.apache,hive.ql,hive.metastore,com.netflix,com.webank.wedatasphere | Real-time WARN-level logs that are not output to the client by default |
+| wds.linkis.log.exclude | org.apache, hive.ql, hive.metastore, com.netflix, com.webank.wedatasphere, com.webank | Real-time INFO-level logs that are not output to the client by default |
+| wds.linkis.instance | 3 | User's default number of concurrent jobs per engine |
+| wds.linkis.max.ask.executor.time | 5m | Maximum time to apply to LinkisManager for an available EngineConn |
+| wds.linkis.hive.special.log.include | org.apache.hadoop.hive.ql.exec.Task | When pushing Hive logs to the client, which logs are not filtered by default |
+| wds.linkis.spark.special.log.include | com.webank.wedatasphere.linkis.engine.spark.utils.JobProgressUtil | When pushing Spark logs to the client, which logs are not filtered by default |
+| wds.linkis.entrance.shell.danger.check.enabled | false | Whether to check and block dangerous shell syntax |
+| wds.linkis.shell.danger.usage | rm,sh,find,kill,python,for,source,hdfs,hadoop,spark-sql,spark-submit,pyspark,spark-shell,hive,yarn | Default dangerous shell syntax |
+| wds.linkis.shell.white.usage | cd,ls | Shell whitelist syntax |
+| wds.linkis.sql.default.limit | 5000 | Default maximum number of result-set rows returned by SQL |
+
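+As an illustration, a hedged sketch of enabling the shell syntax check; adding echo to the whitelist is an example, not a default:
+
+```
+# block the dangerous shell commands listed in wds.linkis.shell.danger.usage
+wds.linkis.entrance.shell.danger.check.enabled=true
+# extend the whitelist beyond the default cd,ls
+wds.linkis.shell.white.usage=cd,ls,echo
+```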
+
+#### 2.2 EngineConn configuration parameters
+
+| Parameter name | Default value | Description |
+| ------------------------- | ------- | ------------------------------------------------------------ |
+| wds.linkis.engineconn.resultSet.default.store.path | hdfs:///tmp | Job result set default storage path |
+| wds.linkis.engine.resultSet.cache.max | 0k | If the result set is smaller than this size, EngineConn returns it to Entrance directly instead of writing it to disk |
+| wds.linkis.engine.default.limit | 5000 | |
+| wds.linkis.engine.lock.expire.time | 120000 | Maximum idle time of the engine lock, i.e. how long after Entrance acquires the lock it may go without submitting code to EngineConn before the lock is released |
+| wds.linkis.engineconn.ignore.words | org.apache.spark.deploy.yarn.Client | Logs that are ignored by default when the Engine pushes logs to the Entrance side |
+| wds.linkis.engineconn.pass.words | org.apache.hadoop.hive.ql.exec.Task | The log that must be pushed by default when the Engine pushes logs to the Entrance side |
+| wds.linkis.engineconn.heartbeat.time | 3m | Default heartbeat interval from EngineConn to LinkisManager |
+| wds.linkis.engineconn.max.free.time | 1h | EngineConn's maximum idle time |
+
+
+#### 2.3 EngineConnManager configuration parameters
+
+| Parameter name | Default value | Description |
+| ------------------------- | ------- | ------------------------------------------------------------ |
+| wds.linkis.ecm.memory.max | 80g | Maximum total memory the ECM can use to start EngineConns |
+| wds.linkis.ecm.cores.max | 50 | Maximum total number of CPU cores the ECM can use to start EngineConns |
+| wds.linkis.ecm.engineconn.instances.max | 50 | Maximum number of EngineConns that can be started; it is generally recommended to set this to the same value as wds.linkis.ecm.cores.max |
+| wds.linkis.ecm.protected.memory | 4g | ECM's protected memory; the memory the ECM uses to start EngineConns cannot exceed wds.linkis.ecm.memory.max - wds.linkis.ecm.protected.memory |
+| wds.linkis.ecm.protected.cores.max | 2 | Number of protected CPU cores of the ECM; same meaning as wds.linkis.ecm.protected.memory |
+| wds.linkis.ecm.protected.engine.instances | 2 | Number of protected instances of ECM |
+| wds.linkis.engineconn.wait.callback.pid | 3s | Waiting time for EngineConn to return pid |
+
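+To make the protected-memory rule concrete, a sketch with the default values: the ECM may hand out at most 80g - 4g = 76g of memory to EngineConns before refusing new starts.
+
+```
+# total memory budget for launching EngineConns
+wds.linkis.ecm.memory.max=80g
+# memory the ECM keeps for itself; usable budget = 80g - 4g = 76g
+wds.linkis.ecm.protected.memory=4g
+```
+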
+#### 2.4 LinkisManager configuration parameters
+
+| Parameter name | Default value | Description |
+| ------------------------- | ------- | ------------------------------------------------------------ |
+| wds.linkis.manager.am.engine.start.max.time | 10m | Maximum time for LinkisManager to start a new EngineConn |
+| wds.linkis.manager.am.engine.reuse.max.time | 5m | Maximum selection time for LinkisManager to reuse an existing EngineConn |
+| wds.linkis.manager.am.engine.reuse.count.limit | 10 | Maximum number of polling attempts for LinkisManager to reuse an existing EngineConn |
+| wds.linkis.multi.user.engine.types | jdbc,es,presto | Engine types for which the user is not part of the reuse rule when LinkisManager reuses an existing EngineConn |
+| wds.linkis.rm.instance | 10 | The default maximum number of instances per user per engine |
+| wds.linkis.rm.yarnqueue.cores.max | 150 | Maximum number of cores per user in each engine usage queue |
+| wds.linkis.rm.yarnqueue.memory.max | 450g | The maximum amount of memory per user in each engine's use queue |
+| wds.linkis.rm.yarnqueue.instance.max | 30 | The maximum number of applications launched by each user in the queue of each engine |
+
+### 3. Engine configuration parameters
+
+#### 3.1 JDBC engine configuration parameters
+
+| Parameter name | Default value | Description |
+| ------------------------- | ------- | ------------------------------------------------------------ |
+| wds.linkis.jdbc.default.limit | 5000 | Default maximum number of result-set rows returned |
+| wds.linkis.jdbc.support.dbs | mysql=>com.mysql.jdbc.Driver,postgresql=>org.postgresql.Driver,oracle=>oracle.jdbc.driver.OracleDriver,hive2=>org.apache.hive.jdbc.HiveDriver,presto=>com.facebook.presto.jdbc.PrestoDriver | Drivers supported by the JDBC engine |
+| wds.linkis.engineconn.jdbc.concurrent.limit | 100 | Maximum number of concurrent SQL executions |
+
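+The driver list uses comma-separated name=>driverClass pairs; a trimmed sketch (the driver classes are taken from the default value above):
+
+```
+# register only the drivers you actually need
+wds.linkis.jdbc.support.dbs=mysql=>com.mysql.jdbc.Driver,postgresql=>org.postgresql.Driver
+# cap the result-set rows returned by the JDBC engine
+wds.linkis.jdbc.default.limit=5000
+```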
+
+#### 3.2 Python engine configuration parameters
+
+| Parameter name | Default value | Description |
+| ------------------------- | ------- | ------------------------------------------------------------ |
+| pythonVersion | /appcom/Install/anaconda3/bin/python | Python command path |
+| python.path | None | Specify an additional path for Python, which only accepts shared storage paths |
+
+#### 3.3 Spark engine configuration parameters
+
+| Parameter name | Default value | Description |
+| ------------------------- | ------- | ------------------------------------------------------------ |
+| wds.linkis.engine.spark.language-repl.init.time | 30s | Maximum initialization time for Scala and Python command interpreters |
+| PYSPARK_DRIVER_PYTHON | python | Python command path |
+| wds.linkis.server.spark-submit | spark-submit | spark-submit command path |
+
+### 4. PublicEnhancements configuration parameters
+
+#### 4.1 BML configuration parameters
+
+| Parameter name | Default value | Description |
+| ------------------------- | ------- | ------------------------------------------------------------ |
+| wds.linkis.bml.dws.version | v1 | Version number requested by Linkis Restful |
+| wds.linkis.bml.auth.token.key | Validation-Code | Password-free token-key for BML request |
+| wds.linkis.bml.auth.token.value | BML-AUTH | Password-free token-value requested by BML |
+| wds.linkis.bml.hdfs.prefix | /tmp/linkis | The prefix file path of the BML file stored on hdfs |
+
+#### 4.2 Metadata configuration parameters
+
+| Parameter name | Default value | Description |
+| ------------------------- | ------- | ------------------------------------------------------------ |
+| hadoop.config.dir | /appcom/config/hadoop-config | If it does not exist, the value of the environment variable HADOOP_CONF_DIR is used by default |
+| hive.config.dir | /appcom/config/hive-config | If it does not exist, the value of the environment variable HIVE_CONF_DIR is used by default |
+| hive.meta.url | None | The URL of the HiveMetaStore database. If hive.config.dir is not configured, this value must be configured |
+| hive.meta.user | None | User of the HiveMetaStore database |
+| hive.meta.password | None | Password of the HiveMetaStore database |
+
+
+#### 4.3 JobHistory configuration parameters
+
+| Parameter name | Default value | Description |
+| ------------------------- | ------- | ------------------------------------------------------------ |
+| wds.linkis.jobhistory.admin | None | Default admin accounts; these users can view everyone's job execution history |
+
+
+#### 4.4 FileSystem configuration parameters
+
+| Parameter name | Default value | Description |
+| ------------------------- | ------- | ------------------------------------------------------------ |
+| wds.linkis.filesystem.root.path | file:///tmp/linkis/ | User's Linux local root directory |
+| wds.linkis.filesystem.hdfs.root.path | hdfs:///tmp/ | User's HDFS root directory |
+| wds.linkis.workspace.filesystem.hdfsuserrootpath.suffix | /linkis/ | First-level suffix after the user's HDFS root directory. The user's actual root directory is: ${hdfs.root.path}/${user}/${hdfsuserrootpath.suffix} |
+| wds.linkis.workspace.resultset.download.is.limit | true | Whether to limit the number of rows when the client downloads a result set |
+| wds.linkis.workspace.resultset.download.maxsize.csv | 5000 | Maximum number of rows when the result set is downloaded as a CSV file |
+| wds.linkis.workspace.resultset.download.maxsize.excel | 5000 | Maximum number of rows when the result set is downloaded as an Excel file |
+| wds.linkis.workspace.filesystem.get.timeout | 2000L | Maximum timeout for requests to the underlying file system (**if your HDFS or Linux machines perform poorly, it is recommended to increase this timeout appropriately**) |
+
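+A sketch of how the per-user HDFS root is composed; with the defaults below, a hypothetical user alice resolves to hdfs:///tmp/alice/linkis/:
+
+```
+# per-user root = ${hdfs.root.path}/${user}/${hdfsuserrootpath.suffix}
+wds.linkis.filesystem.hdfs.root.path=hdfs:///tmp/
+wds.linkis.workspace.filesystem.hdfsuserrootpath.suffix=/linkis/
+```
+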
+#### 4.5 UDF configuration parameters
+
+| Parameter name | Default value | Description |
+| ------------------------- | ------- | ------------------------------------------------------------ |
+| wds.linkis.udf.share.path | /mnt/bdap/udf | Storage path of shared UDFs; it is recommended to set it to an HDFS path |
+
+### 5. MicroService configuration parameters
+
+#### 5.1 Gateway configuration parameters
+
+| Parameter name | Default value | Description |
+| ------------------------- | ------- | ------------------------------------------------------------ |
+| wds.linkis.gateway.conf.enable.proxy.user | false | Whether to enable proxy user mode; if enabled, the login user's request is proxied to the proxy user for execution |
+| wds.linkis.gateway.conf.proxy.user.config | proxy.properties | Storage file of proxy rules |
+| wds.linkis.gateway.conf.proxy.user.scan.interval | 600000 | Proxy file refresh interval |
+| wds.linkis.gateway.conf.enable.token.auth | false | Whether to enable the Token login mode, if enabled, allow access to Linkis in the form of tokens |
+| wds.linkis.gateway.conf.token.auth.config | token.properties | Token rule storage file |
+| wds.linkis.gateway.conf.token.auth.scan.interval | 600000 | Token file refresh interval |
+| wds.linkis.gateway.conf.url.pass.auth | /dws/ | URL prefixes that are passed through by default without login verification |
+| wds.linkis.gateway.conf.enable.sso | false | Whether to enable SSO user login mode |
+| wds.linkis.gateway.conf.sso.interceptor | None | If the SSO login mode is enabled, the user needs to implement SSOInterceptor to jump to the SSO login page |
+| wds.linkis.admin.user | hadoop | Administrator user list |
+| wds.linkis.login_encrypt.enable | false | Whether the password is transmitted with RSA encryption when the user logs in |
+| wds.linkis.enable.gateway.auth | false | Whether to enable the Gateway IP whitelist mechanism |
+| wds.linkis.gateway.auth.file | auth.txt | IP whitelist storage file |
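+
+A sketch of enabling token login; the token.properties format is version-dependent, so treat the rule file contents as an assumption to verify against your Linkis version:
+
+```
+# allow clients to authenticate with a token instead of a login session
+wds.linkis.gateway.conf.enable.token.auth=true
+# file that stores the token rules
+wds.linkis.gateway.conf.token.auth.config=token.properties
+```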
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Tuning_and_Troubleshooting/Q&A.md b/Linkis-Doc-master/en_US/Tuning_and_Troubleshooting/Q&A.md
new file mode 100644
index 0000000..c78f440
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Tuning_and_Troubleshooting/Q&A.md
@@ -0,0 +1,255 @@
+#### Q1. Linkis startup error: NoSuchMethodError: getSessionManager()Lorg/eclipse/jetty/server/SessionManager
+
+Specific stack:
+```
+Failed startup of context osbwejJettyEmbeddedWebAppContext@6c6919ff{application,/,[file:///tmp/jetty-docbase.9102.6375358926927953589/],UNAVAILABLE} java.lang.NoSuchMethodError: org.eclipse.jetty.server.session.SessionHandler.getSessionManager ()Lorg/eclipse/jetty/server/SessionManager;
+at org.eclipse.jetty.servlet.ServletContextHandler\$Context.getSessionCookieConfig(ServletContextHandler.java:1415) ~[jetty-servlet-9.3.20.v20170531.jar:9.3.20.v20170531]
+```
+
+Solution: Upgrade the jetty-servlet and jetty-security versions from 9.3.20 to 9.4.20.
+
+#### Q2. When starting the microservice linkis-ps-cs, the error "DebuggClassWriter overrides final method visit" is reported
+
+Specific exception stack:
+
+![linkis-exception-01.png](../Images/Tuning_and_Troubleshooting/linkis-exception-01.png)
+
+Solution: A jar package conflict; delete asm-5.0.4.jar.
+
+#### Q3. When starting the microservice linkis-ps-datasource, a NullPointerException is thrown in JdbcUtils.getDriverClassName
+
+Specific exception stack:
+
+![linkis-exception-02.png](../Images/Tuning_and_Troubleshooting/linkis-exception-02.png)
+
+
+Solution: This is caused by a linkis-datasource configuration problem; modify the three parameters beginning with hive.meta in linkis.properties:
+
+![hive-config-01.png](../Images/Tuning_and_Troubleshooting/hive-config-01.png)
+
+
+#### Q4. When starting the microservice linkis-ps-datasource, the following ClassNotFoundException: HttpClient is reported:
+
+Specific exception stack:
+
+![linkis-exception-03.png](../Images/Tuning_and_Troubleshooting/linkis-exception-03.png)
+
+Solution: The linkis-metadata-dev-1.0.0.jar compiled in 1.0 has a problem; it needs to be recompiled and repackaged.
+
+#### Q5. Clicking a database in Scriptis returns no data; the phenomenon is as follows:
+
+![page-show-01.png](../Images/Tuning_and_Troubleshooting/page-show-01.png)
+
+Solution: Hive is not authorized to the hadoop user. The authorization data is as follows:
+
+![db-config-01.png](../Images/Tuning_and_Troubleshooting/db-config-01.png)
+
+#### Q6. When a shell engine job is scheduled, the page reports "Insufficient resource, requesting available engine timeout", and the EngineConnManager linkis.out log reports the following error:
+
+![linkis-exception-04.png](../Images/Tuning_and_Troubleshooting/linkis-exception-04.png)
+
+Solution: Hadoop did not create /appcom/tmp/hadoop/workDir. Create it in advance as the root user, then grant the hadoop user access.
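+
+A sketch of the fix (the owning group is an assumption; match your Hadoop user's group):
+
+```
+mkdir -p /appcom/tmp/hadoop/workDir
+chown -R hadoop:hadoop /appcom/tmp/hadoop/workDir
+```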
+
+#### Q7. When the shell engine is scheduled for execution, the engine execution directory reports the following error: /bin/java: No such file or directory:
+
+![shell-error-01.png](../Images/Tuning_and_Troubleshooting/shell-error-01.png)
+
+Solution: The local Java environment variables are not set correctly; create a symbolic link for the java command.
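+
+A sketch, assuming JAVA_HOME points at your JDK:
+
+```
+# expose the JVM's java binary at the path the engine expects
+ln -s "$JAVA_HOME/bin/java" /bin/java
+```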
+
+#### Q8. When the hive engine is scheduled, the following error is reported: EngineConnPluginNotFoundException, errorCode: 70063
+
+![linkis-exception-05.png](../Images/Tuning_and_Troubleshooting/linkis-exception-05.png)
+
+Solution: The version of the corresponding engine was not modified during installation, so the engine type inserted into the DB is the default version, which does not match the compiled version. Specific modification steps: cd /appcom/Install/dss-linkis/linkis/lib/linkis-engineconn-plugins/, rename the v2.1.1 directory under the dist directory to v1.2.1, and rename the 2.1.1 subdirectory under the plugin directory to 1.2.1, the default versio [...]
+
+#### Q9. After the linkis microservice is started, the following error is reported: Load balancer does not have available server for client:
+
+![page-show-02.png](../Images/Tuning_and_Troubleshooting/page-show-02.png)
+
+Solution: This is because the linkis microservice has just started and the registration has not been completed. Wait for 1~2 minutes and try again.
+
+#### Q10. When the hive engine is scheduled for execution, the following error is reported: operation failed: NullPointerException:
+
+![linkis-exception-06.png](../Images/Tuning_and_Troubleshooting/linkis-exception-06.png)
+
+
+Solution: The server lacks the environment variable; add export HIVE_CONF_DIR=/etc/hive/conf to /etc/profile.
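+
+For example (run as root, then re-source the profile or log in again):
+
+```
+echo 'export HIVE_CONF_DIR=/etc/hive/conf' >> /etc/profile
+source /etc/profile
+```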
+
+#### Q11. When the hive engine is scheduled, the EngineConnManager error log reports "method did not exist: SessionHandler":
+
+![linkis-exception-07.png](../Images/Tuning_and_Troubleshooting/linkis-exception-07.png)
+
+Solution: Under the hive engine lib the jetty jars conflict; replace jetty-security and jetty-server with version 9.4.20.
+
+#### Q12. After the hive engine restarts, the jetty 9.4 jars are always replaced by 9.3
+
+Solution: When an engine instance is generated, its jar packages are cached. First delete the hive-related records from the table linkis_engine_conn_plugin_bml_resources, then delete 1.2.1.zip under the directory /appcom/Install/dss-linkis/linkis/lib/linkis-engineconn-plugins/hive/dist, and finally restart the engineplugin service; the jars under lib will then be updated successfully.
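+
+A sketch of these steps; the SQL WHERE clause, the DB credentials, and the restart procedure are assumptions to adapt to your schema and deployment:
+
+```
+# 1. drop the cached hive plugin records (check the actual column names first)
+mysql -u linkis -p linkis -e \
+  "DELETE FROM linkis_engine_conn_plugin_bml_resources WHERE engine_conn_type = 'hive';"
+# 2. remove the cached zip so it is re-uploaded on restart
+rm /appcom/Install/dss-linkis/linkis/lib/linkis-engineconn-plugins/hive/dist/1.2.1.zip
+# 3. restart the engineplugin service (use your deployment's restart script)
+```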
+
+#### Q13. When the hive engine is executed, the following error is reported: Lcom/google/common/collect/UnmodifiableIterator:
+
+```
+2021-03-16 13:32:23.304 ERROR [pool-2-thread-1] com.webank.wedatasphere.linkis.engineplugin.hive.executor.HiveEngineConnExecutor 140 run-query failed, reason: java.lang.IllegalAccessError: tried to access method com.google.common.collect.Iterators.emptyIterator() Lcom/google/common/collect/UnmodifiableIterator; from class org.apache.hadoop.hive.ql.exec.FetchOperator
+at org.apache.hadoop.hive.ql.exec.FetchOperator.<init>(FetchOperator.java:108) ~[hive-exec-2.1.1-cdh6.1.0.jar:2.1.1-cdh6.1.0]
+at org.apache.hadoop.hive.ql.exec.FetchTask.initialize(FetchTask.java:86) ~[hive-exec-2.1.1-cdh6.1.0.jar:2.1.1-cdh6.1.0]
+at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:629) ~[hive-exec-2.1.1-cdh6.1.0.jar:2.1.1-cdh6.1.0]
+at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1414) ~[hive-exec-2.1.1-cdh6.1.0.jar:2.1.1-cdh6.1.0]
+at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1543) ~[hive-exec-2.1.1-cdh6.1.0.jar:2.1.1-cdh6.1.0]
+at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1332) ~[hive-exec-2.1.1-cdh6.1.0.jar:2.1.1-cdh6.1.0]
+at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1321) ~[hive-exec-2.1.1-cdh6.1.0.jar:2.1.1-cdh6.1.0]
+at com.webank.wedatasphere.linkis.engineplugin.hive.executor.HiveEngineConnExecutor$$anon$1.run(HiveEngineConnExecutor.scala:152) [linkis-engineplugin-hive-dev-1.0.0.jar:?]
+at com.webank.wedatasphere.linkis.engineplugin.hive.executor.HiveEngineConnExecutor$$anon$1.run(HiveEngineConnExecutor.scala:126) [linkis-engineplugin-hive-dev-1.0.0.jar:?]
+```
+
+Solution: A guava package conflict; remove guava-25.1-jre.jar under hive/dist/v1.2.1/lib.
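+
+For example:
+
+```
+# remove the conflicting guava jar from the hive engine plugin's lib
+rm /appcom/Install/dss-linkis/linkis/lib/linkis-engineconn-plugins/hive/dist/v1.2.1/lib/guava-25.1-jre.jar
+```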
+
+#### Q14. When the hive engine is executed, the error is reported as follows: TaskExecutionServiceImpl 59 error-org/apache/curator/connection/ConnectionHandlingPolicy:
+
+```
+2021-03-16 16:17:40.649 INFO [pool-2-thread-1] com.webank.wedatasphere.linkis.engineplugin.hive.executor.HiveEngineConnExecutor 42 info-com.webank.wedatasphere.linkis.engineplugin.hive. executor.HiveEngineConnExecutor@36a7c96f change status Busy => Idle.
+2021-03-16 16:17:40.661 ERROR [pool-2-thread-1] com.webank.wedatasphere.linkis.engineconn.computation.executor.service.TaskExecutionServiceImpl 59 error-org/apache/curator/connection/ConnectionHandlingPolicy java .lang.NoClassDefFoundError: org/apache/curator/connection/ConnectionHandlingPolicy at org.apache.curator.framework.CuratorFrameworkFactory.builder(CuratorFrameworkFactory.java:78) ~[curator-framework-4.0.1.jar:4.0.1]
+at org.apache.hadoop.hive.ql.lockmgr.zookeeper.CuratorFrameworkSingleton.getInstance(CuratorFrameworkSingleton.java:59) ~[hive-exec-2.1.1-cdh6.1.0.jar:2.1.1-cdh6.1.0]
+at org.apache.hadoop.hive.ql.lockmgr.zookeeper.ZooKeeperHiveLockManager.setContext(ZooKeeperHiveLockManager.java:98) ~[hive-exec-2.1.1-cdh6.1.0.jar:2.1.1-cdh6.1.0]
+at org.apache.hadoop.hive.ql.lockmgr.DummyTxnManager.getLockManager(DummyTxnManager.java:87) ~[hive-exec-2.1.1-cdh6.1.0.jar:2.1.1-cdh6.1.0]
+at org.apache.hadoop.hive.ql.lockmgr.DummyTxnManager.acquireLocks(DummyTxnManager.java:121) ~[hive-exec-2.1.1-cdh6.1.0.jar:2.1.1-cdh6.1.0]
+at org.apache.hadoop.hive.ql.Driver.acquireLocksAndOpenTxn(Driver.java:1237) ~[hive-exec-2.1.1-cdh6.1.0.jar:2.1.1-cdh6.1.0]
+at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1607) ~[hive-exec-2.1.1-cdh6.1.0.jar:2.1.1-cdh6.1.0]
+at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1332) ~[hive-exec-2.1.1-cdh6.1.0.jar:2.1.1-cdh6.1.0]
+at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1321) ~[hive-exec-2.1.1-cdh6.1.0.jar:2.1.1-cdh6.1.0]
+at com.webank.wedatasphere.linkis.engineplugin.hive.executor.HiveEngineConnExecutor$$anon$1.run(HiveEngineConnExecutor.scala:152) ~[linkis-engineplugin-hive-dev-1.0.0.jar:?]
+at com.webank.wedatasphere.linkis.engineplugin.hive.executor.HiveEngineConnExecutor$$anon$1.run(HiveEngineConnExecutor.scala:126) ~[linkis-engineplugin-hive-dev-1.0.0.jar:?]
+at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_181]
+at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_181]
+at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875) ~[hadoop-common-3.0.0-cdh6.3.2.jar:?]
+at com.webank.wedatasphere.linkis.engineplugin.hive.executor.HiveEngineConnExecutor.executeLine(HiveEngineConnExecutor.scala:126) ~[linkis-engineplugin-hive-dev-1.0.0.jar:?]
+at com.webank.wedatasphere.linkis.engineconn.computation.executor.execute.ComputationExecutor$$anonfun$execute$1$$anonfun$apply$9$$anonfun$apply$10.apply(ComputationExecutor.scala:145) ~[linkis-computation -engineconn-dev-1.0.0.jar:?]
+at com.webank.wedatasphere.linkis.engineconn.computation.executor.execute.ComputationExecutor$$anonfun$execute$1$$anonfun$apply$9$$anonfun$apply$10.apply(ComputationExecutor.scala:144) ~[linkis-computation -engineconn-dev-1.0.0.jar:?]
+at com.webank.wedatasphere.linkis.common.utils.Utils$.tryCatch(Utils.scala:48) ~[linkis-common-dev-1.0.0.jar:?]
+at com.webank.wedatasphere.linkis.engineconn.computation.executor.execute.ComputationExecutor$$anonfun$execute$1$$anonfun$apply$9.apply(ComputationExecutor.scala:146) ~[linkis-computation-engineconn-dev-1.0 .0.jar:?]
+at com.webank.wedatasphere.linkis.engineconn.computation.executor.execute.ComputationExecutor$$anonfun$execute$1$$anonfun$apply$9.apply(ComputationExecutor.scala:140) ~[linkis-computation-engineconn-dev-1.0 .0.jar:?]
+at scala.collection.immutable.Range.foreach(Range.scala:160) ~[scala-library-2.11.8.jar:?]
+at com.webank.wedatasphere.linkis.engineconn.computation.executor.execute.ComputationExecutor$$anonfun$execute$1.apply(ComputationExecutor.scala:139) ~[linkis-computation-engineconn-dev-1.0.0.jar:?]
+at com.webank.wedatasphere.linkis.engineconn.computation.executor.execute.ComputationExecutor$$anonfun$execute$1.apply(ComputationExecutor.scala:114) ~[linkis-computation-engineconn-dev-1.0.0.jar:?]
+at com.webank.wedatasphere.linkis.common.utils.Utils$.tryFinally(Utils.scala:62) ~[linkis-common-dev-1.0.0.jar:?]
+at com.webank.wedatasphere.linkis.engineconn.acessible.executor.entity.AccessibleExecutor.ensureIdle(AccessibleExecutor.scala:42) ~[linkis-accessible-executor-dev-1.0.0.jar:?]
+at com.webank.wedatasphere.linkis.engineconn.acessible.executor.entity.AccessibleExecutor.ensureIdle(AccessibleExecutor.scala:36) ~[linkis-accessible-executor-dev-1.0.0.jar:?]
+at com.webank.wedatasphere.linkis.engineconn.computation.executor.execute.ComputationExecutor.ensureOp(ComputationExecutor.scala:103) ~[linkis-computation-engineconn-dev-1.0.0.jar:?]
+at com.webank.wedatasphere.linkis.engineconn.computation.executor.execute.ComputationExecutor.execute(ComputationExecutor.scala:114) ~[linkis-computation-engineconn-dev-1.0.0.jar:?]
+at com.webank.wedatasphere.linkis.engineconn.computation.executor.service.TaskExecutionServiceImpl$$anon$1$$anonfun$run$1.apply$mcV$sp(TaskExecutionServiceImpl.scala:139) [linkis-computation-engineconn-dev-1.0.0.jar:?]
+at com.webank.wedatasphere.linkis.engineconn.computation.executor.service.TaskExecutionServiceImpl$$anon$1$$anonfun$run$1.apply(TaskExecutionServiceImpl.scala:138) [linkis-computation-engineconn-dev-1.0.0.jar:?]
+at com.webank.wedatasphere.linkis.engineconn.computation.executor.service.TaskExecutionServiceImpl$$anon$1$$anonfun$run$1.apply(TaskExecutionServiceImpl.scala:138) [linkis-computation-engineconn-dev-1.0.0.jar:?]
+at com.webank.wedatasphere.linkis.common.utils.Utils$.tryCatch(Utils.scala:48) [linkis-common-dev-1.0.0.jar:?]
+at com.webank.wedatasphere.linkis.common.utils.Utils$.tryAndWarn(Utils.scala:74) [linkis-common-dev-1.0.0.jar:?]
+at com.webank.wedatasphere.linkis.engineconn.computation.executor.service.TaskExecutionServiceImpl$$anon$1.run(TaskExecutionServiceImpl.scala:138) [linkis-computation-engineconn-dev-1.0.0.jar:?]
+at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_181]
+at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_181]
+at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_181]
+at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_181]
+at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
+Caused by: java.lang.ClassNotFoundException: org.apache.curator.connection.ConnectionHandlingPolicy
+at java.net.URLClassLoader.findClass(URLClassLoader.java:381) ~[?:1.8.0_181]
+at java.lang.ClassLoader.loadClass(ClassLoader.java:424) ~[?:1.8.0_181]
+at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349) ~[?:1.8.0_181]
+at java.lang.ClassLoader.loadClass(ClassLoader.java:357) ~[?:1.8.0_181]
+... 39 more
+```
+
+Solution: There is a correspondence between Curator versions and zookeeper versions: Curator2.X supports Zookeeper3.4.X. So if you are currently running Zookeeper3.4.X, you should still use Curator2.X, for example 2.7.0; a jar-swap sketch follows. Reference link: https://blog.csdn.net/muyingmiao/article/details/100183768
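+
+A minimal sketch of the downgrade, assuming the Curator jars sit in the Hive engine plugin's lib directory as suggested by the stack trace above (all paths and jar names are illustrative; match them to your deployment):
+
+```shell
+# Locate the bundled Curator 4.x jars (path is illustrative)
+find /appcom/Install/linkis -name "curator-*4.0.1.jar"
+# Remove them and drop in the Curator 2.7.0 equivalents, e.g.:
+# rm curator-framework-4.0.1.jar && cp /path/to/curator-framework-2.7.0.jar .
+# Restart the engineplugin service afterwards so the new classpath takes effect.
+```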
+
+#### Q15. When the python engine is scheduled, the following error is reported: Python process is not alive:
+
+![linkis-exception-08.png](../Images/Tuning_and_Troubleshooting/linkis-exception-08.png)
+
+Solution: The server has the anaconda3 package manager installed. Debugging python revealed two problems: (1) the pandas and matplotlib modules are missing and need to be installed manually; (2) the new version of the python engine depends on a higher python version, so first install python3, then create a symbolic link (as shown in the figure below and sketched after it) and restart the engineplugin service.
+
+![shell-error-02.png](../Images/Tuning_and_Troubleshooting/shell-error-02.png)
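+
+A minimal sketch of the symbolic link step, assuming python3 was installed to /usr/bin/python3 (paths are illustrative; match them to your server):
+
+```shell
+# Point the `python` command at python3, keeping a backup of the old link
+mv /usr/bin/python /usr/bin/python.bak
+ln -s /usr/bin/python3 /usr/bin/python
+python --version   # verify the switch before restarting the engineplugin service
+```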
+
+#### Q16. When the spark engine is executed, the following error NoClassDefFoundError: org/apache/hadoop/hive/ql/io/orc/OrcFile is reported:
+
+```
+2021-03-19 15:12:49.227 INFO [dag-scheduler-event-loop] org.apache.spark.scheduler.DAGScheduler 57 logInfo -ShuffleMapStage 5 (show at <console>:69) failed in 21.269 s due to Job aborted due to stage failure: Task 1 in stage 5.0 failed 4 times, most recent failure: Lost task 1.3 in stage 5.0 (TID 139, cdh03, executor 6): java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/io/orc/OrcFile
+at org.apache.spark.sql.hive.orc.OrcFileOperator$$anonfun$getFileReader$2.apply(OrcFileOperator.scala:75)
+at org.apache.spark.sql.hive.orc.OrcFileOperator$$anonfun$getFileReader$2.apply(OrcFileOperator.scala:73)
+at scala.collection.Iterator$$anon$11.next(Iterator.scala:410)
+at scala.collection.TraversableOnce$class.collectFirst(TraversableOnce.scala:145)
+at scala.collection.AbstractIterator.collectFirst(Iterator.scala:1334)
+at org.apache.spark.sql.hive.orc.OrcFileOperator$.getFileReader(OrcFileOperator.scala:90)
+at org.apache.spark.sql.hive.orc.OrcFileOperator$$anonfun$readSchema$2.apply(OrcFileOperator.scala:99)
+at org.apache.spark.sql.hive.orc.OrcFileOperator$$anonfun$readSchema$2.apply(OrcFileOperator.scala:99)
+at scala.collection.Iterator$$anon$11.next(Iterator.scala:410)
+at scala.collection.TraversableOnce$class.collectFirst(TraversableOnce.scala:145)
+at scala.collection.AbstractIterator.collectFirst(Iterator.scala:1334)
+at org.apache.spark.sql.hive.orc.OrcFileOperator$.readSchema(OrcFileOperator.scala:99)
+at org.apache.spark.sql.hive.orc.OrcFileFormat$$anonfun$buildReader$2.apply(OrcFileFormat.scala:160)
+at org.apache.spark.sql.hive.orc.OrcFileFormat$$anonfun$buildReader$2.apply(OrcFileFormat.scala:151)
+at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:148)
+at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:132)
+at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:126)
+at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:179)
+at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:103)
+at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
+at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
+at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$11$$anon$1.hasNext(WholeStageCodegenExec.scala:624)
+at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
+at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
+at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
+at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
+at org.apache.spark.scheduler.Task.run(Task.scala:121)
+at org.apache.spark.executor.Executor$TaskRunner$$anonfun$11.apply(Executor.scala:407)
+at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1408)
+at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:413)
+at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
+at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
+at java.lang.Thread.run(Thread.java:748)
+Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.io.orc.OrcFile
+at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
+at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
+at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
+at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
+... 33 more
+
+```
+
+Solution: On the cdh6.3.2 cluster the spark engine classpath only contains /opt/cloudera/parcels/CDH-6.3.2-1.cdh6.3.2.p0.1605554/lib/spark/jars; hive-exec-2.1.1-cdh6.1.0.jar needs to be added there (see the sketch below), then restart spark.
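+
+A minimal sketch, assuming the hive-exec jar is available under the Hive parcel of the cluster (verify the exact jar name and source path in your environment):
+
+```shell
+# Add the missing hive-exec jar to the Spark classpath, then restart Spark
+cp /opt/cloudera/parcels/CDH/lib/hive/lib/hive-exec-2.1.1-cdh6.1.0.jar \
+   /opt/cloudera/parcels/CDH-6.3.2-1.cdh6.3.2.p0.1605554/lib/spark/jars/
+```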
+
+#### Q17. When the spark engine starts, it reports queue default is not exists in YARN, the specific information is as follows:
+
+![linkis-exception-09.png](../Images/Tuning_and_Troubleshooting/linkis-exception-09.png)
+
+Solution: When the 1.0 linkis-resource-manager-dev-1.0.0.jar pulls queue information, there is a compatibility problem in parsing the json. After the Linkis developers optimized it, a new package was provided; the jar package path is /appcom/Install/dss-linkis/linkis/lib/linkis-computation-governance/linkis-cg-linkismanager/.
+
+#### Q18. When the spark engine starts, it reports "get the Yarn queue information excepiton" (the Yarn queue information could not be obtained) together with an abnormal http link
+
+Solution: The yarn address configuration has been migrated to the DB, so the following configuration needs to be added:
+ 
+![db-config-02.png](../Images/Tuning_and_Troubleshooting/db-config-02.png)
+
+#### Q19. When the spark engine is scheduled, the first execution succeeds, but executing again reports Spark application sc has already stopped, please restart it. The specific errors are as follows:
+
+![page-show-03.png](../Images/Tuning_and_Troubleshooting/page-show-03.png)
+
+Solution: The background is that the engine architecture of linkis1.0 has been adjusted: after a spark session is created, the session is reused to avoid start-up overhead and improve execution efficiency. When we execute spark.scala for the first time, the script contains spark.stop(); this command closes the newly created session, and any later execution prompts that the session is closed and asks to restart it. Solution: first remove stop() from all scripts (a search sketch follows), [...]
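+
+A quick way to find the offending scripts, assuming they are collected under a workspace directory (the path is illustrative):
+
+```shell
+# List every script that still calls spark.stop() so the call can be removed
+grep -rn "spark.stop()" /appcom/workspace/scripts/
+```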
+
+#### Q20. When pythonspark is scheduled for execution, the error initialize python executor failed ClassNotFoundException org.slf4j.impl.StaticLoggerBinder is reported, as follows:
+
+![linkis-exception-10.png](../Images/Tuning_and_Troubleshooting/linkis-exception-10.png)
+
+Solution: The spark server lacks slf4j-log4j12-1.7.25.jar; copy that jar to /opt/cloudera/parcels/CDH-6.3.2-1.cdh6.3.2.p0.1605554/lib/spark/jars.
+
+#### Q21. When pythonspark is scheduled for execution, the error initialize python executor failed, submit-version error is reported, as follows:
+
+![shell-error-03.png](../Images/Tuning_and_Troubleshooting/shell-error-03.png)
+
+Solution: The linkis1.0 pythonSpark engine has a bug in the code that obtains the spark version. The fix is as follows:
+
+![code-fix-01.png](../Images/Tuning_and_Troubleshooting/code-fix-01.png)
+
+#### Q22. When pythonspark is scheduled to execute, it reports TypeError: an integer is required (got type bytes) (also reproducible by running the engine start-up command separately), the details are as follows:
+
+![shell-error-04.png](../Images/Tuning_and_Troubleshooting/shell-error-04.png)
+
+Solution: The system spark and python versions are incompatible: python is 3.8, spark is 2.4.0-cdh6.3.2, and spark requires python <= 3.6. Downgrade python to 3.6, then comment out the following lines of /opt/cloudera/parcels/CDH/lib/spark/python/lib/pyspark.zip/pyspark/context.py:
+
+![shell-error-05.png](../Images/Tuning_and_Troubleshooting/shell-error-05.png)
+
+#### Q23. The spark engine is 2.4.0+cdh6.3.2; the python engine previously lacked pandas and matplotlib, so the local python was upgraded to 3.8, but spark does not support python3.8, only versions below 3.6
+
+Solution: Reinstall the python package manager with anaconda2, downgrade python to 2.7, and install the pandas and matplotlib modules; the python engine and spark engine can then be scheduled normally.
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Tuning_and_Troubleshooting/README.md b/Linkis-Doc-master/en_US/Tuning_and_Troubleshooting/README.md
new file mode 100644
index 0000000..a92dca4
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Tuning_and_Troubleshooting/README.md
@@ -0,0 +1,98 @@
+## Tuning and troubleshooting
+
+In the process of preparing a release, we try our best to find deployment and installation problems in advance and fix them. Because every deployment environment differs somewhat, we cannot always predict all problems and solutions in advance. However, thanks to the community, many of your problems will overlap with those of others. Perhaps the installation and deployment problems you have encountered have already been discovered and [...]
+
+### Ⅰ. How to locate the exception log
+
+If an interface request reports an error, we can locate the problematic microservice based on the response of the interface. Under normal circumstances, we can **locate it according to the URL specification**: URLs in the Linkis interface follow the format **/api/rest_j/v1/{applicationName}/.+**, so the microservice can be located through applicationName. Some applications are themselves microservices; in that case the application name is the same [...]
+
+| **ApplicationName** | **Microservice** |
+| -------------------- | -------------------- |
+| cg-linkismanager | cg-linkismanager |
+| cg-engineplugin | cg-engineplugin |
+| cg-engineconnmanager | cg-engineconnmanager |
+| cg-entrance | cg-entrance |
+| ps-bml | ps-bml |
+| ps-cs | ps-cs |
+| ps-datasource | ps-datasource |
+| configuration | ps-publicservice |
+| instance-label | ps-publicservice |
+| jobhistory | ps-publicservice |
+| variable | ps-publicservice |
+| udf | ps-publicservice |
+
+### Ⅱ. Community issue column search keywords
+
+On the homepage of the github community, the issue column retains some of the problems and solutions encountered by community users. It is very suitable for quickly finding a solution after hitting a problem: just search the error keywords in the issue filter.
+
+### Ⅲ. "Q\&A Question Summary"
+
+"Linkis 1.0 FAQ", this document contains a summary of common problems and solutions during the installation and deployment process.
+
+### Ⅳ. Locating system log
+
+Generally, errors can be divided into three stages: an error is reported when installing and executing install.sh, an error is reported when the microservice is started, and an error is reported when the engine is started.
+
+1. **An error occurred when executing install.sh**, usually in the following situations
+
+   1. Missing environment variables: For example, the java/python/Hadoop/hive/spark environments need to be configured for the standard version, and the corresponding verification is performed when the script installs. If you hit this kind of problem, the script prints clear prompts about the missing environment variables, such as the exception -bash: spark-submit: command not found, etc.
+
+   2. The system version does not match: Linkis currently supports most versions of Linux, but some system versions may have command incompatibilities; for example, poor yum compatibility on Ubuntu may cause yum-related errors during installation and deployment. In addition, it is recommended not to deploy linkis on Windows, since currently no script is fully compatible with the .bat command.
+
+   3. Missing configuration items: Two configuration files need to be modified in the linkis1.0 version, linkis-env.sh and db.sh
+   
+      The former contains the environment parameters that linkis needs to load during execution, and the latter contains the database information for the tables linkis itself needs to store. Under normal circumstances, if a configuration item is missing, the error message shows an exception related to the missing key. For example, when db.sh does not contain the database configuration, an unknown mysql server host '-P' exception appears, which is caused by the missing host; a sketch of a filled-in db.sh follows.
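+
+      A minimal sketch of the db.sh entries, assuming a local MySQL instance (all values are placeholders):
+
+      ```shell
+      MYSQL_HOST=127.0.0.1
+      MYSQL_PORT=3306
+      MYSQL_DB=linkis
+      MYSQL_USER=linkis
+      MYSQL_PASSWORD=******
+      ```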
+
+2. **Report error when starting microservice**
+
+    Linkis puts the log files of all microservices into the logs directory. The log directory levels are as follows:
+
+    ````
+    ├── linkis-computation-governance
+    │ ├── linkis-cg-engineconnmanager
+    │ ├── linkis-cg-engineplugin
+    │ ├── linkis-cg-entrance
+    │ └── linkis-cg-linkismanager
+    ├── linkis-public-enhancements
+    │ ├── linkis-ps-bml
+    │ ├── linkis-ps-cs
+    │ ├── linkis-ps-datasource
+    │ └── linkis-ps-publicservice
+    └── linkis-spring-cloud-services
+        ├── linkis-mg-eureka
+        └── linkis-mg-gateway
+    ````
+
+    It includes three microservice modules: computing governance, public enhancement, and microservice management. Each microservice contains three logs, linkis-gc.log, linkis.log, and linkis.out, corresponding to the service's GC log, service log, and service System.out log.
+    
+    Under normal circumstances, when an error occurs when starting a microservice, you can cd to the corresponding service in the log directory to view the related log to troubleshoot the problem. Generally, the most frequently occurring problems can also be divided into three categories:
+
+    1. **Port Occupation**: Since the default ports of Linkis microservices are mostly concentrated around 9000, check whether each microservice's port is already occupied by another service before starting; if it is, change the corresponding microservice port in the conf/linkis-env.sh file (see the sketch after this list).
+    
+    2. **Necessary configuration parameters are missing**: Some microservices must load certain user-defined parameters before they can start normally. For example, the linkis-cg-engineplugin microservice loads the configuration related to wds.linkis.engineconn.\* from conf/linkis.properties when it starts; if the user changed the Linkis path after installation and the configuration was not modified accordingly, an error will be reported when the linkis- [...]
+    
+    3. **System environment is not compatible**: When deploying and installing, users should follow the recommended system and application versions in the official documents as much as possible and install the necessary system plug-ins, such as expect, yum, etc. If an application version is not compatible, it may cause application-related errors; for example, SQL statement incompatibilities in mysql5.7 may cause errors in the linkis.ddl and linkis. [...]
+    
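+    A minimal sketch for checking port occupation before startup, assuming the default Gateway and Eureka ports (adjust the list to the ports in your linkis-env.sh):
+
+    ```shell
+    # Show which process, if any, is already listening on a Linkis port
+    netstat -tlnp | grep -E "9001|20303"
+    ```
+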
+3. **Report error during microservice execution period**
+
+    Errors during the execution of microservices are more complicated, and the situations encountered differ by environment, but the troubleshooting methods are basically the same. Starting from the corresponding microservice's error directory, we can roughly divide them into three situations:
+    
+    1. **Manually installed and deployed microservices report errors**: The logs of this type of microservice are unified under the log/ directory. After locating the microservice, enter the corresponding directory to view it.
+    
+    2. **Engine start failure**: insufficient resources or a failed engine request. When this type of error occurs, it is not necessarily due to insufficient resources, because the front end only grabs logs after the Spring project has started; errors from before the engine started cannot be fetched well. Three kinds of high-frequency problems were found during internal testing:
+    
+        a. **The engine cannot be created because there is no engine directory permission**: The log is printed to the linkis.out file under the cg-engineconnmanager microservice. You need to open that file to find the specific reason.
+        
+        b. **There is a dependency conflict in the engine lib package, or the server cannot start normally because of insufficient memory resources**: Since the engine directory has already been created, the log is printed to the stdout file under the engine; for the engine path, refer to (c).
+        
+        c. **Errors reported during engine execution**: Each started engine is a microservice that is dynamically loaded and started at runtime. If an error occurs when the engine starts, you need to find the engine's log in the corresponding startup user's directory (a directory sketch follows this list). The corresponding root path is the **ENGINECONN_ROOT_PATH** filled in **linkis-env** before installation. If you need to modify the path after installation, you need to modify wds.linkis.engineconn.roo [...]
+        
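+        A minimal sketch of locating a started engine's log, assuming a layout of one directory per user and per engine instance under ENGINECONN_ROOT_PATH (directory and user names are placeholders):
+
+        ```shell
+        # Drill down from the engine root path to the instance's stdout
+        cd ${ENGINECONN_ROOT_PATH}/hadoop/workDir
+        ls                              # pick the engine instance directory
+        tail -f <instance-dir>/logs/stdout
+        ```
+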
+### Ⅴ. Community user group consultation and communication
+
+For problems that cannot be located and resolved through the above process during installation and deployment, you can send the error messages to our community group. To help community partners and developers solve them efficiently, it is recommended that when you ask a question you describe the problem phenomenon, the related log information, and what you have already checked. If you think it may be an environmenta [...]
+
+### Ⅵ. locate the source code by remote debug
+
+Under normal circumstances, remote debugging of the source code is the most effective way to locate a problem, but compared with document review it requires a certain understanding of the source code structure. It is recommended that you check the [Linkis source code level detailed structure](https://github.com/WeBankFinTech/Linkis/wiki/Linkis%E6%BA%90%E7%A0%81%E5%B1%82%E7%BA%A7%E7%BB%93%E6%9E%84%E8%AF%A6%E8%A7%A3) in the Linkis WIKI before remote debugging. After having a certain degree [...]
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Tuning_and_Troubleshooting/Tuning.md b/Linkis-Doc-master/en_US/Tuning_and_Troubleshooting/Tuning.md
new file mode 100644
index 0000000..2b6b256
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Tuning_and_Troubleshooting/Tuning.md
@@ -0,0 +1,61 @@
+>The Linkis0.x version runs stably in WeBank's production environment and supports various businesses. Linkis1.0 is an optimized version of 0.x, and the related tuning logic has not changed, so this document introduces several Linkis deployment and tuning suggestions. Due to limited space, this article cannot cover all optimization scenarios; related tuning guides will be supplemented and updated. Of course, we also hope that community users will provide suggestions for Linkis [...]
+
+## 1. Overview
+
+This document introduces several tuning methods based on production experience, namely the choice of Jvm heap size for production deployment, the concurrency settings for task submission, and the resource application parameters for running tasks. The parameter values described in this document are not recommendations; users need to choose parameters according to their actual production environment.
+
+## 2. Jvm heap size tuning 
+
+When installing Linkis, you can find the following variables in linkis-env.sh:
+
+```shell
+SERVER_HEAP_SIZE="512M"
+```
+
+After setting this variable, it is added to the java startup parameters of each microservice during installation to control the Jvm startup heap size. Although both the xms and xmx parameters need to be set when java starts, they are usually set to the same value. In production, as the number of users increases, this parameter needs to be enlarged to meet the demand; of course, a larger heap requires a larger server configuration. Also, single-machine deployment [...]
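+
+For example, a minimal adjustment before running the installation (the value is illustrative; size it to your server's memory and user load):
+
+```shell
+SERVER_HEAP_SIZE="2G"
+```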
+
+## 3. Tuning the concurrency of task submission
+
+Some Linkis task concurrency parameters have default values. In most scenarios the default value meets the demand, but sometimes it does not, so it needs to be adjusted. This article introduces several parameters for adjusting task concurrency, to make it easier to optimize concurrent task submission in production.
+
+Since tasks are submitted by RPC, in the linkis-common/linkis-rpc module, you can configure the following parameters to increase the number of concurrent rpc:
+
+```shell
+wds.linkis.rpc.receiver.asyn.consumer.thread.max=400
+wds.linkis.rpc.receiver.asyn.queue.size.max=5000
+wds.linkis.rpc.sender.asyn.consumer.thread.max=100
+wds.linkis.rpc.sender.asyn.queue.size.max=2000
+```
+
+In the Linkis source code, we set a default value for the number of concurrent tasks, which meets the needs of most scenarios. However, when a large number of tasks are submitted concurrently in some scenarios, for example when Qualitis (another open source project of WeBank) is used for mass data verification, initCapacity and maxCapacity have not yet been made configurable in the current version and users need to modify them, by increasing the values of these two parameter [...]
+
+```scala
+  private val groupNameToGroups = new JMap[String, Group]
+  private val labelBuilderFactory = LabelBuilderFactoryContext.getLabelBuilderFactory
+
+  override def getOrCreateGroup(groupName: String): Group = {
+    if (!groupNameToGroups.containsKey(groupName)) synchronized {
+      val initCapacity = 100
+      val maxCapacity = 100
+      // other code...
+    }
+  }
+```
+
+## 4. Resource settings related to task runtime
+
+When submitting a task to run on Yarn, Yarn provides a configurable interface, and Linkis, as a highly scalable framework, can likewise be configured with resource settings.
+
+The related configuration of Spark and Hive are as follows:
+
+Part of the Spark configuration lives in linkis-engineconn-plugins/engineconn-plugins; you can adjust that configuration to change the runtime environment of tasks submitted to Yarn. Due to limited space, for more detail, such as the Hive and Yarn configuration, users should refer to the source code and the parameters documentation.
+
+```shell
+"spark.driver.memory" = 2 //单位为G
+"wds.linkis.driver.cores" = 1
+"spark.executor.memory" = 4 //单位为G
+"spark.executor.cores" = 2
+"spark.executor.instances" = 3
+"wds.linkis.rm.yarnqueue" = "default"
+```
+
diff --git a/Linkis-Doc-master/en_US/Upgrade_Documents/Linkis_Upgrade_from_0.x_to_1.0_guide.md b/Linkis-Doc-master/en_US/Upgrade_Documents/Linkis_Upgrade_from_0.x_to_1.0_guide.md
new file mode 100644
index 0000000..dc1b867
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Upgrade_Documents/Linkis_Upgrade_from_0.x_to_1.0_guide.md
@@ -0,0 +1,73 @@
+> This article briefly introduces the precautions for upgrading Linkis from 0.X to 1.0. Linkis 1.0 has adjusted several services with major changes; this article describes the points to watch when upgrading from 0.X to 1.X.
+
+## 1.Precautions
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**If you are using Linkis for the first time, you can ignore this chapter; if you are already a user of Linkis, it is recommended to read the following before installing or upgrading: [Brief description of the difference between Linkis1.0 and Linkis0.X](https://github.com/WeBankFinTech/Linkis/wiki/Linkis1.0%E4%B8%8ELinkis0.X%E7%9A%84%E5%8C%BA%E5%88%AB%E7%AE%80%E8%BF%B0)**.
+
+## 2. Service upgrade installation
+
+&nbsp;&nbsp;&nbsp;&nbsp;  Because linkis 1.0 basically upgraded all services, including service names, all services need to be reinstalled when upgrading from 0.X to 1.X.
+
+&nbsp;&nbsp;&nbsp;&nbsp;  If you need to keep 0.X data during the upgrade, you must select 1 to skip the table building statement (see the code below).
+
+&nbsp;&nbsp;&nbsp;&nbsp;  For the installation of Linkis1.0, please refer to [Quick Deployment Linkis1.0](../Deployment_Documents/Quick_Deploy_Linkis1.0.md)
+
+```
+Do you want to clear Linkis table information in the database?
+1: Do not execute table-building statements
+2: Dangerous! Clear all data and rebuild the tables
+other: exit
+
+Please input the choice: ## choice 1
+```
+## 3. Database upgrade
+
+&nbsp;&nbsp;&nbsp;&nbsp;  After the service is installed, the database structure needs to be modified, including table structure changes and new tables and data:
+
+### 3.1 Table structure modification part:
+
+&nbsp;&nbsp;&nbsp;&nbsp;  linkis_task: The submit_user and label_json fields are added to the table. The update statement is:
+
+```mysql-sql
+ALTER TABLE linkis_task ADD submit_user varchar(50) DEFAULT NULL COMMENT 'submitUser name';
+ALTER TABLE linkis_task ADD `label_json` varchar(200) DEFAULT NULL COMMENT 'label json';
+```
+
+### 3.2 New sql that needs to be executed:
+
+```mysql-sql
+cd db/module
+## Add the tables that the enginePlugin service depends on:
+source linkis_ecp.sql
+## Add a table that the public service-instanceLabel service depends on
+source linkis_instance_label.sql
+## Added tables that the linkis-manager service depends on
+source linkis_manager.sql
+```
+
+### 3.3 Publicservice-Configuration table modification
+
+&nbsp;&nbsp;&nbsp;&nbsp;  In order to support the full labeling capability of Linkis 1.X, all the data tables related to the configuration module have been upgraded to labeling, which is completely different from the 0.X Configuration table. It is necessary to re-execute the table creation statement and the initialization statement.
+
+&nbsp;&nbsp;&nbsp;&nbsp;  This means that **Linkis0.X users' existing engine configuration parameters can no longer be migrated to Linkis1.0** (it is recommended that users reconfigure the engine parameters once).
+
+&nbsp;&nbsp;&nbsp;&nbsp;  The execution of the table building statement is as follows:
+
+```mysql-sql
+source linkis_configuration.sql
+```
+
+&nbsp;&nbsp;&nbsp;&nbsp;  Because Linkis 1.0 supports multiple versions of the engine, it is necessary to modify the version of the engine when executing the initialization statement, as shown below:
+
+```mysql-sql
+vim linkis_configuration_dml.sql
+## Modify the default version of the corresponding engine
+SET @SPARK_LABEL="spark-2.4.3";
+SET @HIVE_LABEL="hive-1.2.1";
+## Execute the initialization statement
+source linkis_configuration_dml.sql
+```
+
+## 4. Installation and startup Linkis1.0
+
+&nbsp;&nbsp;&nbsp;&nbsp;  Start Linkis 1.0  to verify whether the service has been started normally and provide external services. For details, please refer to: [Quick Deployment Linkis1.0](../Deployment_Documents/Quick_Deploy_Linkis1.0.md)
diff --git a/Linkis-Doc-master/en_US/Upgrade_Documents/README.md b/Linkis-Doc-master/en_US/Upgrade_Documents/README.md
new file mode 100644
index 0000000..37786ab
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Upgrade_Documents/README.md
@@ -0,0 +1,5 @@
+The architecture of Linkis1.0 is very different from Linkis0.x, and there are some changes to the configuration of the deployment package and database tables. Before you install Linkis1.0, please read the following instructions carefully:
+
+1. If you are installing Linkis for the first time, or reinstalling Linkis, you do not need to pay attention to the Linkis Upgrade Guide.
+
+2. If you are upgrading from Linkis0.x to Linkis1.0, be sure to read the [Linkis Upgrade from 0.x to 1.0 guide](Linkis_Upgrade_from_0.x_to_1.0_guide.md) carefully.
diff --git a/Linkis-Doc-master/en_US/User_Manual/How_To_Use_Linkis.md b/Linkis-Doc-master/en_US/User_Manual/How_To_Use_Linkis.md
new file mode 100644
index 0000000..a6ee4d7
--- /dev/null
+++ b/Linkis-Doc-master/en_US/User_Manual/How_To_Use_Linkis.md
@@ -0,0 +1,29 @@
+# How to use Linkis?
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;In order to meet the needs of different usage scenarios, Linkis provides a variety of usage and access methods, which can be summarized into three categories: Client-side use, Scriptis-side use, and DataSphere Studio-side use. Scriptis and DataSphere Studio are the open source data analysis platforms of WeBank's big data platform team. Since these two projects are essentially compatible with Linkis, it is [...]
+
+## 1. Client side usage
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;If you need to connect to other applications on the basis of Linkis, you need to develop the interface provided by Linkis. Linkis provides a variety of client access interfaces. For detailed usage introduction, please refer to the following:
+- [**Restful API Usage**](./../API_Documentations/Linkis task submission and execution RestAPI document.md)
+- [**JDBC API Usage**](./../API_Documentations/Task Submit and Execute JDBC_API Document.md)
+- [**How to use Java SDK**](./../User_Manual/Linkis1.0 user use document.md)
+
+## 2. Scriptis uses Linkis
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;If you need to use Linkis to complete interactive online analysis and processing, and you do not need data analysis application tools such as workflow development, workflow scheduling, and data services, you can install [**Scriptis**](https://github.com/WeBankFinTech/Scriptis) by itself. For the detailed installation tutorial, please refer to its corresponding installation and deployment documents.
+
+### 2.1 Use Scriptis to execute scripts
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Currently Scriptis supports submitting a variety of task types to Linkis, including Spark SQL, Hive SQL, Scala, PythonSpark, etc. To meet the needs of data analysis, the left side of Scriptis provides views of the user workspace information, user database and table information, user-defined functions, and HDFS directories. It also supports uploading and downloading, result set exporting and other functions. Scriptis is very simple to u [...]
+![Scriptis uses Linkis](../Images/EngineUsage/sparksql-run.png)
+
+### 2.2 Scriptis Management Console
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Linkis provides an interface for resource configuration and management. If you want to configure and manage task resources, you can do so on the Scriptis management console interface, including queue settings, resource configuration, the number of engine instances, etc. Through the management console, you can easily configure the resources for submitting tasks to Linkis, making it more convenient and faster.
+![Scriptis uses Linkis](../Images/EngineUsage/queue-set.png)
+
+## 3. DataSphere Studio uses Linkis
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[**DataSphere Studio**](https://github.com/WeBankFinTech/DataSphereStudio), referred to as DSS, is WeBank's open source one-stop data analysis and processing platform; its interactive analysis module integrates Scriptis. Using DSS for interactive analysis is the same as using Scriptis. In addition to providing the basic functions of Scriptis, DSS provides and integrates richer and more powerful data analysis f [...]
+![DSS Run Workflow](../Images/EngineUsage/workflow.png)
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/User_Manual/Linkis1.0_User_Manual.md b/Linkis-Doc-master/en_US/User_Manual/Linkis1.0_User_Manual.md
new file mode 100644
index 0000000..b613f88
--- /dev/null
+++ b/Linkis-Doc-master/en_US/User_Manual/Linkis1.0_User_Manual.md
@@ -0,0 +1,400 @@
+# Linkis User Manual
+
+> Linkis provides a convenient JAVA and SCALA calling interface, which can be used simply by introducing the linkis-computation-client module. Since 1.0, a submission method with Label support has been added. The following introduces both the way compatible with 0.X and the way newly added in 1.0.
+
+## 1. Introduce dependent modules
+```
+<dependency>
+   <groupId>com.webank.wedatasphere.linkis</groupId>
+   <artifactId>linkis-computation-client</artifactId>
+   <version>${linkis.version}</version>
+</dependency>
+For example:
+<dependency>
+   <groupId>com.webank.wedatasphere.linkis</groupId>
+   <artifactId>linkis-computation-client</artifactId>
+   <version>1.0.0-RC1</version>
+</dependency>
+```
+
+## 2. Submission compatible with the 0.X Execute method
+
+### 2.1 Java test code
+
+Create the Java test class LinkisClientTest. Refer to the comments to understand the purposes of the interfaces:
+
+```java
+package com.webank.wedatasphere.linkis.client.test;
+
+import com.webank.wedatasphere.linkis.common.utils.Utils;
+import com.webank.wedatasphere.linkis.httpclient.dws.authentication.StaticAuthenticationStrategy;
+import com.webank.wedatasphere.linkis.httpclient.dws.authentication.TokenAuthenticationStrategy;
+import com.webank.wedatasphere.linkis.httpclient.dws.config.DWSClientConfig;
+import com.webank.wedatasphere.linkis.httpclient.dws.config.DWSClientConfigBuilder;
+import com.webank.wedatasphere.linkis.ujes.client.UJESClient;
+import com.webank.wedatasphere.linkis.ujes.client.UJESClientImpl;
+import com.webank.wedatasphere.linkis.ujes.client.request.JobExecuteAction;
+import com.webank.wedatasphere.linkis.ujes.client.request.ResultSetAction;
+import com.webank.wedatasphere.linkis.ujes.client.response.JobExecuteResult;
+import com.webank.wedatasphere.linkis.ujes.client.response.JobInfoResult;
+import com.webank.wedatasphere.linkis.ujes.client.response.JobProgressResult;
+import org.apache.commons.io.IOUtils;
+
+import java.util.HashMap;
+import java.util.Map;
+import java.util.concurrent.TimeUnit;
+
+public class LinkisClientTest {
+
+    public static void main(String[] args){
+
+        String user = "hadoop";
+        String executeCode = "show databases;";
+
+        // 1. Configure DWSClientBuilder, get a DWSClientConfig through DWSClientBuilder
+        DWSClientConfig clientConfig = ((DWSClientConfigBuilder) (DWSClientConfigBuilder.newBuilder()
+                .addServerUrl("http://${ip}:${port}")  //Specify ServerUrl, the address of the linkis gateway, such as http://{ip}:{port}
+                .connectionTimeout(30000)   //connectionTimeOut Client connection timeout
+                .discoveryEnabled(false).discoveryFrequency(1, TimeUnit.MINUTES)  //Whether to enable registration discovery, if enabled, the newly launched Gateway will be automatically discovered
+                .loadbalancerEnabled(true)  // Whether to enable load balancing, if registration discovery is not enabled, load balancing is meaningless
+                .maxConnectionSize(5)   //Specify the maximum number of connections, that is, the maximum number of concurrent
+                .retryEnabled(false).readTimeout(30000)   //Execution failed, whether to allow retry
+                .setAuthenticationStrategy(new StaticAuthenticationStrategy())   //AuthenticationStrategy Linkis login authentication method
+                .setAuthTokenKey("${username}").setAuthTokenValue("${password}")))  //Authentication key, generally the user name; authentication value, generally the password corresponding to the user name
+                .setDWSVersion("v1").build();  //The version of the linkis backend protocol, the current version is v1
+
+        // 2. Obtain a UJESClient through DWSClientConfig
+        UJESClient client = new UJESClientImpl(clientConfig);
+
+        try {
+            // 3. Start code execution
+            System.out.println("user : " + user + ", code : [" + executeCode + "]");
+            Map<String, Object> startupMap = new HashMap<String, Object>();
+            startupMap.put("wds.linkis.yarnqueue", "default"); // A variety of startup parameters can be stored in startupMap, see linkis management console configuration
+            JobExecuteResult jobExecuteResult = client.execute(JobExecuteAction.builder()
+                    .setCreator("linkisClient-Test")  //creator,the system name of the client requesting linkis, used for system-level isolation
+                    .addExecuteCode(executeCode)   //ExecutionCode Requested code
+                    .setEngineType((JobExecuteAction.EngineType) JobExecuteAction.EngineType$.MODULE$.HIVE()) // The execution engine type of the linkis that you want to request, such as Spark hive, etc.
+                    .setUser(user)   //User,Requesting users; used for user-level multi-tenant isolation
+                    .setStartupParams(startupMap)
+                    .build());
+            System.out.println("execId: " + jobExecuteResult.getExecID() + ", taskId: " + jobExecuteResult.taskID());
+
+            // 4. Get the execution status of the script
+            JobInfoResult jobInfoResult = client.getJobInfo(jobExecuteResult);
+            int sleepTimeMills = 1000;
+            while(!jobInfoResult.isCompleted()) {
+                // 5. Get the execution progress of the script
+                JobProgressResult progress = client.progress(jobExecuteResult);
+                Utils.sleepQuietly(sleepTimeMills);
+                jobInfoResult = client.getJobInfo(jobExecuteResult);
+            }
+
+            // 6. Get the job information of the script
+            JobInfoResult jobInfo = client.getJobInfo(jobExecuteResult);
+            // 7. Get a list of result sets (if the user submits multiple SQL at a time, multiple result sets will be generated)
+            String resultSet = jobInfo.getResultSetList(client)[0];
+            // 8. Get a specific result set through a result set information
+            Object fileContents = client.resultSet(ResultSetAction.builder().setPath(resultSet).setUser(jobExecuteResult.getUser()).build()).getFileContent();
+            System.out.println("fileContents: " + fileContents);
+
+        } catch (Exception e) {
+            e.printStackTrace();
+            IOUtils.closeQuietly(client);
+        }
+        IOUtils.closeQuietly(client);
+    }
+}
+```
+
+Run the above code to interact with Linkis.
+
+### 2.2 Scala test code
+
+```scala
+package com.webank.wedatasphere.linkis.client.test
+
+import java.util.concurrent.TimeUnit
+
+import com.webank.wedatasphere.linkis.common.utils.Utils
+import com.webank.wedatasphere.linkis.httpclient.dws.authentication.StaticAuthenticationStrategy
+import com.webank.wedatasphere.linkis.httpclient.dws.config.DWSClientConfigBuilder
+import com.webank.wedatasphere.linkis.ujes.client.UJESClient
+import com.webank.wedatasphere.linkis.ujes.client.request.JobExecuteAction.EngineType
+import com.webank.wedatasphere.linkis.ujes.client.request.{JobExecuteAction, ResultSetAction}
+import org.apache.commons.io.IOUtils
+
+object LinkisClientImplTest extends App {
+
+  var executeCode = "show databases;"
+  var user = "hadoop"
+
+  // 1. Configure DWSClientBuilder, get a DWSClientConfig through DWSClientBuilder
+  val clientConfig = DWSClientConfigBuilder.newBuilder()
+    .addServerUrl("http://${ip}:${port}") //Specify ServerUrl, the address of the Linkis server-side gateway, such as http://{ip}:{port}
+    .connectionTimeout(30000) //connectionTimeOut client connection timeout
+    .discoveryEnabled(false).discoveryFrequency(1, TimeUnit.MINUTES) //Whether to enable registration discovery, if enabled, the newly launched Gateway will be automatically discovered
+    .loadbalancerEnabled(true) // Whether to enable load balancing, if registration discovery is not enabled, load balancing is meaningless
+    .maxConnectionSize(5) //Specify the maximum number of connections, that is, the maximum number of concurrent
+    .retryEnabled(false).readTimeout(30000) //execution failed, whether to allow retry
+    .setAuthenticationStrategy(new StaticAuthenticationStrategy()) //AuthenticationStrategy Linkis authentication method
+    .setAuthTokenKey("${username}").setAuthTokenValue("${password}") //Authentication key, generally the user name; authentication value, generally the password corresponding to the user name
+    .setDWSVersion("v1").build() //Linkis backend protocol version, the current version is v1
+
+  // 2. Get a UJESClient through DWSClientConfig
+  val client = UJESClient(clientConfig)
+  
+  try {
+    // 3. Start code execution
+    println("user: "+ user + ", code: [" + executeCode + "]")
+    val startupMap = new java.util.HashMap[String, Any]()
+    startupMap.put("wds.linkis.yarnqueue", "default") //Startup parameter configuration
+    val jobExecuteResult = client.execute(JobExecuteAction.builder()
+      .setCreator("LinkisClient-Test") //creator, requesting the system name of the Linkis client, used for system-level isolation
+      .addExecuteCode(executeCode) //ExecutionCode The code to be executed
+      .setEngineType(EngineType.SPARK) // The execution engine type of Linkis that you want to request, such as Spark hive, etc.
+      .setStartupParams(startupMap)
+      .setUser(user).build()) //User, request user; used for user-level multi-tenant isolation
+    println("execId: "+ jobExecuteResult.getExecID + ", taskId:" + jobExecuteResult.taskID)
+    
+    // 4. Get the execution status of the script
+    var jobInfoResult = client.getJobInfo(jobExecuteResult)
+    val sleepTimeMills: Int = 1000
+    while (!jobInfoResult.isCompleted) {
+      // 5. Get the execution progress of the script
+      val progress = client.progress(jobExecuteResult)
+      val progressInfo = if (progress.getProgressInfo != null) progress.getProgressInfo.toList else List.empty
+      println("progress: "+ progress.getProgress + ", progressInfo:" + progressInfo)
+      Utils.sleepQuietly(sleepTimeMills)
+      jobInfoResult = client.getJobInfo(jobExecuteResult)
+    }
+    if (!jobInfoResult.isSucceed) {
+      println("Failed to execute job: "+ jobInfoResult.getMessage)
+      throw new Exception(jobInfoResult.getMessage)
+    }
+
+    // 6. Get the job information of the script
+    val jobInfo = client.getJobInfo(jobExecuteResult)
+    // 7. Get the list of result sets (if the user submits multiple SQL at a time, multiple result sets will be generated)
+    val resultSetList = jobInfoResult.getResultSetList(client)
+    println("All result set list:")
+    resultSetList.foreach(println)
+    val oneResultSet = jobInfo.getResultSetList(client).head
+    // 8. Get a specific result set through a result set information
+    val fileContents = client.resultSet(ResultSetAction.builder().setPath(oneResultSet).setUser(jobExecuteResult.getUser).build()).getFileContent
+    println("First fileContents: ")
+    println(fileContents)
+  } catch {
+    case e: Exception => {
+      e.printStackTrace()
+    }
+  }
+  IOUtils.closeQuietly(client)
+}
+```
+
+## 3. Linkis1.0 new submit interface with Label support
+
+Linkis1.0 adds the client.submit method, which is adapted to the new task execution interface of 1.0 and supports passing in Labels and other parameters.
+
+### 3.1 Java Test Class
+
+```java
+package com.webank.wedatasphere.linkis.client.test;
+
+import com.webank.wedatasphere.linkis.common.utils.Utils;
+import com.webank.wedatasphere.linkis.httpclient.dws.authentication.StaticAuthenticationStrategy;
+import com.webank.wedatasphere.linkis.httpclient.dws.config.DWSClientConfig;
+import com.webank.wedatasphere.linkis.httpclient.dws.config.DWSClientConfigBuilder;
+import com.webank.wedatasphere.linkis.manager.label.constant.LabelKeyConstant;
+import com.webank.wedatasphere.linkis.protocol.constants.TaskConstant;
+import com.webank.wedatasphere.linkis.ujes.client.UJESClient;
+import com.webank.wedatasphere.linkis.ujes.client.UJESClientImpl;
+import com.webank.wedatasphere.linkis.ujes.client.request.JobSubmitAction;
+import com.webank.wedatasphere.linkis.ujes.client.request.ResultSetAction;
+import com.webank.wedatasphere.linkis.ujes.client.response.JobExecuteResult;
+import com.webank.wedatasphere.linkis.ujes.client.response.JobInfoResult;
+import com.webank.wedatasphere.linkis.ujes.client.response.JobProgressResult;
+import org.apache.commons.io.IOUtils;
+
+import java.util.HashMap;
+import java.util.Map;
+import java.util.concurrent.TimeUnit;
+
+public class JavaClientTest {
+
+    public static void main(String[] args){
+
+        String user = "hadoop";
+        String executeCode = "show tables";
+
+        // 1. Configure ClientBuilder and get ClientConfig
+        DWSClientConfig clientConfig = ((DWSClientConfigBuilder) (DWSClientConfigBuilder.newBuilder()
+                .addServerUrl("http://${ip}:${port}") //Specify ServerUrl, the address of the linkis server-side gateway, such as http://{ip}:{port}
+                .connectionTimeout(30000) //connectionTimeOut client connection timeout
+                .discoveryEnabled(false).discoveryFrequency(1, TimeUnit.MINUTES) //Whether to enable registration discovery, if enabled, the newly launched Gateway will be automatically discovered
+                .loadbalancerEnabled(true) // Whether to enable load balancing, if registration discovery is not enabled, load balancing is meaningless
+                .maxConnectionSize(5) //Specify the maximum number of connections, that is, the maximum number of concurrent
+                .retryEnabled(false).readTimeout(30000) //execution failed, whether to allow retry
+                .setAuthenticationStrategy(new StaticAuthenticationStrategy()) //AuthenticationStrategy Linkis authentication method
+                .setAuthTokenKey("${username}").setAuthTokenValue("${password}"))) //Authentication key, generally the user name; authentication value, generally the password corresponding to the user name
+                .setDWSVersion("v1").build(); //Linkis background protocol version, the current version is v1
+
+        // 2. Get a UJESClient through DWSClientConfig
+        UJESClient client = new UJESClientImpl(clientConfig);
+
+        try {
+            // 3. Start code execution
+            System.out.println("user: "+ user + ", code: [" + executeCode + "]");
+            Map<String, Object> startupMap = new HashMap<String, Object>();
+            // A variety of startup parameters can be stored in startupMap, see linkis management console configuration
+            startupMap.put("wds.linkis.yarnqueue", "q02");
+            //Specify Label
+            Map<String, Object> labels = new HashMap<String, Object>();
+            //Add the label that this execution depends on: EngineTypeLabel/UserCreatorLabel/EngineRunTypeLabel
+            labels.put(LabelKeyConstant.ENGINE_TYPE_KEY, "hive-1.2.1");
+            labels.put(LabelKeyConstant.USER_CREATOR_TYPE_KEY, "hadoop-IDE");
+            labels.put(LabelKeyConstant.ENGINE_RUN_TYPE_KEY, "hql");
+            //Specify source
+            Map<String, Object> source = new HashMap<String, Object>();
+            source.put(TaskConstant.SCRIPTPATH, "LinkisClient-test");
+            JobExecuteResult jobExecuteResult = client.submit( JobSubmitAction.builder()
+                    .addExecuteCode(executeCode)
+                    .setStartupParams(startupMap)
+                    .setUser(user)//Job submit user
+                    .addExecuteUser(user)//The actual execution user
+                    .setLabels(labels)
+                    .setSource(source)
+                    .build()
+            );
+            System.out.println("execId: "+ jobExecuteResult.getExecID() + ", taskId:" + jobExecuteResult.taskID());
+
+            // 4. Get the execution status of the script
+            JobInfoResult jobInfoResult = client.getJobInfo(jobExecuteResult);
+            int sleepTimeMills = 1000;
+            while(!jobInfoResult.isCompleted()) {
+                // 5. Get the execution progress of the script
+                JobProgressResult progress = client.progress(jobExecuteResult);
+                Utils.sleepQuietly(sleepTimeMills);
+                jobInfoResult = client.getJobInfo(jobExecuteResult);
+            }
+
+            // 6. Get the job information of the script
+            JobInfoResult jobInfo = client.getJobInfo(jobExecuteResult);
+            // 7. Get the list of result sets (if the user submits multiple SQL at a time, multiple result sets will be generated)
+            String resultSet = jobInfo.getResultSetList(client)[0];
+            // 8. Get a specific result set through a result set information
+            Object fileContents = client.resultSet(ResultSetAction.builder().setPath(resultSet).setUser(jobExecuteResult.getUser()).build()).getFileContent();
+            System.out.println("fileContents: "+ fileContents);
+
+        } catch (Exception e) {
+            e.printStackTrace();
+            IOUtils.closeQuietly(client);
+        }
+        IOUtils.closeQuietly(client);
+    }
+}
+
+```
+
+### 3.2 Scala Test Class
+
+```scala
+package com.webank.wedatasphere.linkis.client.test
+
+import java.util
+import java.util.concurrent.TimeUnit
+
+import com.webank.wedatasphere.linkis.common.utils.Utils
+import com.webank.wedatasphere.linkis.httpclient.dws.authentication.StaticAuthenticationStrategy
+import com.webank.wedatasphere.linkis.httpclient.dws.config.DWSClientConfigBuilder
+import com.webank.wedatasphere.linkis.manager.label.constant.LabelKeyConstant
+import com.webank.wedatasphere.linkis.protocol.constants.TaskConstant
+import com.webank.wedatasphere.linkis.ujes.client.UJESClient
+import com.webank.wedatasphere.linkis.ujes.client.request.{JobSubmitAction, ResultSetAction}
+import org.apache.commons.io.IOUtils
+
+
+object ScalaClientTest {
+
+  def main(args: Array[String]): Unit = {
+    val executeCode = "show tables"
+    val user = "hadoop"
+
+    // 1. Configure DWSClientBuilder, get a DWSClientConfig through DWSClientBuilder
+    val clientConfig = DWSClientConfigBuilder.newBuilder()
+      .addServerUrl("http://${ip}:${port}") //Specify ServerUrl, the address of the Linkis server-side gateway, such as http://{ip}:{port}
+      .connectionTimeout(30000) //connectionTimeOut client connection timeout
+      .discoveryEnabled(false).discoveryFrequency(1, TimeUnit.MINUTES) //Whether to enable registration discovery, if enabled, the newly launched Gateway will be automatically discovered
+      .loadbalancerEnabled(true) // Whether to enable load balancing, if registration discovery is not enabled, load balancing is meaningless
+      .maxConnectionSize(5) //Specify the maximum number of connections, that is, the maximum number of concurrent
+      .retryEnabled(false).readTimeout(30000) //execution failed, whether to allow retry
+      .setAuthenticationStrategy(new StaticAuthenticationStrategy()) //AuthenticationStrategy Linkis authentication method
+      .setAuthTokenKey("${username}").setAuthTokenValue("${password}") //Authentication key, generally the user name; authentication value, generally the password corresponding to the user name
+      .setDWSVersion("v1").build() //Linkis backend protocol version, the current version is v1
+
+    // 2. Get a UJESClient through DWSClientConfig
+    val client = UJESClient(clientConfig)
+
+    try {
+      // 3. Start code execution
+      println("user: "+ user + ", code: [" + executeCode + "]")
+      val startupMap = new java.util.HashMap[String, Any]()
+      startupMap.put("wds.linkis.yarnqueue", "q02") //Startup parameter configuration
+      //Specify Label
+      val labels: util.Map[String, Any] = new util.HashMap[String, Any]
+      //Add the label that this execution depends on, such as engineLabel
+      labels.put(LabelKeyConstant.ENGINE_TYPE_KEY, "hive-1.2.1")
+      labels.put(LabelKeyConstant.USER_CREATOR_TYPE_KEY, "hadoop-IDE")
+      labels.put(LabelKeyConstant.ENGINE_RUN_TYPE_KEY, "hql")
+      //Specify source
+      val source: util.Map[String, Any] = new util.HashMap[String, Any]
+      source.put(TaskConstant.SCRIPTPATH, "LinkisClient-test")
+      val jobExecuteResult = client.submit(JobSubmitAction.builder
+          .addExecuteCode(executeCode)
+          .setStartupParams(startupMap)
+          .setUser(user) //Job submit user
+          .addExecuteUser(user) //The actual execution user
+          .setLabels(labels)
+          .setSource(source)
+          .build) //User, requesting user; used for user-level multi-tenant isolation
+      println("execId: "+ jobExecuteResult.getExecID + ", taskId:" + jobExecuteResult.taskID)
+
+      // 4. Get the execution status of the script
+      var jobInfoResult = client.getJobInfo(jobExecuteResult)
+      val sleepTimeMills: Int = 1000
+      while (!jobInfoResult.isCompleted) {
+        // 5. Get the execution progress of the script
+        val progress = client.progress(jobExecuteResult)
+        val progressInfo = if (progress.getProgressInfo != null) progress.getProgressInfo.toList else List.empty
+        println("progress: "+ progress.getProgress + ", progressInfo:" + progressInfo)
+        Utils.sleepQuietly(sleepTimeMills)
+        jobInfoResult = client.getJobInfo(jobExecuteResult)
+      }
+      if (!jobInfoResult.isSucceed) {
+        println("Failed to execute job: "+ jobInfoResult.getMessage)
+        throw new Exception(jobInfoResult.getMessage)
+      }
+
+      // 6. Get the job information of the script
+      val jobInfo = client.getJobInfo(jobExecuteResult)
+      // 7. Get the list of result sets (if the user submits multiple SQL at a time, multiple result sets will be generated)
+      val resultSetList = jobInfoResult.getResultSetList(client)
+      println("All result set list:")
+      resultSetList.foreach(println)
+      val oneResultSet = jobInfo.getResultSetList(client).head
+      // 8. Get a specific result set through a result set information
+      val fileContents = client.resultSet(ResultSetAction.builder().setPath(oneResultSet).setUser(jobExecuteResult.getUser).build()).getFileContent
+      println("First fileContents: ")
+      println(fileContents)
+    } catch {
+      case e: Exception => {
+        e.printStackTrace()
+      }
+    }
+    IOUtils.closeQuietly(client)
+  }
+
+}
+
+```
diff --git a/Linkis-Doc-master/en_US/User_Manual/LinkisCli_Usage_document.md b/Linkis-Doc-master/en_US/User_Manual/LinkisCli_Usage_document.md
new file mode 100644
index 0000000..0188013
--- /dev/null
+++ b/Linkis-Doc-master/en_US/User_Manual/LinkisCli_Usage_document.md
@@ -0,0 +1,191 @@
+Linkis-Cli usage documentation
+============
+
+## Introduction
+
+Linkis-Cli is a shell command line program used to submit tasks to Linkis.
+
+## Basic case
+
+You can submit a task to Linkis by referring to the example below.
+
+The first step is to check that the default configuration file `linkis-cli.properties` exists in the conf/ directory and contains the following configuration:
+
+```properties
+   wds.linkis.client.common.gatewayUrl=http://127.0.0.1:9001
+   wds.linkis.client.common.authStrategy=token
+   wds.linkis.client.common.tokenKey=Validation-Code
+   wds.linkis.client.common.tokenValue=BML-AUTH
+```
+
+The second step is to enter the linkis installation directory and enter the command:
+
+```bash
+    ./bin/linkis-client -engineType spark-2.4.3 -codeType sql -code "select count(*) from testdb.test;"  -submitUser hadoop -proxyUser hadoop 
+```
+
+In the third step, you will see console output indicating that the task has been submitted to Linkis and has started executing.
+
+Linkis-cli currently only supports synchronous submission: after submitting a task to Linkis, it keeps polling the task status and pulling task logs until the task ends. If the task finishes successfully, linkis-cli will also fetch the result set and output it.
+
+
+## How to use
+
+```bash
+   ./bin/linkis-client [parameter] [cli parameter]
+```
+
+## Supported parameter list
+
+* cli parameters
+
+    | Parameter | Description | Data Type | Is Required |
+    | ----------- | -------------------------- | -------- | ---- |
+    | --gwUrl | Manually specify the linkis gateway address | String | No |
+    | --authStg | Specify authentication policy | String | No |
+    | --authKey | Specify authentication key | String | No |
+    | --authVal | Specify authentication value | String | No |
+    | --userConf | Specify the configuration file location | String | No |
+
+* Parameters
+
+    | Parameter | Description | Data Type | Is Required |
+    | ----------- | -------------------------- | -------- | ---- |
+    | -engType | Engine Type | String | Yes |
+    | -runType | Execution Type | String | Yes |
+    | -code | Execution code | String | No |
+    | -codePath | Local execution code file path | String | No |
+    | -smtUsr | Specify the submitting user | String | No |
+    | -pxyUsr | Specify the execution user | String | No |
+    | -creator | Specify creator | String | No |
+    | -scriptPath | scriptPath | String | No |
+    | -outPath | Path of output result set to file | String | No |
+    | -confMap | configuration map | Map | No |
+    | -varMap | variable map for variable substitution | Map | No |
+    | -labelMap | linkis labelMap | Map | No |
+    | -sourceMap | Specify linkis sourceMap | Map | No |
+
+
+## Detailed example
+
+#### One, add cli parameters
+
+Cli parameters can be specified manually on the command line; values passed this way override the conflicting configuration items in the default configuration file:
+
+```bash
+    ./bin/linkis-client -engineType spark-2.4.3 -codeType sql -code "select count(*) from testdb.test;" -submitUser hadoop -proxyUser hadoop --gwUrl http://127.0.0.1:9001 --authStg token --authKey [tokenKey] --authVal [tokenValue]
+```
+
+#### Two, add engine initial parameters
+
+The initial parameters of the engine can be added through the `-confMap` parameter. Note that the data type of the parameter is Map. The input format of the command line is as follows:
+
+        -confMap key1=val1,key2=val2,...
+        
+For example, the following command sets startup parameters such as the Yarn queue used by the engine and the number of Spark executors:
+
+```bash
+   ./bin/linkis-client -engineType spark-2.4.3 -codeType sql -confMap wds.linkis.yarnqueue=q02,spark.executor.instances=3 -code "select count(*) from testdb.test;"  -submitUser hadoop -proxyUser hadoop  
+```
+
+These parameters can also be read from a configuration file; see the user configuration section below.
+
+#### Three, add tags
+
+Labels can be added through the `-labelMap` parameter. Like the `-confMap`, the type of the `-labelMap` parameter is also Map:
+
+```bash
+   ./bin/linkis-client -engineType spark-2.4.3 -codeType sql -labelMap labelKey=labelVal -code "select count(*) from testdb.test;"  -submitUser hadoop -proxyUser hadoop  
+```
+
+#### Four, variable replacement
+
+Linkis-cli variable substitution is implemented with the `${}` placeholder syntax and the `-varMap` parameter:
+
+```bash
+   ./bin/linkis-client -engineType spark-2.4.3 -codeType sql -code "select count(*) from \${key};" -varMap key=testdb.test  -submitUser hadoop -proxyUser hadoop  
+```
+
+During execution, the SQL statement will be replaced with:
+
+```mysql-sql
+   select count(*) from testdb.test
+```  
+        
+Note that the backslash in `\$` prevents the shell from expanding the variable before linkis-cli receives it. If the code is read from a local script via `-codePath`, the escape character is not required.
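+
+A minimal sketch of the local-script variant (the script path `/tmp/vartest.sql` is hypothetical):
+
+```bash
+   echo 'select count(*) from ${key};' > /tmp/vartest.sql
+   ./bin/linkis-client -engineType spark-2.4.3 -codeType sql -codePath /tmp/vartest.sql -varMap key=testdb.test -submitUser hadoop -proxyUser hadoop
+```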
+
+#### Five, use user configuration
+
+1. linkis-cli supports loading a user-defined configuration file. The file path is specified by the `--userConf` parameter, and the file must be in `.properties` format.
+        
+```bash
+   ./bin/linkis-client -engineType spark-2.4.3 -codeType sql -code "select count(*) from testdb.test;"  -submitUser hadoop -proxyUser hadoop  --userConf [config file path]
+``` 
+        
+        
+2. Which parameters can be configured?
+
+All parameters can be configured, for example:
+
+cli parameters:
+
+```properties
+   wds.linkis.client.common.gatewayUrl=http://127.0.0.1:9001
+   wds.linkis.client.common.authStrategy=static
+   wds.linkis.client.common.tokenKey=[tokenKey]
+   wds.linkis.client.common.tokenValue=[tokenValue]
+```
+
+parameters:
+
+```properties
+   wds.linkis.client.label.engineType=spark-2.4.3
+   wds.linkis.client.label.codeType=sql
+```
+        
+When Map-type parameters are configured, the key format is
+
+        [Map prefix] + [key]
+
+The Map prefix includes:
+
+ - ExecutionMap prefix: wds.linkis.client.exec
+ - sourceMap prefix: wds.linkis.client.source
+ - ConfigurationMap prefix: wds.linkis.client.param.conf
+ - runtimeMap prefix: wds.linkis.client.param.runtime
+ - labelMap prefix: wds.linkis.client.label
+        
+Note:
+
+1. variableMap cannot be set through the configuration file
+
+2. When a configured key conflicts with a key passed on the command line, the priority is as follows:
+
+        Command parameters > keys in command Map-type parameters > user configuration > default configuration
+        
+Example:
+
+Configure engine startup parameters:
+
+```properties
+   wds.linkis.client.param.conf.spark.executor.instances=3
+   wds.linkis.client.param.conf.wds.linkis.yarnqueue=q02
+```
+        
+Configure labelMap parameters:
+
+```properties
+   wds.linkis.client.label.myLabel=label123
+```
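+
+Given the engine startup parameters configured above, a conflicting value passed on the command line takes precedence, following the priority order described earlier. A minimal sketch (the queue name `q03` is illustrative):
+
+```bash
+   # wds.linkis.yarnqueue=q02 from the user configuration file is overridden by q03 here
+   ./bin/linkis-client -engineType spark-2.4.3 -codeType sql -confMap wds.linkis.yarnqueue=q03 -code "select count(*) from testdb.test;" -submitUser hadoop -proxyUser hadoop --userConf [config file path]
+```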
+        
+#### Six, output result set to file
+
+Use the `-outPath` parameter to specify an output directory; linkis-cli will then write the result sets to files, one file per result set. The output file names have the following format:
+
+        task-[taskId]-result-[idx].txt
+        
+For example:
+
+        task-906-result-1.txt
+        task-906-result-2.txt
+        task-906-result-3.txt
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/User_Manual/Linkis_Console_User_Manual.md b/Linkis-Doc-master/en_US/User_Manual/Linkis_Console_User_Manual.md
new file mode 100644
index 0000000..1d6704e
--- /dev/null
+++ b/Linkis-Doc-master/en_US/User_Manual/Linkis_Console_User_Manual.md
@@ -0,0 +1,120 @@
+Introduction to Computation Governance Console
+==============
+
+> Linkis1.0 has added a new Computation Governance Console, which provides users with an interactive UI for viewing the execution of Linkis tasks, custom parameter configuration, engine health status, remaining resources, etc., thereby simplifying user development and management work.
+
+Structure of Computation Governance Console
+==============
+
+> The Computation Governance Console is mainly composed of the following functional pages:
+
+- [Global History](#Global_History)
+
+- [Resource Management](#Resource_management)
+
+- [Parameter Configuration](#Parameter_Configuration)
+
+- [Global Variables](#Global_Variables)
+
+- [ECM Management](#ECM_management) (Only visible to Computation Governance Console administrators)
+
+- [Microservice Management](#Microservice_management) (Only visible to Computation Governance Console administrators)
+
+- [FAQ](#FAQ)
+
+> Global history, resource management, parameter configuration, and global variables are visible to all users, while ECM management and microservice management are only visible to Computation Governance Console administrators.
+
+> Console administrators can be configured with the following parameter in linkis.properties:
+
+> `wds.linkis.governance.station.admin=hadoop` (multiple administrator usernames are separated by ',')
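+
+> For example, to grant console administrator rights to two users (the second username is illustrative):
+
+```properties
+wds.linkis.governance.station.admin=hadoop,alice
+```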
+
+Introduction to the functions and use of Computation Governance Console
+========================
+
+Global history
+--------
+
+> ![](Images/Global History Interface.png)
+
+
+> The Global History page shows the user's own Linkis task submission records. The execution status of each task is displayed here, and the reason for a task's failure can be queried by clicking the view button on the left side of the task.
+
+> ![./media/image2.png](Images/Global History Query Button.png)
+
+
+> ![./media/image3.png](Images/task execution log of a single task.png)
+
+
+> Console administrators can view the historical tasks of all users by switching to the administrator view on the page.
+
+> ![./media/image4.png](Images/Administrator View.png)
+
+
+Resource management
+--------
+
+> On the Resource Management page, users can see the status and resource usage of currently started engines, and can also stop engines from the page.
+
+> ![./media/image5.png](Images/Resource Management Interface.png)
+
+
+Parameter configuration
+--------
+
+> The Parameter Configuration page provides user-defined parameter management. Users can manage engine-related configuration on this page, and administrators can add application types and engines here.
+
+> ![./media/image6.png](Images/parameter configuration interface.png)
+
+
+> Users can expand all the configuration information in the directory by clicking an application type at the top and then selecting an engine type under that application; after modifying the configuration, click "Save" for the changes to take effect.
+
+> Editing the catalog and adding new application types are only available to administrators. Click the edit button to delete an existing application or engine configuration (note: deleting an application directly deletes all engine configurations under it and cannot be restored), or to add an engine; click "New Application" to add a new application type.
+
+> ![./media/image7.png](Images/edit directory.png)
+
+
+> ![./media/image8.png](Images/New application type.png)
+
+
+Global variable
+--------
+
+> On the Global Variables page, users can define custom variables for use in their code; simply click the edit button to add parameters.
+
+> ![./media/image9.png](Images/Global Variable Interface.png)
+
+
+ECM management
+-------
+
+> The ECM management page is used by administrators to manage ECMs and all engines. On this page you can view ECM status information, modify ECM labels, modify ECM status, and query all engine information under each ECM. The page is visible only to administrators; how administrators are configured is described in the second chapter of this article.
+
+> ![./media/image10.png](Images/ECM management interface.png)
+
+
+> Click the edit button to edit the ECM's label information (only some labels can be edited) and to modify the ECM's status.
+
+> ![./media/image11.png](Images/ECM editing interface.png)
+
+
+> Click the instance name of the ECM to view all engine information under the ECM.
+
+> ![](Images/Click the instance name to view engine information.png)
+
+> ![](Images/All engine information under ECM.png)
+
+> Similarly, you can stop the engine on this interface, and edit the label information of the engine.
+
+Microservice management
+----------
+
+> The Microservice Management page shows all microservice information under Linkis and is only visible to administrators. Linkis's own microservices can be viewed by clicking into the Eureka registry, while the microservices associated with Linkis are listed directly on this page.
+
+> ![](Images/microservice management interface.png)
+
+> ![](Images/Eureka registration center.png)
+
+FAQ
+--------
+
+> To be added.
diff --git a/Linkis-Doc-master/en_US/User_Manual/README.md b/Linkis-Doc-master/en_US/User_Manual/README.md
new file mode 100644
index 0000000..442a32a
--- /dev/null
+++ b/Linkis-Doc-master/en_US/User_Manual/README.md
@@ -0,0 +1,8 @@
+# Overview
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Linkis was designed from the beginning with extensible access methods in mind. For different access scenarios, Linkis provides front-end access and SDK access; HTTP and WebSocket interfaces are also provided on top of the front-end interfaces. If you are interested in accessing and using Linkis, you can refer to the following documents:
+
+- [How to use Linkis](How_To_Use_Linkis.md)
+- [Linkis Management Console User Manual](Linkis_Console_User_Manual.md)
+- [Linkis1.0 User Manual](Linkis1.0_User_Manual.md)
+- [Linkis-Cli Usage Document](LinkisCli_Usage_document.md)
diff --git "a/Linkis-Doc-master/zh_CN/API_Documentations/Linkis\344\273\273\345\212\241\346\217\220\344\272\244\346\211\247\350\241\214RestAPI\346\226\207\346\241\243.md" "b/Linkis-Doc-master/zh_CN/API_Documentations/Linkis\344\273\273\345\212\241\346\217\220\344\272\244\346\211\247\350\241\214RestAPI\346\226\207\346\241\243.md"
new file mode 100644
index 0000000..6e5493c
--- /dev/null
+++ "b/Linkis-Doc-master/zh_CN/API_Documentations/Linkis\344\273\273\345\212\241\346\217\220\344\272\244\346\211\247\350\241\214RestAPI\346\226\207\346\241\243.md"
@@ -0,0 +1,171 @@
+# Linkis 任务提交执行Rest API文档
+
+- Linkis Restful接口的返回,都遵循以下的标准返回格式:
+
+```json
+{
+ "method": "",
+ "status": 0,
+ "message": "",
+ "data": {}
+}
+```
+
+**约定**:
+
+ - method:返回请求的Restful API URI,主要是 WebSocket 模式需要使用。
+ - status:返回状态信息,其中:-1表示没有登录,0表示成功,1表示错误,2表示验证失败,3表示没该接口的访问权限。
+ - data:返回具体的数据。
+ - message:返回请求的提示信息。如果status非0时,message返回的是错误信息,其中data有可能存在stack字段,返回具体的堆栈信息。 
+ 
+更多关于 Linkis Restful 接口的规范,请参考:[Linkis Restful 接口规范](https://github.com/WeBankFinTech/Linkis-Doc/blob/master/zh_CN/Development_Documents/Development_Specification/API.md)
+
+### 1).提交执行
+
+- 接口 `/api/rest_j/v1/entrance/execute`
+
+- 提交方式 `POST`
+
+```json
+{
+    "executeApplicationName": "hive", //引擎类型
+    "requestApplicationName": "dss", //客户端服务类型
+    "executionCode": "show tables",
+    "params": {"variable": {}, "configuration": {}},
+    "runType": "hql", //运行的脚本类型
+   "source": {"scriptPath":"file:///tmp/hadoop/1.hql"}
+}
+```
+
+- 接口 `/api/rest_j/v1/entrance/submit`
+
+- 提交方式 `POST`
+
+```json
+{
+    "executionContent": {"code": "show tables", "runType":  "sql"},
+    "params": {"variable": {}, "configuration": {}},
+    "source":  {"scriptPath": "file:///mnt/bdp/hadoop/1.hql"},
+    "labels": {
+        "engineType": "spark-2.4.3",
+        "userCreator": "hadoop-IDE"
+    }
+}
+```
+
+
+- 返回示例
+
+```json
+{
+ "method": "/api/rest_j/v1/entrance/execute",
+ "status": 0,
+ "message": "请求执行成功",
+ "data": {
+   "execID": "030418IDEhivebdpdwc010004:10087IDE_hadoop_21",
+   "taskID": "123"  
+ }
+}
+```
+
+- execID是用户任务提交到 Linkis 之后,为该任务生成的唯一标识执行ID,为 String 类型,这个ID只在任务运行时有用,类似PID的概念。ExecID 的设计为`(requestApplicationName长度)(executeAppName长度)(Instance长度)${requestApplicationName}${executeApplicationName}${entranceInstance信息ip+port}${requestApplicationName}_${umUser}_${index}`
+
+- taskID 是表示用户提交task的唯一ID,这个ID由数据库自增生成,为 Long 类型
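+
+下面给出一个提交接口的调用示意(gateway 地址为示例值,SESSION_COOKIE 为登录接口返回的会话 Cookie,此处仅作占位):
+
+```bash
+curl -X POST 'http://127.0.0.1:9001/api/rest_j/v1/entrance/submit' \
+  -H 'Content-Type: application/json' \
+  -b "$SESSION_COOKIE" \
+  -d '{"executionContent": {"code": "show tables", "runType": "sql"}, "labels": {"engineType": "spark-2.4.3", "userCreator": "hadoop-IDE"}}'
+```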
+
+
+### 2).获取状态
+
+- 接口 `/api/rest_j/v1/entrance/${execID}/status`
+
+- 提交方式 `GET`
+
+- 返回示例
+
+```json
+{
+ "method": "/api/rest_j/v1/entrance/{execID}/status",
+ "status": 0,
+ "message": "获取状态成功",
+ "data": {
+   "execID": "${execID}",
+   "status": "Running"
+ }
+}
+```
+
+### 3).获取日志
+
+- 接口 `/api/rest_j/v1/entrance/${execID}/log?fromLine=${fromLine}&size=${size}`
+
+- 提交方式 `GET`
+
+- 请求参数fromLine是指从第几行开始获取,size是指该次请求获取几行日志
+
+- 返回示例,其中返回的fromLine需要作为下次请求该接口的参数
+
+```json
+{
+  "method": "/api/rest_j/v1/entrance/${execID}/log",
+  "status": 0,
+  "message": "返回日志信息",
+  "data": {
+    "execID": "${execID}",
+	"log": ["error日志","warn日志","info日志", "all日志"],
+	"fromLine": 56
+  }
+}
+```
+
+### 4).获取进度
+
+- 接口 `/api/rest_j/v1/entrance/${execID}/progress`
+
+- 提交方式 `GET`<br>
+
+- 返回示例
+
+```json
+{
+  "method": "/api/rest_j/v1/entrance/{execID}/progress",
+  "status": 0,
+  "message": "返回进度信息",
+  "data": {
+    "execID": "${execID}",
+	"progress": 0.2,
+	"progressInfo": [
+		{
+			"id": "job-1",
+			"succeedTasks": 2,
+			"failedTasks": 0,
+			"runningTasks": 5,
+			"totalTasks": 10
+		},
+		{
+			"id": "job-2",
+			"succeedTasks": 5,
+			"failedTasks": 0,
+			"runningTasks": 5,
+			"totalTasks": 10
+		}
+	]
+  }
+}
+```
+
+### 5).kill任务
+
+- 接口 `/api/rest_j/v1/entrance/${execID}/kill`
+
+- 提交方式 `GET`
+
+```json
+{
+ "method": "/api/rest_j/v1/entrance/{execID}/kill",
+ "status": 0,
+ "message": "OK",
+ "data": {
+   "execID":"${execID}"
+  }
+}
+```
+
diff --git a/Linkis-Doc-master/zh_CN/API_Documentations/Login_API.md b/Linkis-Doc-master/zh_CN/API_Documentations/Login_API.md
new file mode 100644
index 0000000..01c896f
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/API_Documentations/Login_API.md
@@ -0,0 +1,131 @@
+# 登录文档
+
+## 1.对接LDAP服务
+
+进入/conf目录,执行命令:
+
+```bash
+    vim linkis-mg-gateway.properties
+```    
+
+添加LDAP相关配置:
+```bash
+wds.linkis.ldap.proxy.url=ldap://127.0.0.1:389/ # 您的LDAP服务URL
+wds.linkis.ldap.proxy.baseDN=dc=webank,dc=com # 您的LDAP服务的配置    
+```    
+    
+## 2.如何打开测试模式,实现免登录
+
+进入/conf目录,执行命令:
+
+```bash
+     vim linkis-mg-gateway.properties
+```
+    
+    
+将测试模式打开,参数如下:
+
+```shell
+    wds.linkis.test.mode=true   # 打开测试模式
+    wds.linkis.test.user=hadoop  # 指定测试模式下,所有请求都代理给哪个用户
+```
+
+## 3.登录接口汇总
+
+我们提供以下几个与登录相关的接口:
+
+ - [登录](#1登录)
+
+ - [登出](#2登出)
+
+ - [心跳](#3心跳)
+ 
+
+## 4.接口详解
+
+- Linkis Restful接口的返回,都遵循以下的标准返回格式:
+
+```json
+{
+ "method": "",
+ "status": 0,
+ "message": "",
+ "data": {}
+}
+```
+
+**约定**:
+
+ - method:返回请求的Restful API URI,主要是 WebSocket 模式需要使用。
+ - status:返回状态信息,其中:-1表示没有登录,0表示成功,1表示错误,2表示验证失败,3表示没该接口的访问权限。
+ - data:返回具体的数据。
+ - message:返回请求的提示信息。如果status非0时,message返回的是错误信息,其中data有可能存在stack字段,返回具体的堆栈信息。 
+ 
+更多关于 Linkis Restful 接口的规范,请参考:[Linkis Restful 接口规范](https://github.com/WeBankFinTech/Linkis-Doc/blob/master/zh_CN/Development_Documents/Development_Specification/API.md)
+
+### 1).登录
+
+- 接口 `/api/rest_j/v1/user/login`
+
+- 提交方式 `POST`
+
+```json
+      {
+        "userName": "",
+        "password": ""
+      }
+```
+
+- 返回示例
+
+```json
+    {
+        "method": null,
+        "status": 0,
+        "message": "login successful(登录成功)!",
+        "data": {
+            "isAdmin": false,
+            "userName": ""
+        }
+     }
+```
+
+其中:
+
+ - isAdmin: Linkis只有admin用户和非admin用户,admin用户的唯一特权,就是支持在Linkis管理台查看所有用户的历史任务。
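+
+调用示意(gateway 地址为示例值,`-c` 用于保存返回的会话 Cookie,供后续接口调用使用):
+
+```bash
+curl -X POST 'http://127.0.0.1:9001/api/rest_j/v1/user/login' \
+  -H 'Content-Type: application/json' \
+  -c cookies.txt \
+  -d '{"userName": "hadoop", "password": "***"}'
+```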
+
+### 2).登出
+
+- 接口 `/api/rest_j/v1/user/logout`
+
+- 提交方式 `POST`
+
+  无参数
+
+- 返回示例
+
+```json
+    {
+        "method": "/api/rest_j/v1/user/logout",
+        "status": 0,
+        "message": "退出登录成功!"
+    }
+```
+
+### 3).心跳
+
+- 接口 `/api/rest_j/v1/user/heartbeat`
+
+- 提交方式 `POST`
+
+  无参数
+
+- 返回示例
+
+```json
+    {
+         "method": "/api/rest_j/v1/user/heartbeat",
+         "status": 0,
+         "message": "维系心跳成功!"
+    }
+```
diff --git a/Linkis-Doc-master/zh_CN/API_Documentations/README.md b/Linkis-Doc-master/zh_CN/API_Documentations/README.md
new file mode 100644
index 0000000..9f952b6
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/API_Documentations/README.md
@@ -0,0 +1,8 @@
+## 1. 文档说明
+Linkis1.0 在Linkis0.x版本的基础上进行了重构优化,同时也兼容了0.x的接口,但是为了防止在使用1.0版本时存在兼容性问题,需要您仔细阅读以下文档:
+
+1. 使用Linkis1.0定制化开发时,需要使用到Linkis的权限认证接口,请仔细阅读 [登录API文档](Login_API.md)。
+
+2. Linkis1.0提供JDBC的接口,需要使用JDBC的方式接入Linkis,请仔细阅读[任务提交执行JDBC API文档](任务提交执行JDBC_API文档.md)。
+
+3. Linkis1.0提供了Rest接口,如果需要在Linkis的基础上开发上层应用,请仔细阅读[任务提交执行Rest API文档](Linkis任务提交执行RestAPI文档.md)。
\ No newline at end of file
diff --git "a/Linkis-Doc-master/zh_CN/API_Documentations/\344\273\273\345\212\241\346\217\220\344\272\244\346\211\247\350\241\214JDBC_API\346\226\207\346\241\243.md" "b/Linkis-Doc-master/zh_CN/API_Documentations/\344\273\273\345\212\241\346\217\220\344\272\244\346\211\247\350\241\214JDBC_API\346\226\207\346\241\243.md"
new file mode 100644
index 0000000..1e365be
--- /dev/null
+++ "b/Linkis-Doc-master/zh_CN/API_Documentations/\344\273\273\345\212\241\346\217\220\344\272\244\346\211\247\350\241\214JDBC_API\346\226\207\346\241\243.md"
@@ -0,0 +1,46 @@
+# 任务提交执行JDBC API文档
+
+### 一、引入依赖模块:
+第一种方式在pom里面依赖JDBC模块:
+```xml
+<dependency>
+    <groupId>com.webank.wedatasphere.linkis</groupId>
+    <artifactId>linkis-ujes-jdbc</artifactId>
+    <version>${linkis.version}</version>
+ </dependency>
+```
+**注意:** 该模块还没有deploy到中央仓库,需要在ujes/jdbc目录里面执行`mvn install -Dmaven.test.skip=true`进行本地安装。
+
+**第二种方式通过打包和编译:**
+1. 在Linkis项目中进入到ujes/jdbc目录然后在终端输入指令进行打包`mvn assembly:assembly -Dmaven.test.skip=true`
+该打包指令会跳过单元测试的运行和测试代码的编译,并将JDBC模块需要的依赖一并打包进Jar包之中。
+2. 打包完成后在JDBC的target目录下会生成两个Jar包,Jar包名称中包含dependencies字样的那个就是我们需要的Jar包
+
+### 二、建立测试类:
+建立Java的测试类LinkisClientImplTestJ,具体接口含义可以见注释:
+```java
+ public static void main(String[] args) throws SQLException, ClassNotFoundException {
+
+        //1. 加载驱动类:com.webank.wedatasphere.linkis.ujes.jdbc.UJESSQLDriver
+        Class.forName("com.webank.wedatasphere.linkis.ujes.jdbc.UJESSQLDriver");
+
+        //2. 获得连接:jdbc:linkis://gatewayIP:gatewayPort   帐号和密码对应前端的帐号密码
+        Connection connection =  DriverManager.getConnection("jdbc:linkis://127.0.0.1:9001","username","password");
+
+        //3. 创建statement 和执行查询
+        Statement st= connection.createStatement();
+        ResultSet rs=st.executeQuery("show tables");
+        //4.处理数据库的返回结果(使用ResultSet类)
+        while (rs.next()) {
+            ResultSetMetaData metaData = rs.getMetaData();
+            for (int i = 1; i <= metaData.getColumnCount(); i++) {
+                System.out.print(metaData.getColumnName(i) + ":" +metaData.getColumnTypeName(i)+": "+ rs.getObject(i) + "    ");
+            }
+            System.out.println();
+        }
+        //关闭资源
+        rs.close();
+        st.close();
+        connection.close();
+    }
+```
\ No newline at end of file
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Commons/messagescheduler.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Commons/messagescheduler.md
new file mode 100644
index 0000000..4ed47a9
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Commons/messagescheduler.md
@@ -0,0 +1,15 @@
+# Linkis-Message-Scheduler
+## 1. 概述
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Linkis-RPC可以实现微服务之间的通信,为了简化RPC的使用方式,Linkis提供Message-Scheduler模块,通过@Receiver等注解完成方法的解析、识别与调用,同时也统一了RPC和Restful接口的使用方式,具有更好的可拓展性。
+## 2. 架构说明
+## 2.1. 架构设计图
+![模块设计图](./../../Images/Architecture/Commons/linkis-message-scheduler.png)
+## 2.2. 模块说明
+* ServiceParser:解析Service模块的(Object)对象,同时把@Receiver注解的方法封装到ServiceMethod对象中。
+* ServiceRegistry:注册对应的Service模块,将Service解析后的ServiceMethod存储在Map容器中。
+* ImplicitParser:将Implicit模块的对象进行解析,使用@Implicit标注的方法会被封装到ImplicitMethod对象中。
+* ImplicitRegistry:注册对应的Implicit模块,将解析后的ImplicitMethod存储在一个Map容器中。
+* Converter:启动扫描RequestMethod的非接口非抽象的子类,并存储在Map中,解析Restful并匹配相关的RequestProtocol。
+* Publisher:实现发布调度功能,在Registry中找出匹配RequestProtocol的ServiceMethod,并封装为Job进行提交调度。
+* Scheduler:调度实现,使用Linkis-Sceduler执行Job,返回MessageJob对象。
+* TxManager:完成事务管理,对Job执行进行事务管理,在Job执行结束后判断是否进行Commit或者Rollback。
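+
+下面给出一个使用@Receiver声明消息处理方法的极简示意(类名与消息类型均为假设,仅说明声明方式):
+
+```scala
+// 该Service被ServiceParser解析后,dealDemoRequest会被封装为ServiceMethod并注册到ServiceRegistry
+class DemoService {
+  @Receiver
+  def dealDemoRequest(request: DemoRequest, sender: Sender): DemoResponse = {
+    // 处理请求并返回结果
+    new DemoResponse
+  }
+}
+```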
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Commons/rpc.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Commons/rpc.md
new file mode 100644
index 0000000..c89c578
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Commons/rpc.md
@@ -0,0 +1,17 @@
+# Linkis-RPC
+## 1. 概述
+基于Feign的微服务之间HTTP接口的调用,只能满足简单的A微服务实例根据简单的规则随机选择B微服务之中的某个服务实例,而这个B微服务实例如果想异步回传信息给调用方,是根本无法实现的。
+同时,由于Feign只支持简单的服务选取规则,无法做到将请求转发给指定的微服务实例,无法做到将一个请求广播给接收方微服务的所有实例。
+
+## 2. 架构说明
+## 2.1. 架构设计图
+![Linkis RPC架构图](./../../Images/Architecture/Commons/linkis-rpc.png)
+## 2.2. 模块说明
+主要模块的功能介绍如下:
+* Eureka:服务注册中心,用户管理服务,服务发现。
+* Sender发送器:服务请求接口,发送端使用Sender向接收端请求服务。
+* Receiver接收器:服务请求接收相应接口,接收端通过该接口响应服务。
+* Interceptor拦截器:Sender发送器会将使用者的请求传递给拦截器。拦截器拦截请求,对请求做额外的功能性处理,分别是广播拦截器用于对请求广播操作、重试拦截器用于对失败请求重试处理、缓存拦截器用于简单不变的请求读取缓存处理、和提供默认实现的默认拦截器。
+* Decoder,Encoder:用于请求的编码和解码。
+* Feign:是一个http请求调用的轻量级框架,声明式WebService客户端程序,用于Linkis-RPC底层通信。
+* Listener:监听模块,主要用于监听广播请求。
\ No newline at end of file
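+
+Sender的一个调用示意(服务名与消息类型为示例值):
+
+```scala
+// 获取指向目标微服务的Sender,并以ask方式发起同步请求
+val sender = Sender.getSender("linkis-cg-linkismanager")
+val response = sender.ask(new DemoRequest())
+```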
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/EngineConn/README.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/EngineConn/README.md
new file mode 100644
index 0000000..45389b1
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/EngineConn/README.md
@@ -0,0 +1,98 @@
+EngineConn架构设计
+==================
+
+EngineConn:引擎连接器,用于创建底层计算存储引擎的连接会话Session,包含引擎与具体集群的会话信息,是Linkis与底层计算存储引擎进行通信的实际单元。
+
+一、EngineConn架构图
+
+![EngineConn](../../../Images/Architecture/EngineConn/engineconn-01.png)
+
+二级模块介绍:
+==============
+
+linkis-computation-engineconn交互式引擎连接器
+---------------------------------------------
+
+提供交互式计算任务的能力。
+
+| 核心类               | 核心功能                                                   |
+|----------------------|------------------------------------------------------------|
+| EngineConnTask       | 定义了提交给EngineConn的交互式计算任务                     |
+| ComputationExecutor  | 定义了交互式Executor,具备状态查询、任务kill等交互式能力。 |
+| TaskExecutionService | 提供对交互式计算任务的管理功能                             |
+
+linkis-engineconn-common引擎连接器的通用模块
+--------------------------------------------
+
+1.  定义了引擎连接器中最基础的实体类和接口。EngineConn是用于创建一个底层计算存储引擎的连接会话Session,包含引擎与具体集群的会话信息,是与具体引擎通信的client。
+
+| 核心Service           | 核心功能                                                             |
+|-----------------------|----------------------------------------------------------------------|
+| EngineCreationContext | 包含了EngineConn在启动期间的上下文信息                               |
+| EngineConn            | 包含了EngineConn的具体信息,如类型、与底层计算存储引擎的具体连接信息等 |
+| EngineExecution       | 提供Executor的创建逻辑                                               |
+| EngineConnHook        | 定义引擎启动各个阶段前后的操作                                       |
+
+linkis-engineconn-core引擎连接器的核心逻辑
+------------------------------------------
+
+定义了EngineConn的核心逻辑涉及的接口。
+
+| 核心类            | 核心功能                           |
+|-------------------|------------------------------------|
+| EngineConnManager | 提供创建、获取EngineConn的相关接口 |
+| ExecutorManager   | 提供创建、获取Executor的相关接口   |
+| ShutdownHook      | 定义引擎关闭阶段的操作             |
+
+linkis-engineconn-launch引擎连接器启动模块
+------------------------------------------
+
+定义了如何启动EngineConn的逻辑。
+
+| 核心类           | 核心功能                 |
+|------------------|--------------------------|
+| EngineConnServer | EngineConn微服务的启动类 |
+
+linkis-executor-core执行器的核心逻辑
+------------------------------------
+
+>   定义了执行器相关的核心类。执行器是真正的计算场景执行器,负责将用户代码提交给EngineConn。
+
+| 核心类                     | 核心功能                                                   |
+|----------------------------|------------------------------------------------------------|
+| Executor                   | 是实际的计算逻辑执行单元,并提供对引擎各种能力的顶层抽象。 |
+| EngineConnAsyncEvent       | 定义了EngineConn相关的异步事件                             |
+| EngineConnSyncEvent        | 定义了EngineConn相关的同步事件                             |
+| EngineConnAsyncListener    | 定义了EngineConn相关异步事件监听器                         |
+| EngineConnSyncListener     | 定义了EngineConn相关同步事件监听器                         |
+| EngineConnAsyncListenerBus | 定义了EngineConn异步事件的监听器总线                       |
+| EngineConnSyncListenerBus  | 定义了EngineConn同步事件的监听器总线                       |
+| ExecutorListenerBusContext | 定义了EngineConn事件监听器的上下文                         |
+| LabelService               | 提供标签上报功能                                           |
+| ManagerService             | 提供与LinkisManager进行信息传递的功能                      |
+
+linkis-callback-service回调逻辑
+-------------------------------
+
+| 核心类             | 核心功能                 |
+|--------------------|--------------------------|
+| EngineConnCallback | 定义EngineConn的回调逻辑 |
+
+linkis-accessible-executor能够被访问的执行器
+--------------------------------------------
+
+能够被访问的Executor。可以通过RPC请求与它交互,从而获取它的状态、负载、并发等基础指标Metrics数据。
+
+| 核心类                   | 核心功能                                        |
+|--------------------------|-------------------------------------------------|
+| LogCache                 | 提供日志缓存的功能                              |
+| AccessibleExecutor       | 能够被访问的Executor,可以通过RPC请求与它交互。 |
+| NodeHealthyInfoManager   | 管理Executor的健康信息                          |
+| NodeHeartbeatMsgManager  | 管理Executor的心跳信息                          |
+| NodeOverLoadInfoManager  | 管理Executor的负载信息                          |
+| Listener                 | 提供与Executor相关的事件以及对应的监听器定义    |
+| EngineConnTimedLock      | 定义Executor级别的锁                            |
+| AccessibleService        | 提供Executor的启停、状态获取功能                |
+| ExecutorHeartbeatService | 提供Executor的心跳相关功能                      |
+| LockService              | 提供锁管理功能                                  |
+| LogService               | 提供日志管理功能                                |
diff --git "a/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/EngineConnManager/Images/ECM\346\236\266\346\236\204\345\233\276.png" "b/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/EngineConnManager/Images/ECM\346\236\266\346\236\204\345\233\276.png"
new file mode 100644
index 0000000..cc83842
Binary files /dev/null and "b/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/EngineConnManager/Images/ECM\346\236\266\346\236\204\345\233\276.png" differ
diff --git "a/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/EngineConnManager/Images/\345\210\233\345\273\272EngineConn\350\257\267\346\261\202\346\265\201\347\250\213.png" "b/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/EngineConnManager/Images/\345\210\233\345\273\272EngineConn\350\257\267\346\261\202\346\265\201\347\250\213.png"
new file mode 100644
index 0000000..303f37a
Binary files /dev/null and "b/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/EngineConnManager/Images/\345\210\233\345\273\272EngineConn\350\257\267\346\261\202\346\265\201\347\250\213.png" differ
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/EngineConnManager/README.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/EngineConnManager/README.md
new file mode 100644
index 0000000..2fa0aef
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/EngineConnManager/README.md
@@ -0,0 +1,49 @@
+EngineConnManager架构设计
+-------------------------
+
+EngineConnManager(ECM):EngineConn的管理器,提供引擎的生命周期管理,同时向RM汇报负载信息和自身的健康状况。
+
+### 一、ECM架构
+
+![](Images/ECM架构图.png)
+
+### 二、二级模块介绍
+
+**Linkis-engineconn-linux-launch**
+
+引擎启动器,核心类为LinuxProcessEngineConnLauch,用于提供执行命令的指令。
+
+**Linkis-engineconn-manager-core**
+
+ECM的核心模块,包含ECM健康上报、EngineConn健康上报功能的顶层接口,定义了ECM服务的相关指标,以及构造EngineConn进程的核心方法。
+
+| 核心顶层接口/类     | 核心功能                                 |
+|---------------------|------------------------------------------|
+| EngineConn          | 定义了EngineConn的属性,包含的方法和参数 |
+| EngineConnLaunch    | 定义了EngineConn的启动方法和停止方法     |
+| ECMEvent            | 定义了ECM相关事件                        |
+| ECMEventListener    | 定义了ECM相关事件监听器                  |
+| ECMEventListenerBus | 定义了ECM的监听器总线                    |
+| ECMMetrics          | 定义了ECM的指标信息                      |
+| ECMHealthReport     | 定义了ECM的健康上报信息                  |
+| NodeHealthReport    | 定义了节点的健康上报信息                 |
+
+**Linkis-engineconn-manager-server**
+
+ECM的服务端,定义了ECM健康信息处理服务、ECM指标信息处理服务、ECM注册服务、EngineConn启动服务、EngineConn停止服务、EngineConn回调服务等顶层接口和实现类,主要用于ECM对自己和EngineConn的生命周期管理以及健康信息上报、发送心跳等。
+
+模块中的核心Service和功能简介如下:
+
+| 核心service                     | 核心功能                                        |
+|---------------------------------|-------------------------------------------------|
+| EngineConnLaunchService         | 包含生成EngineConn和启动进程的核心方法          |
+| BmlResourceLocalizationService  | 用于将BML的引擎相关资源下载并生成本地化文件目录 |
+| ECMHealthService                | 向AM定时上报自身的健康心跳                      |
+| ECMMetricsService               | 向AM定时上报自身的指标状况                      |
+| EngineConnKillService           | 提供停止引擎的相关功能                          |
+| EngineConnListService           | 提供缓存和管理引擎的相关功能                    |
+| EngineConnCallBackService       | 提供回调引擎的功能                              |
+
+ECM构建EngineConn启动流程:
+
+![](Images/创建EngineConn请求流程.png)
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/EngineConnPlugin/README.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/EngineConnPlugin/README.md
new file mode 100644
index 0000000..798f535
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/EngineConnPlugin/README.md
@@ -0,0 +1,71 @@
+EngineConnPlugin(ECP)架构设计
+===============================
+
+引擎连接器插件是一种能够动态加载引擎连接器并减少版本冲突发生的实现,具有方便扩展、快速刷新、选择加载的特性。为了能让开发用户自由扩展Linkis的Engine引擎,并动态加载引擎依赖避免版本冲突,设计研发了EngineConnPlugin,允许以实现既定的插件化接口的方式引入新引擎到计算中间件的执行生命周期里,
+插件化接口对引擎的定义做了拆解,包括参数初始化、分配引擎资源,构建引擎连接以及设定引擎默认标签。
+
+一、ECP架构图
+
+![](../../../Images/Architecture/linkis-engineConnPlugin-01.png)
+
+二级模块介绍:
+==============
+
+EngineConn-Plugin-Server
+------------------------
+
+引擎连接器插件服务是对外提供注册插件、管理插件,以及插件资源构建的入口服务。成功注册加载的引擎插件会包含资源分配和启动参数配置的逻辑,在引擎初始化过程中,EngineConn
+Manager等其他服务通过RPC请求调用Plugin Server里对应插件的逻辑。
+
+| 核心类                           | 核心功能                              |
+|----------------------------------|---------------------------------------|
+| EngineConnLaunchService          | 负责构建引擎连接器启动请求            |
+| EngineConnResourceFactoryService | 负责生成引擎资源                      |
+| EngineConnResourceService        | 负责从BML下载引擎连接器使用的资源文件 |
+
+
+EngineConn-Plugin-Loader 引擎连接器插件加载器
+---------------------------------------
+
+引擎连接器插件加载器是用来根据请求参数动态加载引擎连接器插件的加载器,并具有缓存的特性。具体加载流程主要由两部分组成:1)插件资源例如主程序包和程序依赖包等加载到本地(未开放)。2)插件资源从本地动态加载入服务进程环境中,例如通过类加载器加载入JVM虚拟机。
+
+| 核心类                          | 核心功能                                     |
+|---------------------------------|----------------------------------------------|
+| EngineConnPluginsResourceLoader | 加载引擎连接器插件资源                       |
+| EngineConnPluginsLoader         | 加载引擎连接器插件实例,或者从缓存加载已有的 |
+| EngineConnPluginClassLoader     | 动态从jar中实例化引擎连接器实例              |
+
+EngineConn-Plugin-Cache 引擎插件缓存模组
+----------------------------------------
+
+引擎连接器插件缓存是专门用来缓存已经加载的引擎连接器的缓存服务,并支持读取、更新、移除的能力。已经加载进服务进程的插件会被连同其类加载器一起缓存起来,避免多次加载影响效率;同时缓存模组会定时通知加载器去更新插件资源,如果发现有变动,会重新加载并自动刷新缓存。
+
+| 核心类                      | 核心功能                     |
+|-----------------------------|------------------------------|
+| EngineConnPluginCache       | 缓存已经加载的引擎连接器实例 |
+| RefreshPluginCacheContainer | 定时刷新缓存的引擎连接器     |
+
+EngineConn-Plugin-Core:引擎连接器插件核心模组
+---------------------------------------------
+
+引擎连接器插件核心模块是引擎连接器插件的核心模块。包含引擎插件基本功能实现,如引擎连接器启动命令构建,引擎资源工厂构建和引擎连接器插件核心接口实现。
+
+| 核心类                  | 核心功能                                                 |
+|-------------------------|----------------------------------------------------------|
+| EngineConnLaunchBuilder | 构建引擎连接器启动请求                                   |
+| EngineConnFactory       | 创建引擎连接器                                           |
+| EngineConnPlugin        | 引擎连接器插件实现接口,包括资源,命令,实例的构建方法。 |
+| EngineResourceFactory   | 引擎资源的创建工厂                                       |
+
+EngineConn-Plugins:引擎连接插件集合
+-----------------------------------
+
+引擎连接插件集合是用来放置已经基于我们定义的插件接口实现的默认引擎连接器插件库。提供了默认引擎连接器实现,如jdbc、spark、python、shell等。用户可以基于自己的需求参考已经实现的案例,实现更多的引擎连接器。
+
+| 核心类              | 核心功能         |
+|---------------------|------------------|
+| engineplugin-jdbc   | jdbc引擎连接器   |
+| engineplugin-shell  | shell引擎连接器  |
+| engineplugin-spark  | spark引擎连接器  |
+| engineplugin-python | python引擎连接器 |
+
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/Entrance/Entrance.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/Entrance/Entrance.md
new file mode 100644
index 0000000..38d3e56
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/Entrance/Entrance.md
@@ -0,0 +1,26 @@
+Entrance架构设计
+================
+
+Linkis任务提交入口是用来负责计算任务的接收、调度、转发执行请求、生命周期管理的服务,并且能把计算结果、日志、进度返回给调用方,是从Linkis0.X的Entrance拆分出来的原生能力。
+
+一、Entrance架构图
+
+![](../../../Images/Architecture/linkis-entrance-01.png)
+
+**二级模块介绍:**
+
+EntranceServer
+--------------
+
+EntranceServer计算任务提交入口服务是Entrance的核心服务,负责Linkis执行任务的接收、调度、执行状态跟踪、作业生命周期管理等。主要实现了把任务执行请求转成可调度的Job,调度、申请Executor执行,Job状态管理,结果集管理,日志管理等。
+
+| 核心类                  | 核心功能                                                                                                                                           |
+|-------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------|
+| EntranceInterceptor     | Entrance拦截器用来对传入参数task进行信息的补充,使得这个task的内容更加完整, 补充的信息包括: 数据库信息补充、自定义变量替换、代码检查、limit限制等 |
+| EntranceParser          | Entrance解析器用来把请求参数Map解析成Task,也可以将Task转成可调度的Job,或者把Job转成可存储的Task。                                                  |
+| EntranceExecutorManager | Entrance执行器管理为EntranceJob的执行创建Executor,并维护Job和Executor的关系,且支持Job请求的标签能力                                               |
+| PersistenceManager      | 持久化管理负责作业相关的持久化操作,如结果集路径、作业状态变化、进度等存储到数据库。                                                               |
+| ResultSetEngine         | 结果集引擎负责作业运行后的结果集存储,以文件的形式保存到HDFS或者本地存储目录。                                                                     |
+| LogManager              | 日志管理负责作业日志的存储并对接日志错误码管理。                                                                                                   |
+| Scheduler               | 作业调度器负责所有Job的调度执行,主要通过调度作业队列实现。                                                                                        |
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/LinkisClient/README.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/LinkisClient/README.md
new file mode 100644
index 0000000..7d36f0e
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/LinkisClient/README.md
@@ -0,0 +1,35 @@
+## Linkis-Client架构设计
+
+为用户提供向Linkis提交执行任务的轻量级客户端。
+
+#### Linkis-Client架构图
+
+![img](./../../../Images/Architecture/linkis-client-01.png)
+
+
+
+#### 二级模块介绍
+
+##### Linkis-Computation-Client
+
+以SDK的形式为用户提供向Linkis提交执行任务的接口。
+
+| 核心类     | 核心功能                                         |
+| ---------- | ------------------------------------------------ |
+| Action     | 定义了请求的属性,包含的方法和参数               |
+| Result     | 定义了返回结果的属性,包含的方法和参数           |
+| UJESClient | 负责请求的提交,执行,状态、结果和相关参数的获取 |
+
+ 
+
+#####  Linkis-Cli
+
+以shell命令端的形式为用户提供向Linkis提交执行任务的方式。
+
+| 核心类      | 核心功能                                                     |
+| ----------- | ------------------------------------------------------------ |
+| Common      | 定义了指令模板父类、指令解析实体类、任务提交执行各环节的父类和接口 |
+| Core        | 负责解析输入、任务执行和定义输出方式                         |
+| Application | 调用linkis-computation-client执行任务,并实时拉取日志和最终结果 |
+
+ 
\ No newline at end of file
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/LinkisManager/AppManager.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/LinkisManager/AppManager.md
new file mode 100644
index 0000000..c8fba23
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/LinkisManager/AppManager.md
@@ -0,0 +1,45 @@
+## 背景
+旧版本Linkis的Entrance模块负责的职责太多,对Engine的管理能力较弱,且不易于后续扩展,因此新抽出了AppManager模块,完成以下职责:
+1. 新增AM模块将Entrance之前做的管理Engine的功能移动到AM模块
+2. AM需要支持操作Engine,包括:新增、复用、回收、预热、切换等功能
+3. 需要对接Manager模块对外提供Engine的管理功能:包括Engine状态维护、引擎列表维护、引擎信息等
+4. AM需要管理EM服务,需要完成EM的注册并将资源注册转发给RM进行EM的资源注册
+5. AM需要对接Label模块,包括EM/Engine的增删需要通知标签管理器进行标签更新
+6. AM另外需要对接标签模块进行标签解析,并需要通过一系列标签获取打好分的serverInstance列表(EM和Engine通过完全不同的标签加以区分)
+7. 需要对外提供基础接口:包括引擎和引擎管理器的增删改,提供metric查询等
+
+## 架构图
+
+![](../../../Images/Architecture/AppManager-03.png)
+
+如上图所示:AM在LinkisMaster中属于AppManager模块,作为一个Service提供服务
+
+新引擎申请流程图:
+![](../../../Images/Architecture/AppManager-02.png)
+
+
+从上面的引擎生命周期流程图可知,Entrance已经不在做Engine的管理工作,engine的启动和管理都由AM控制。
+
+## 架构说明:
+
+AppManager主要包含了引擎服务和EM服务:
+引擎服务包含了所有和引擎EngineConn相关的操作,如引擎创建、引擎复用、引擎切换、引擎回收、引擎停止、引擎销毁等。
+EM服务负责所有EngineConnManager的信息管理,可以在线上对ECM进行服务管理,包括标签修改,暂停ECM服务,获取ECM实例信息,获取ECM运行的引擎信息,kill掉ECM操作,还可以根据EM Node的信息查询所有的EngineNode,也支持按用户查找,保存了EM Node的负载信息、节点健康信息、资源使用信息等。
+新的EngineConnManager和EngineConn都支持标签管理,引擎的类型也增加了离线、流式、交互式支持。
+
+引擎创建:专门负责LinkisManager服务的新建引擎功能,引擎启动模块完全负责一个新引擎的创建,包括获取ECM标签集合、资源申请、获得引擎启动命令,通知ECM新建引擎,更新引擎列表等。
+CreateEngineRequest->RPC/Rest -> MasterEventHandler ->CreateEngineService ->
+->LabelContext/EnginePlugin/RMResourceService->(RecycleEngineService)EngineNodeManager->EMNodeManager->sender.ask(EngineLaunchRequest)->EngineManager服务->EngineNodeManager->EngineLocker->Engine->EngineNodeManager->EngineFactory=>EngineService=>ServerInstance
+在创建引擎时存在和RM交互的部分,EnginePlugin需要通过Labels返回具体的资源类型,然后AM向RM发送资源请求
+
+引擎复用:为了减少引擎启动所耗费的时间和资源,引擎使用必须优先考虑复用原则,复用一般是指复用用户已经创建好的引擎,引擎复用模块负责提供可复用引擎集合,选举并锁定引擎后开始使用,或者返回没有可以复用的引擎。
+ReuseEngineRequest->RPC/Rest -> MasterEventHandler ->ReuseEngineService ->
+->LabelContext->EngineNodeManager->EngineSelector->EngineLocker->Engine->EngineNodeManager->EngineReuser->EngineService=>ServerInstance
+
+引擎切换:主要是指对已有引擎进行标签切换,例如创建引擎的时候是由Creator1创建的,现在可以通过引擎切换改成Creator2。这个时候就可以允许当前引擎接收标签为Creator2的任务了。
+SwitchEngineRequest->RPC/Rest -> MasterEventHandler ->SwitchEngineService ->LabelContext/EnginePlugin/RMResourceService->EngineNodeManager->EngineLocker->Engine->EngineNodeManager->EngineReuser->EngineService=>ServerInstance
+
+引擎管理器:引擎管理负责管理所有引擎的基本信息、元数据信息
+
+
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/LinkisManager/LabelManager.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/LinkisManager/LabelManager.md
new file mode 100644
index 0000000..7c21f08
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/LinkisManager/LabelManager.md
@@ -0,0 +1,40 @@
+## LabelManager 架构设计
+
+#### 简述
+LabelManager是Linkis中对上层应用提供标签服务的功能模组,运用标签技术管理集群资源分配、服务节点选举、用户权限匹配以及网关路由转发;包含支持各种自定义Label标签的泛化解析处理工具,以及通用的标签匹配评分器。
+
+### 整体架构示意
+
+![整体架构示意图](../../../Images/Architecture/LabelManager/label_manager_global.png)  
+
+#### 架构说明
+- LabelBuilder: 承担着标签解析的工作,从输入的标签类型、关键字或者字符数值中解析得到具体的标签实体,有默认的泛化实现类也可做自定义扩展。
+- LabelEntities: 指代标签实体集合,有且包含集群标签,配置标签,引擎标签,节点标签,路由标签,搜索标签等。
+- NodeLabelService: 实例/节点与标签的关联服务接口类,定义对两者关联关系的增删改查以及根据标签匹配实例/节点的接口方法。
+- UserLabelService: 声明用户与标签的关联操作。
+- ResourceLabelService: 声明集群资源与标签的关联操作,涉及到对组合标签的资源管理,清理或设置标签关联的资源数值。
+- NodeLabelScorer: 节点标签评分器,对应不同的标签匹配算法的实现,使用评分表示节点的标签匹配度。
+
+### 一. LabelBuilder解析流程
+以泛化标签解析类GenericLabelBuilder为例,阐明整体流程:  
+![泛化标签解析流程](../../../Images/Architecture/LabelManager/label_manager_builder.png)  
+标签解析/构建的流程概括包含几步:  
+1. 根据输入选择要构建解析的合适标签类。
+2. 根据标签类的定义信息,递归解析泛型结构,得到具体的标签值类型。
+3. 转化输入值对象到标签值类型,运用隐式转化或正反解析框架。
+4. 根据1-3的返回,实例化标签,并根据不同的标签类进行一些后置操作。
+
+### 二. NodeLabelScorer打分流程
+为了根据Linkis用户执行请求中附带的标签列表挑选合适的引擎节点,需要对符合的引擎列表做择优,量化为引擎节点的标签匹配度即评分。  
+在标签定义里,每个标签都有feature特征值,分别为CORE,SUITABLE,PRIORITIZED,OPTIONAL,每个特征值都有一个boost值,相当于权重和激励值,
+同时有些特征如CORE和SUITABLE为必须唯一特征,即在匹配过程中需做强过滤,且一个节点只能分别关联一个CORE/SUITABLE标签。  
+根据现有标签,节点,请求附带标签三者之间的关系,可以绘制出如下示意图:  
+![标签打分](../../../Images/Architecture/LabelManager/label_manager_scorer.png)  
+
+自带的默认评分逻辑过程应大体包含以下几点步骤:  
+1. 方法的输入应该为两组网络关系列表,分别是`Label -> Node` 和 `Node -> Label`, 其中`Node -> Label`关系里的Node节点必须具有请求里涉及到所有CORE以及SUITABLE特征的标签,这些节点也称为备选节点。
+2. 第一步遍历计算`Node -> Label`关系列表,遍历每个节点关联的标签Label,这一步先给标签打分,如果标签不是请求中附带的标签,打分为0,
+否则打分为: (基本分/该标签对应特征值在请求中的出现次数) * 对应特征值的激励值,其中基本分默认为1,节点的初始分为相关联的标签打分的总和;其中因为CORE/SUITABLE类型标签为必须唯一标签,出现次数恒定为1。
+3. 得到节点的初始分后,第二步遍历计算`Label -> Node`关系,由于第一步中忽略了非请求附带标签对评分的作用,但无关标签比重确实会对评分造成影响,对应这类的标签统一打上UNKNOWN的特征,同样该特征也有相对应的激励值;
+我们设定无关标签关联的备选节点占总关联节点的比重越高,对评分的影响越显著,以此可以对第一步得出的节点初始分做进一步累加。
+4. 对得到的备选节点的分数做标准差归一化,并排序。
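+
+举一个简化的数值示意(标签与数值均为假设):设请求附带标签A(CORE,激励值3)与B(OPTIONAL,激励值1),某备选节点关联了A、B以及一个无关标签C。按第2步,该节点初始分为 (1/1)*3 + (1/1)*1 = 4;按第3步,再依据C这类UNKNOWN特征标签关联备选节点的占比对初始分做累加修正;最后与其它备选节点一起做标准差归一化并排序。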
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/LinkisManager/README.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/LinkisManager/README.md
new file mode 100644
index 0000000..8670a45
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/LinkisManager/README.md
@@ -0,0 +1,74 @@
+LinkisManager架构设计
+====================
+
+LinkisManager作为Linkis的一个独立微服务,对外提供了AppManager(应用管理)、ResourceManager(资源管理)、LabelManager(标签管理)的能力,能够支持多活部署,具备高可用、易扩展的特性。
+
+## 一. 架构图
+
+![01](../../../Images/Architecture/LinkisManager/LinkisManager-01.png)
+
+### 名词解释
+- EngineConnManager(ECM): 引擎管理器,用于启动和管理引擎
+- EngineConn(EC):引擎连接器,用于连接底层计算引擎
+- ResourceManager(RM):资源管理器,用于管理节点资源
+
+## 二. 二级模块介绍
+
+### 1. 应用管理模块 linkis-application-manager
+
+AppManager用于引擎的统一调度和管理
+
+| 核心接口/类 | 主要功能 |
+|------------|--------|
+|EMInfoService | 定义了EngineConnManager信息查询、修改功能 |
+|EMRegisterService| 定义了EngineConnManager注册功能 |
+|EMEngineService | 定义了EngineConnManager对EngineConn的创建、查询、关闭功能 |
+|EngineAskEngineService | 定义了查询EngineConn的功能 |
+|EngineConnStatusCallbackService | 定义了处理EngineConn状态回调的功能 |
+|EngineCreateService | 定义了创建EngineConn的功能 |
+|EngineInfoService | 定义了EngineConn查询功能 |
+|EngineKillService | 定义了EngineConn的停止功能 |
+|EngineRecycleService | 定义了EngineConn的回收功能 |
+|EngineReuseService | 定义了EngineConn的复用功能 |
+|EngineStopService | 定义了EngineConn的自毁功能 |
+|EngineSwitchService | 定义了引擎切换功能 |
+|AMHeartbeatService | 提供了EngineConnManager和EngineConn节点心跳处理功能 |
+
+
+通过AppManager申请引擎流程如下:
+![](../../../Images/Architecture/LinkisManager/AppManager-01.png)
+
+  
+### 2. 标签管理模块 linkis-label-manager
+
+LabelManager提供标签管理和解析能力
+
+| 核心接口/类 | 主要功能 |
+|------------|--------|
+|LabelService | 提供了标签增删改查功能 |
+|ResourceLabelService | 提供了资源标签管理功能 |
+|UserLabelService | 提供了用户标签管理功能 |
+
+LabelManager架构图如下:
+![](../../../Images/Architecture/LinkisManager/LabelManager-01.png)
+
+
+
+### 3. 资源管理模块 linkis-resource-manager
+
+ResourceManager用于管理引擎和队列的所有资源分配
+
+| 核心接口/类 | 主要功能 |
+|------------|--------|
+|RequestResourceService | 提供了EngineConn资源申请功能 |
+|ResourceManagerService | 提供了EngineConn资源释放功能 |
+|LabelResourceService | 提供了标签对应资源管理功能 |
+
+
+ResourceManager架构图如下:
+
+![](../../../Images/Architecture/LinkisManager/ResourceManager-01.png)
+
+### 4. 监控模块 linkis-manager-monitor
+
+Monitor提供了节点状态监控的功能
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/LinkisManager/ResourceManager.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/LinkisManager/ResourceManager.md
new file mode 100644
index 0000000..1c7bb99
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/LinkisManager/ResourceManager.md
@@ -0,0 +1,145 @@
+ResourceManager(简称RM),是Linkis的计算资源管理模块,所有的EngineConn(简称EC)、EngineConnManager(简称ECM),甚至包括Yarn在内的外部资源,都由RM负责统筹管理。RM能够基于用户、ECM或其它通过复杂标签定义的粒度对资源进行管控。
+
+### RM在Linkis中的作用
+![01](../../../Images/Architecture/rm-01.png)
+![02](../../../Images/Architecture/rm-02.png)
+RM作为LinkisManager的一部分,主要作用为:维护ECM上报的可用资源信息,处理ECM提出的资源申请,记录成功申请后,EC在生命周期内实时上报的实际资源使用信息,并提供查询当前资源使用情况的相关接口。
+
+Linkis中,与RM产生交互的其它服务主要有:
+
+1.  引擎管理器,简称ECM:处理启动引擎连接器请求的微服务。ECM作为资源的提供者,负责向RM注册资源(register)和下线资源(unregister)。同时,ECM作为引擎的管理者,负责代替准备启动的新引擎连接器向RM申请资源。每一个ECM实例,均在RM中有一条对应的资源记录,包含它提供的总资源、保护资源等信息,并动态更新已使用资源。
+![03](../../../Images/Architecture/rm-03.png)
+2.  引擎连接器,简称EC,是用户作业的实际执行单元。同时,EC作为资源的实际使用者,负责向RM上报实际使用资源。每一个EC,均在RM中有一条对应的资源记录:在启动过程中,体现为锁定资源;在运行过程中,体现为已使用资源;在被结束之后,该资源记录随之被删除。
+![04](../../../Images/Architecture/rm-04.png)
+### 资源的类型与格式
+![05](../../../Images/Architecture/rm-05.png)
+如上图所示,所有的资源类均实现一个顶层的Resource接口,该接口定义了所有资源类均需要支持的计算和比较的方法,并进行相应的数学运算符的重载,使得资源之间能够像数字一样直接被计算和比较。
+
+| 运算符 | 对应方法    | 运算符 | 对应方法    |
+|--------|-------------|--------|-------------|
+| \+     | add         | \>     | moreThan    |
+| \-     | minus       | \<     | lessThan    |
+| \*     | multiply    | =      | equals      |
+| /      | divide      | \>=    | notLessThan |
+| \<=    | notMoreThan |        |             |
+
+当前支持的资源类型如下表所示,所有的资源都有对应的json序列化与反序列化方法,能够通过json格式进行存储和在网络间传递:
+
+| 资源类型              | 描述                                                   |
+|-----------------------|--------------------------------------------------------|
+| MemoryResource        | 内存资源                                               |
+| CPUResource           | CPU资源                                                |
+| LoadResource          | 同时具备内存与CPU的资源                                |
+| YarnResource          | Yarn队列资源(队列,队列内存,队列CPU,队列实例数)    |
+| LoadInstanceResource  | 服务器资源(内存,CPU,实例数)                        |
+| DriverAndYarnResource | 驱动器与执行器资源(同时具备服务器资源,Yarn队列资源) |
+| SpecialResource       | 其它自定义资源                                         |
+
+### 可用资源管理
+
+RM中的可用资源,主要有两个来源:ECM上报的可用资源,以及Configuration模块中根据标签配置的资源限制。  
+**ECM资源上报**:
+
+1.  ECM启动时,会广播ECM注册的消息,RM接收到消息后,根据消息中包含的内容进行资源注册,资源相关的内容包括:
+
+    1.  总资源:该ECM能够提供的资源总数。
+
+    2.  保护资源:当剩余资源小于该资源时,不再允许继续分配资源。
+
+    3.  资源类型:如LoadResource,DriverAndYarnResource等类型名称。
+
+    4.  实例信息:机器名加端口名。
+
+2.  RM在收到资源注册请求后,在资源表中新增一条记录,内容与接口的参数信息一致,并通过实例信息找到代表该ECM的标签,在资源、标签关联表中新增一条关联记录。
+
+3.  ECM在关闭时,会广播ECM关闭的消息,RM接收到消息后,根据消息中的ECM实例信息来进行资源的下线,即删除该ECM实例标签对应的资源和关联记录。
+
+**Configuration模块标签资源配置**:
+
+用户能够在Configuration模块中,根据不同的标签组合进行资源数量限制的配置,如限制User/Creator/EngineType组合的最大可用资源。
+
+RM通过RPC消息,以组合标签为查询条件,向Configuration模块查询资源信息,并转换成Resource对象参与后续的比较和记录。
+
+
+### 资源使用管理
+
+**接收用户的资源申请。**
+
+1.  LinkisManager在收到启动EngineConn的请求时,会调用RM的资源申请接口,进行资源申请。资源申请接口接受一个可选的时间参数,当申请资源的等待时间超出该时间参数的限制时,该资源申请将自动作为失败处理。
+
+**判断是否有足够的资源**
+
+即为判断剩余可用资源是否大于申请资源,如果大于或等于,则资源充足;否则资源不充足。
+
+1.  RM预处理资源申请中附带的标签信息,根据规则将原始的标签进行过滤、组合和转换等操作(如将User/Creator标签和EngineType标签进行组合),这使得后续的资源判断的粒度更加灵活多变。
+
+2.  在每个转换后的标签上逐一加锁,使得它们所对应的资源记录在资源申请的处理期间保持不变。
+
+3.  根据每个标签:
+
+    1.  通过Persistence模块从数据库中查询对应的资源记录,如果该记录包含剩余可用资源,则直接用来比较。
+
+    2.  如果没有直接的剩余可用资源记录,则通过[剩余可用资源=最大可用资源-已用资源-已锁定资源-保护资源]公式进行计算得出。
+
+    3.  如果没有最大可用资源记录,则请求Configuration模块,看是否有配置的资源信息,如果有则使用到公式中进行计算,如果没有则跳过针对这个标签的资源判断。
+
+    4.  如果没有任何资源记录,则跳过针对这个标签的资源判断。
+
+4.  只要有一个标签被判断为资源不充足,则资源申请失败,对每个标签逐一解锁。
+
+5.  只有所有标签都判断为资源充足的情况下,才成功通过资源申请,进入下一步。
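+
+针对上述剩余可用资源计算公式的一个数值示意(数值仅作说明):若某标签的最大可用资源为内存10G,已用4G、已锁定2G、保护资源1G,则剩余可用资源为 10-4-2-1=3G;此时申请2G判定为资源充足,申请4G则判定为不足。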
+
+**锁定申请通过的资源**
+
+1.  根据申请通过的资源数量,在资源表中生成一条新的记录,并与每个标签进行关联。
+
+2.  如果对应的标签有剩余可用资源记录,则扣减对应的数量。
+
+3.  生成一个定时任务,在一定时间后检查这批锁定的资源是否被实际使用,如果超时未使用,则强制回收。
+
+4.  对每个标签进行解锁。
+
+**上报实际使用资源**
+
+1.  EngineConn启动后,广播资源使用消息。RM收到消息后,检查该EngineConn对应的标签是否有锁定资源记录,如果没有,则报错。
+
+2.  如果有锁定资源,则对该EngineConn有关联的所有标签进行加锁。
+
+3.  对每个标签,将对应的锁定资源记录转换为已使用资源记录。
+
+4.  解锁所有标签。
+
+**释放实际使用资源**
+
+1.  EngineConn结束生命周期后,广播资源回收消息。RM收到消息后,检查该EngineConn对应的标签是否有已使用资源记录。
+
+2.  如果有,则对该EngineConn有关联的所有标签进行加锁。
+
+3.  对每个标签,在已使用资源记录中减去对应的数量。
+
+4.  如果对应的标签有剩余可用资源记录,则增加对应的数量。
+
+5.  对每个标签解锁
+
+
+### 外部资源管理
+
+在RM中,为了更加灵活并有拓展性对资源进行分类,支持多集群的资源管控的同时,使得接入新的外部资源更加便利,在设计上进行了以下几点的考虑:
+
+1.  通过标签来对资源进行统一管理。资源注册后,与标签进行关联,使得资源的属性能够无限拓展。同时,资源申请也都带上标签,实现灵活的匹配。
+
+2.  将集群抽象成一个或多个标签,并在外部资源管理模块中维护每个集群标签对应的环境信息,实现动态的对接。
+
+3.  抽象出通用的外部资源管理模块,如需接入新的外部资源类型,只要实现固定的接口,即可将不同类型的资源信息转换为RM中的Resource实体,实现统一管理。
+![06](../../../Images/Architecture/rm-06.png)
+RM的其它模块,通过ExternalResourceService提供的接口来进行外部资源信息的获取。
+
+而ExternalResourceService通过资源类型和标签来获取外部资源的信息:
+
+1.  所有外部资源的类型、标签、配置等属性(如集群名称、Yarn的web url、Hadoop版本等信息),都维护在linkis\_external\_resource\_provider表中。
+
+2.  针对每种资源类型,均有一个ExternalResourceProviderParser接口的实现,将外部资源的属性进行解析,将能够匹配到Label的信息转换成对应的Label,将能够作为参数去请求资源接口的都转换成params。最后构建成一个能够作为外部资源信息查询依据的ExternalResourceProvider实例。
+
+3.  根据ExternalResourceService方法的参数中的资源类型和标签信息,找到匹配的ExternalResourceProvider,根据其中的信息生成ExternalResourceRequest,正式调用外部资源提供的API,发起资源信息请求。
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/README.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/README.md
new file mode 100644
index 0000000..76ab242
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/README.md
@@ -0,0 +1,66 @@
+## **背景**
+
+**Linkis0.X的架构主要存在以下问题**
+
+1.核心处理流程和层级模块边界模糊
+
+-   Entrance 和 EngineManager 功能边界模糊
+
+-   任务提交执行主流程不够清晰
+
+-   扩展新引擎较麻烦,需要实现多个模块的代码
+
+-   只支持计算请求场景,存储请求场景和常驻服务模式(Cluster)难以支持
+
+2.更丰富强大计算治理功能需求
+
+-   计算任务管理策略支持度不够
+
+-   标签能力不够强大,制约计算策略和资源管理
+
+Linkis1.0计算治理服务的新架构可以很好的解决这些问题。
+
+## **架构图**
+![](../../Images/Architecture/linkis-computation-gov-01.png)
+
+**作业流程优化:**
+Linkis1.0将优化Job的整体执行流程,从提交 -> 准备 -> 执行三个阶段,来全面升级Linkis的Job执行架构,如下图所示:
+
+![](../../Images/Architecture/linkis-computation-gov-02.png)
+
+## **架构说明**
+
+### 1、Entrance
+
+ Entrance作为计算类型任务的提交入口,提供任务的接收、调度和Job信息的转发能力,是从Linkis0.X的Entrance拆分出来的原生能力;
+ 
+ [进入Entrance架构设计](./Entrance/Entrance.md)
+
+### 2、Orchestrator
+
+ Orchestrator 作为准备阶段的入口,从 Linkis0.X 的 Entrance 继承了解析Job、申请Engine和提交执行的能力;同时,Orchestrator将提供强大的编排和计算策略能力,满足多活、主备、事务、重放、限流、异构和混算等多种应用场景的需求。
+ 
+ [进入Orchestrator架构设计](../Orchestrator/README.md)
+
+### 3、LinkisManager
+
+ LinkisManager作为Linkis的管理大脑,主要由AppManager、ResourceManager、LabelManager和EngineConnPlugin组成。
+ 
+ 1. ResourceManager 不仅具备 Linkis0.X 对 Yarn 和 Linkis EngineManager的资源管理能力,还将提供基于标签的多级资源分配和回收能力,让ResourceManager具备跨集群、跨计算资源类型的全资源管理能力;
+ 2. AppManager 将统筹管理所有的 EngineConnManager 和 EngineConn,EngineConn 的申请、复用、创建、切换、销毁等生命周期全交予 AppManager 进行管理;而 LabelManager 将基于多级组合标签,提供跨IDC、跨集群的 EngineConn 和 EngineConnManager 路由和管控能力;
+ 3. EngineConnPlugin 主要用于降低新计算存储的接入成本,真正做到让用户只需要实现一个类,就能接入一个全新的计算存储引擎。
+
+ [进入LinkisManager架构设计](./LinkisManager/README.md)
+
+### 4、EngineConnManager
+
+ EngineConnManager (简称ECM)是 Linkis0.X EngineManager 的精简升级版。Linkis1.0下的ECM去除了引擎的申请能力,整个微服务完全无状态,将聚焦于支持各类 EngineConn 的启动和销毁。
+ 
+ [进入EngineConnManager架构设计](./EngineConnManager/README.md)
+
+### 5、EngineConn
+
+EngineConn 是 Linkis0.X Engine 的优化升级版本,将提供 EngineConn 和 Executor 两大模块,其中 EngineConn 用于连接底层的计算存储引擎,提供一个打通了底层各计算存储引擎的 Session 会话;Executor 则基于这个 Session 会话,提供交互式计算、流式计算、离线计算、数据存储的全栈计算能力支持。
+
+[进入EngineConn架构设计](./EngineConn/README.md)
diff --git "a/Linkis-Doc-master/zh_CN/Architecture_Documents/EngineConn\346\226\260\345\242\236\346\265\201\347\250\213.md" "b/Linkis-Doc-master/zh_CN/Architecture_Documents/EngineConn\346\226\260\345\242\236\346\265\201\347\250\213.md"
new file mode 100644
index 0000000..7be886a
--- /dev/null
+++ "b/Linkis-Doc-master/zh_CN/Architecture_Documents/EngineConn\346\226\260\345\242\236\346\265\201\347\250\213.md"
@@ -0,0 +1,111 @@
+# EngineConn Creation Flow
+
+Creating a new EngineConn is one of the core flows of the task preparation phase in Linkis computation governance. It covers the whole process in which the Client side (Entrance or a user client) sends LinkisManager a request for a new EngineConn, LinkisManager asks an EngineConnManager to start an EngineConn on demand and according to label rules, and, once the EngineConn has started, the usable EngineConn is returned to the Client.
+
+As shown in the figure below, let us walk through the whole flow in detail:
+
+![EngineConn creation flow](../Images/Architecture/EngineConn新增流程/EngineConn新增流程.png)
+
+## 1. LinkisManager receives the client request
+
+**Glossary**:
+
+- LinkisManager: the management hub of Linkis computation governance. Its main responsibilities are:
+  1. Based on multi-level combined labels, provide users with usable EngineConns after complex routing, resource control and load balancing;
+  
+  2. Provide full lifecycle management of ECs and ECMs;
+  
+  3. Provide users with label-based, multi-Yarn-cluster resource management. It is mainly divided into three modules — AppManager (application manager), ResourceManager (resource manager) and LabelManager (label manager) — supports multi-active deployment, and offers high availability and easy extensibility.
+
+&nbsp;&nbsp;&nbsp;&nbsp;After the AM module receives the Client's request for a new EngineConn, it first validates the request parameters; next it selects the most suitable EngineConnManager (ECM) via complex rules, to be used for starting the EngineConn later; then it applies to RM for the resources needed to start the EngineConn; finally it asks the ECM to create the EngineConn.
+
+The four steps are described in detail below.
+
+### 1. Request parameter validation
+
+&nbsp;&nbsp;&nbsp;&nbsp;After receiving the engine creation request, the AM module first checks the parameters: it verifies the permissions of the requesting user and the creating user, and then checks the Labels carried by the request. Since Labels are used in AM's subsequent creation flow to look up the ECM and to record resource information, the required Labels must be present. At this stage the Labels that must be carried are UserCreatorLabel (e.g. hadoop-IDE) and EngineTypeLabel (e.g. spark-2.4.3).
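+
+A minimal sketch of this mandatory-label check is shown below; the class and the flat label keys are illustrative assumptions, not the actual AM implementation:
+
+```java
+import java.util.Map;
+
+/** Hypothetical check: UserCreatorLabel and EngineTypeLabel must be present and well-formed. */
+class LabelChecker {
+    static void checkRequiredLabels(Map<String, String> labels) {
+        String userCreator = labels.get("userCreator"); // e.g. "hadoop-IDE"
+        String engineType  = labels.get("engineType");  // e.g. "spark-2.4.3"
+        if (userCreator == null || !userCreator.contains("-"))
+            throw new IllegalArgumentException("UserCreatorLabel missing or malformed: " + userCreator);
+        if (engineType == null || !engineType.contains("-"))
+            throw new IllegalArgumentException("EngineTypeLabel missing or malformed: " + engineType);
+    }
+}
+```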
+
+### 2. EngineConnManager (ECM) selection
+
+&nbsp;&nbsp;&nbsp;&nbsp;ECM selection chooses, via the Labels passed by the client, a suitable ECM service to start the EngineConn. In this step, the LabelManager first searches the registered ECMs using the Labels passed by the client and returns them ordered by label match score. After obtaining the list of registered ECMs, rules are applied to select among them; rules for availability checking, remaining resources and machine load have been implemented so far. After rule selection, the ECM with the best label match, the most idle resources and the lowest load is returned.
+
+### 3. EngineConn resource application
+
+1. After the ECM is assigned, AM calls the EngineConnPluginServer service to ask how many resources this engine creation request will use. It wraps a resource request — mainly containing the Labels, the EngineConn startup parameters passed by the Client, and the user configuration parameters obtained from the Configuration module — and calls the ECP service via RPC to obtain the resource information.
+
+2. After receiving the resource request, the EngineConnPluginServer service finds the corresponding engine label from the labels passed in, and selects the EngineConnPlugin of the corresponding engine through the engine label. The resource factory of that EngineConnPlugin then calculates, from the engine startup parameters passed by the client, the resources needed to apply for the new EngineConn, and returns them to LinkisManager.
+   
+   **Glossary:**
+- EngineConnPlugin: the interface that must be implemented for Linkis to connect a new computation storage engine. It contains the interface capabilities this kind of EngineConn must provide during startup, including the EngineConn resource factory, the EngineConn launch command builder and the EngineConn engine connector. For a concrete implementation, see the Spark engine implementation class: [SparkEngineConnPlugin](https://github.com/WeBankFinTech/Linkis/blob/master/linkis-engineconn-plugins/engineconn-plugins/spark/src/main/scala/com/webank/wedatasphere/linkis/engineplugin/spark/SparkEngineConnPlugin.scala).
+
+- EngineConnPluginServer: the microservice that loads all EngineConnPlugins and externally provides the abilities to generate the resources an EngineConn needs and to generate an EngineConn's launch command.
+
+- EngineConn resource factory (EngineConnResourceFactory): calculates, from the parameters passed in, the total resources needed when this EngineConn starts.
+
+- EngineConn launch command builder (EngineConnLaunchBuilder): generates, from the parameters passed in, the launch command of this EngineConn, to be given to the ECM for starting the engine.
+3. After obtaining the engine resources, AM calls the RM service to apply for them. The RM service makes a resource judgment using the Labels, the ECM and the requested resources: it first checks whether the resources for the client's corresponding Label are sufficient, and then whether the resources of the ECM service are sufficient; if so, the resource application is approved and the resources are debited against the corresponding Label.
+
+### 4. Requesting the ECM to create the engine
+
+1. After the engine resource application is completed, AM wraps an engine start request and sends it via RPC to the corresponding ECM to start the service, obtaining the EngineConn instance object;
+2. AM then uses the information reported by the EngineConn to judge whether the EngineConn has started successfully and become available; if so, the result is returned and the flow of creating a new engine ends.
+
+## 2. The ECM starts the EngineConn
+
+Glossary:
+
+- EngineConnManager (ECM): the manager of EngineConns; it provides engine lifecycle management and reports load information and its own health status to RM.
+
+- EngineConnBuildRequest: the engine start command passed by LinkisManager to the ECM, which wraps all label information, required resources and some parameter configuration of the engine.
+
+- EngineConnLaunchRequest: contains the BML materials, environment variables, required local ECM environment variables, launch command and other information needed to start an EngineConn, from which the ECM can build a complete EngineConn launch script.
+
+After the ECM receives the EngineConnBuildRequest command from LinkisManager, it starts the EngineConn in three steps: 1. request EngineConnPluginServer to obtain the EngineConnLaunchRequest wrapped by EngineConnPluginServer; 2. parse the EngineConnLaunchRequest and wrap it into an EngineConn launch script; 3. execute the launch script to start the EngineConn.
+
+### 2.1 EngineConnPluginServer wraps the EngineConnLaunchRequest
+
+Using the label information of the EngineConnBuildRequest, the actual EngineConn type and version to start are determined; the EngineConnPlugin of that EngineConn type is fetched from the memory of EngineConnPluginServer, and its EngineConnLaunchBuilder converts the EngineConnBuildRequest into an EngineConnLaunchRequest.
+
+### 2.2 Wrapping the EngineConn launch script
+
+After the ECM obtains the EngineConnLaunchRequest, it downloads the BML materials in the EngineConnLaunchRequest to the local machine and checks whether the local environment variables required by the EngineConnLaunchRequest exist. After validation passes, the EngineConnLaunchRequest is wrapped into an EngineConn launch script.
+
+### 2.3 Executing the launch script
+
+Currently the ECM only supports Bash commands on Unix systems, i.e. only Linux systems can execute the launch script.
+
+Before starting, the sudo command is used to switch to the corresponding requesting user to execute the script, ensuring that the startup user (i.e. the JVM user) is the requesting user on the Client side.
+
+After the launch script is executed, the ECM monitors its execution status and logs in real time. Once the exit status is non-zero, it immediately reports an EngineConn startup failure to LinkisManager and the whole flow ends; otherwise it keeps monitoring the logs and status of the launch script until the script finishes.
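+
+The launch-and-watch behaviour can be sketched as follows; the class is hypothetical and merely illustrates running the generated script via sudo as the requesting user and reporting a non-zero exit code:
+
+```java
+import java.io.BufferedReader;
+import java.io.InputStreamReader;
+
+/** Hypothetical sketch, not the actual ECM code. */
+class LaunchScriptRunner {
+    /** Runs the generated script as the requesting user and reports a non-zero exit code. */
+    static int runAs(String user, String scriptPath) throws Exception {
+        Process p = new ProcessBuilder("sudo", "-u", user, "bash", scriptPath)
+                .redirectErrorStream(true)
+                .start();
+        try (BufferedReader out = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
+            String line;
+            while ((line = out.readLine()) != null) {
+                System.out.println("[engineconn] " + line); // stream launch logs in real time
+            }
+        }
+        int exit = p.waitFor();
+        if (exit != 0) {
+            // In the real flow this would be reported to LinkisManager as a launch failure.
+            System.err.println("EngineConn launch failed, exit code " + exit);
+        }
+        return exit;
+    }
+}
+```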
+
+## 3. EngineConn initialization
+
+After the ECM executes the EngineConn launch script, the EngineConn microservice formally starts.
+
+Glossary:
+
+- EngineConn microservice: the actual microservice that contains one EngineConn and one or more Executors and provides computing power for computation tasks. When we speak of adding an EngineConn, we actually mean adding an EngineConn microservice.
+
+- EngineConn: the engine connector, the actual connection unit to the underlying computation storage engine, containing the session information with the actual engine. Its difference from Executor is that EngineConn only acts as a connection, a client, and does not actually perform computation. For SparkEngineConn, for example, the session information is a SparkSession.
+
+- Executor: the executor; as the real executor of computation storage scenarios, it is the actual computation storage logic execution unit and the concrete abstraction of the various capabilities of EngineConn, providing many different architectural capabilities such as interactive execution, subscription execution and responsive execution.
+
+The initialization of an EngineConn microservice generally has three phases:
+
+1. Initialize the engine-specific EngineConn. First, from the command-line arguments of the Java main method, wrap an EngineCreationContext containing the relevant label information, startup information and parameter information; initialize the EngineConn with the EngineCreationContext to establish the connection between the EngineConn and the underlying Engine. For example, SparkEngineConn initializes a SparkSession at this phase, establishing a connection with a Spark application.
+
+2. Initialize the Executors. After the EngineConn is initialized, the corresponding Executors are initialized according to the actual usage scenario, providing service capabilities for subsequent use. For example, a SparkEngineConn in the interactive computation scenario initializes a series of Executors that can submit and execute SQL, PySpark and Scala code, supporting the Client submitting SQL, PySpark and Scala code to this SparkEngineConn.
+
+3. Report heartbeats to LinkisManager periodically and wait for the EngineConn to finish and exit. When the underlying engine of the EngineConn fails, or the maximum idle time is exceeded, or the Executor finishes execution, or the user manually kills it, the EngineConn automatically finishes and exits.
+
+----
+
+At this point, the EngineConn creation flow is basically finished. Finally, let us summarize it:
+
+- The client sends LinkisManager a request for a new EngineConn;
+
+- LinkisManager validates the parameters, first selects a suitable ECM according to the labels, then confirms the resources needed for this new EngineConn according to the user's request, applies for resources from LinkisManager's RM module, and, once approved, requires the ECM to start a new EngineConn as specified;
+
+- The ECM first requests EngineConnPluginServer for an EngineConnLaunchRequest containing the BML materials, environment variables, required local ECM environment variables and launch command needed to start an EngineConn, then wraps the EngineConn launch script, and finally executes the script to start the EngineConn;
+
+- The EngineConn initializes the engine-specific EngineConn and then initializes the corresponding Executors according to the actual usage scenario, providing service capabilities for subsequent use. Finally it reports heartbeats to LinkisManager periodically and waits for a normal exit or termination by the user.
diff --git "a/Linkis-Doc-master/zh_CN/Architecture_Documents/Job\346\217\220\344\272\244\345\207\206\345\244\207\346\211\247\350\241\214\346\265\201\347\250\213.md" "b/Linkis-Doc-master/zh_CN/Architecture_Documents/Job\346\217\220\344\272\244\345\207\206\345\244\207\346\211\247\350\241\214\346\265\201\347\250\213.md"
new file mode 100644
index 0000000..a166df4
--- /dev/null
+++ "b/Linkis-Doc-master/zh_CN/Architecture_Documents/Job\346\217\220\344\272\244\345\207\206\345\244\207\346\211\247\350\241\214\346\265\201\347\250\213.md"
@@ -0,0 +1,165 @@
+# Job Submission, Preparation and Execution Flow
+
+The submission and execution of a computation task (Job) is the core capability provided by Linkis. It runs through almost all modules of the Linkis computation governance architecture and occupies a central position in Linkis.
+
+We divide the whole flow of a user's computation task, from client submission to the final return of results, into three phases: Submission -> Preparation -> Execution, as shown below:
+
+![Overall flow chart of a computation task](../Images/Architecture/Job提交准备执行流程/计算任务整体流程图.png)
+
+Where:
+
+- Entrance, as the entry of the submission phase, provides task reception, scheduling and Job information forwarding; it is the unified entry of all computation-type tasks and forwards computation tasks to Orchestrator for orchestration and execution;
+
+- Orchestrator, as the entry of the preparation phase, mainly provides Job parsing, orchestration and execution capabilities.
+
+- Linkis Manager: the management hub of computation governance capabilities. Its main responsibilities are:
+  
+  1. ResourceManager: not only manages resources for Yarn and the Linkis EngineConnManager, but also provides label-based multi-level resource allocation and reclamation, giving it full resource management capabilities across clusters and across computation resource types;
+  
+  2. AppManager: coordinates and manages all EngineConnManagers and EngineConns; the whole EngineConn lifecycle — application, reuse, creation, switching and destruction — is handed over to AppManager;
+  
+  3. LabelManager: based on multi-level combined labels, provides label support for cross-IDC and cross-cluster routing and governance of EngineConns and EngineConnManagers;
+  
+  4. EngineConnPluginServer: externally provides the abilities to generate the resources needed to start an EngineConn and to generate an EngineConn's launch command.
+
+- EngineConnManager: the manager of EngineConns; it provides engine lifecycle management and reports load information and its own health status to RM.
+
+- EngineConn: the actual connector between Linkis and the underlying computation storage engines. All computation storage tasks of users are eventually submitted by EngineConn to the underlying engines. According to different usage scenarios, EngineConn provides full-stack computation framework support for interactive computation, streaming computation, offline computation and data storage tasks.
+
+Next, we describe the three phases of a computation task — Submission -> Preparation -> Execution — in detail.
+
+## 1. Submission phase
+
+The submission phase is mainly the interaction Client -> Linkis Gateway -> Entrance. Its flow is as follows:
+
+![Submission phase flow chart](../Images/Architecture/Job提交准备执行流程/提交阶段流程图.png)
+
+1. First, the Client (e.g. the front end or a client) initiates a Job request. A simplified Job request looks like this (for the concrete usage of Linkis, see [How to use Linkis](https://github.com/WeBankFinTech/Linkis-Doc/blob/master/zh_CN/User_Manual/How_To_Use_Linkis.md)):
+
+```
+POST /api/rest_j/v1/entrance/submit
+```
+
+```json
+{
+    "executionContent": {"code": "show tables", "runType": "sql"},
+    "params": {"variable": {}, "configuration": {}},  // optional
+    "source": {"scriptPath": "file:///1.hql"}, // optional, only used to record the source of the code
+    "labels": {
+        "engineType": "spark-2.4.3",  // specify the engine
+        "userCreator": "johnnwnag-IDE"  // specify the submitting user and submitting system
+    }
+}
+```
+
+2. After receiving the request, Linkis-Gateway determines the name of the microservice to route to from the serviceName in the URI ``/api/rest_j/v1/${serviceName}/.+``. Here Linkis-Gateway parses the service name as entrance and forwards the Job request to the Entrance microservice. Note that if the user specified a routing label, an Entrance microservice instance carrying the corresponding label will be selected for forwarding rather than a random one. (See the URI-parsing sketch after this list.)
+
+3. After Entrance receives the Job request, it first briefly validates the legality of the request, then persists the Job information via an RPC call to JobHistory, wraps the Job request into a computation task, puts it into the scheduling queue, and waits for it to be consumed by a consumer thread.
+
+4. The scheduling queue opens one consumer queue and one consumer thread for each group. The consumer queue stores the preliminarily wrapped user computation tasks; the consumer thread keeps taking computation tasks from the consumer queue for consumption in FIFO order. The current default grouping is Creator + User (i.e. submitting system + user); therefore, even for the same user, tasks submitted by different systems use completely different consumer queues and consumer threads, fully isolated from each other. (Tip: users can modify the grouping algorithm as needed; see the grouping sketch after this list.)
+
+5. After the consumer thread takes out a computation task, it submits it to Orchestrator, which formally enters the preparation phase.
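+
+The routing rule in step 2 can be illustrated with a short sketch. This is not the actual Gateway code (which builds on SpringCloudGateway); it only shows how a service name could be pulled out of the documented URI pattern:
+
+```java
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
+
+/** Minimal sketch: extract the target service name from the request URI. */
+class ServiceNameResolver {
+    private static final Pattern REST_URI = Pattern.compile("/api/rest_j/v1/([^/]+)/.+");
+
+    static String resolve(String uri) {
+        Matcher m = REST_URI.matcher(uri);
+        if (!m.matches()) throw new IllegalArgumentException("not a Linkis REST uri: " + uri);
+        return m.group(1);   // e.g. "entrance" for /api/rest_j/v1/entrance/submit
+    }
+}
+```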
+
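+The grouping behaviour described in step 4 can be sketched as follows. The class and method names are illustrative, not the real Entrance scheduler API; the sketch only demonstrates per-group FIFO isolation with one consumer thread per Creator + User group:
+
+```java
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+
+/** Minimal sketch of per-group FIFO consumption (grouping key = creator + user). */
+class GroupedScheduler {
+    private final Map<String, ExecutorService> groups = new ConcurrentHashMap<>();
+
+    /** Each creator-user group gets its own queue and single consumer thread. */
+    void submit(String creator, String user, Runnable task) {
+        String groupKey = creator + "_" + user;   // e.g. "IDE_hadoop"
+        groups.computeIfAbsent(groupKey, k -> Executors.newSingleThreadExecutor())
+              .execute(task);                     // FIFO within the group, isolated across groups
+    }
+}
+```
+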
+## 2. Preparation phase
+
+The preparation phase has two main flows: one is to apply to LinkisManager for a usable EngineConn for the upcoming task submission and execution; the other is Orchestrator orchestrating the computation task submitted by Entrance, converting a user's computation request, via orchestration, into a physical execution tree, which is handed to the execution phase (phase three) for actual submission and execution.
+
+#### 2.1 Applying to LinkisManager for a usable EngineConn
+
+If a reusable EngineConn exists for this user in LinkisManager, that EngineConn is locked directly and returned to Orchestrator, and the whole application flow ends.
+
+What counts as a reusable EngineConn? One that matches all label requirements of the computation task and whose own health status is Healthy (low load and actual EngineConn status Idle). All EngineConns meeting the conditions are then ranked and selected by rules, and finally the best one is locked.
+
+If no reusable EngineConn exists for this user, the EngineConn creation flow is triggered; for that flow, see: [EngineConn creation flow](EngineConn新增流程.md).
+
+#### 2.2 Computation task orchestration
+
+Orchestrator is mainly responsible for orchestrating a computation task (JobReq) into a physical execution tree (PhysicalTree) that can actually be executed, and for providing the execution capability of the Physical tree.
+
+Here we focus on Orchestrator's computation task orchestration capability, as shown below:
+
+![Orchestration flow chart](../Images/Architecture/Job提交准备执行流程/编排流程图.png)
+
+The main flow is as follows:
+
+- Converter: converts the JobReq (task request) submitted by the user into Orchestrator's ASTJob. This step performs parameter checking and information supplementation on the submitted computation task, such as variable substitution;
+
+- Parser: parses the ASTJob, splitting it into an AST tree composed of ASTJob and ASTStage nodes.
+
+- Validator: validates and supplements the ASTJob and ASTStage, e.g. code checks and supplementing required Label information.
+
+- Planner: converts an AST tree into a Logical tree. The Logical tree is now composed of LogicalTasks and contains all the execution logic of the whole computation task.
+
+- Optimizer: converts a Logical tree into a Physical tree and optimizes the Physical tree.
+
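+As a minimal illustration of this staged pipeline, the sketch below chains the five phases over a plain string job; the real Orchestrator uses distinct AST/Logical/Physical types and is written in Scala, so all names and stage bodies here are hypothetical toys:
+
+```java
+/** Toy sketch of the convert -> parse -> validate -> plan -> optimize chain. */
+class MiniOrchestrator {
+    static String orchestrate(String jobReq) {
+        String ast      = convert(jobReq);   // Converter: param check, variable substitution
+        String astTree  = parse(ast);        // Parser: split into Job + Stages
+        String checked  = validate(astTree); // Validator: code/label checks
+        String logical  = plan(checked);     // Planner: AST -> Logical tree of LogicalTasks
+        String physical = optimize(logical); // Optimizer: Logical -> Physical tree
+        return physical;
+    }
+    private static String convert(String s)  { return s.trim(); }
+    private static String parse(String s)    { return "[job " + s + "]"; }
+    private static String validate(String s) { if (s.isEmpty()) throw new IllegalArgumentException("empty job"); return s; }
+    private static String plan(String s)     { return "logical(" + s + ")"; }
+    private static String optimize(String s) { return "physical(" + s + ")"; }
+}
+```
+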
+In a Physical tree, many of the nodes carry computation strategy logic; only the ExecTask in the middle actually wraps the execution logic that submits the user's computation task to an EngineConn. As shown below:
+
+![Physical tree](../Images/Architecture/Job提交准备执行流程/Physical树.png)
+
+Under different computation strategies, the execution logic wrapped by the JobExecTask and StageExecTask in the Physical tree differs.
+
+For example, under the multi-active computation strategy, the execution logic that submits one user computation task to the EngineConns of different clusters is wrapped in two ExecTasks, while the related multi-active strategy logic resides in the parent node of the two ExecTasks, StageExecTask(End).
+
+Take the multi-read scenario under the multi-active strategy as an example.
+
+In multi-read, only one ExecTask is actually required to return a result for the Physical tree to be marked as successful and return results. But the Physical tree can only execute in dependency order and cannot terminate the execution of a particular node; moreover, once a node is cancelled or fails, the whole Physical tree would in fact be marked as failed. So StageExecTask(End) must do some special handling to ensure that it can both cancel the other ExecTask and keep passing up the result set produced by the successful ExecTask, letting the Physical tree continue executing upwards. This is the computation strategy execution logic that StageExecTask represents.
+
+The orchestration flow of Linkis Orchestrator resembles many SQL parsing engines (such as the SQL parsers of Spark and Hive), but in fact Linkis Orchestrator implements parsing and orchestration for users' different computation governance needs in the computation governance domain, whereas a SQL parsing engine parses and orchestrates the SQL language. A brief distinction:
+
+1. What Linkis Orchestrator mainly wants to solve is the orchestration needs that computation strategies impose on different computation tasks. For example: if the user wants multi-active capability, Orchestrator orchestrates, based on the "multi-active" computation strategy, a Physical tree for one submitted computation task, so that the task is submitted to multiple clusters for execution; and when building the whole Physical tree it has already fully considered all possible exception scenarios, which are all reflected in the tree.
+
+2. The orchestration capability of Linkis Orchestrator is independent of programming language. In theory, for any engine Linkis has connected, all programming languages it supports can be orchestrated; a SQL parsing engine only cares about SQL parsing and execution, and is only responsible for parsing one SQL statement into an executable Physical tree and computing the final result.
+
+3. Linkis Orchestrator also has SQL parsing capability, but SQL parsing is just one parsing implementation of the Orchestrator Parser for the SQL language. The Parser of Linkis Orchestrator also considers introducing Apache Calcite to parse SQL, supporting splitting one user SQL that spans multiple computation engines (which must be engines Linkis has connected) into multiple sub-SQLs, submitting them to the corresponding computation engines during the execution phase, and finally choosing a suitable computation engine for aggregate computation.
+
+For a detailed introduction to Orchestrator's orchestration, see: [Orchestrator architecture design](https://github.com/WeBankFinTech/Linkis-Doc/blob/master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_architecture_doc.md)
+
+After the parsing and orchestration of Linkis Orchestrator, the user's computation task has been converted into an executable Physical tree. Orchestrator submits this Physical tree to its Execution module and enters the final execution phase.
+
+## 3. Execution phase
+
+The execution phase consists of the following two steps, which are the last two capabilities provided by Linkis Orchestrator:
+
+![Execution phase flow chart](../Images/Architecture/Job提交准备执行流程/执行阶段流程图.png)
+
+The main flow is as follows:
+
+- Execution: parses the dependencies of the Physical tree and executes in dependency order starting from the leaf nodes.
+
+- Reheater: every time a node of the Physical tree finishes execution, a reheat is triggered. Reheating allows the Physical tree to be adjusted dynamically according to its real-time execution state and to continue executing. For example: if a leaf node is detected to have failed and it supports retry (e.g. the failure is a thrown ReTryExecption), the Physical tree is automatically adjusted by adding a retry parent node with identical content above that leaf node.
+
+Let us return to the Execution stage and focus on the execution logic of the ExecTask node that wraps the submission of the user's computation task to the EngineConn.
+
+1. As mentioned earlier, the first step of the preparation phase is to obtain a usable EngineConn from LinkisManager. After the ExecTask gets this EngineConn, it submits the user's computation task to the EngineConn via an RPC request.
+
+2. After the EngineConn receives the computation task, it submits it asynchronously to the underlying computation storage engine through a thread pool and immediately returns an execution ID.
+
+3. With this execution ID, the ExecTask can later asynchronously pull the execution state of the computation task (status, progress, logs, result sets, etc.).
+
+4. Meanwhile, the EngineConn monitors the execution of the underlying computation storage engine in real time through multiple registered Listeners. If the engine does not support registering Listeners, the EngineConn starts a daemon thread for the computation task that periodically pulls the execution state from the engine.
+
+5. The EngineConn passes the retrieved execution state back in real time, via RPC requests, to the microservice where Orchestrator resides.
+
+6. After the Receiver of that microservice receives the execution state, it broadcasts it through the ListenerBus; Orchestrator's Execution consumes the event and dynamically updates the execution state of the Physical tree.
+
+7. The result sets produced by the computation task are written, on the EngineConn side, to storage media such as HDFS. What the EngineConn passes back via RPC is only the result set paths. Execution consumes the event and broadcasts the obtained result set paths through the ListenerBus, so that the Listener registered by Entrance with Orchestrator can consume the paths and persist them to JobHistory.
+
+8. After the computation task on the EngineConn side finishes, the same logic triggers Execution to update the status of that ExecTask node in the Physical tree, so that the tree continues executing upwards until the whole tree finishes. At that point Execution broadcasts the completion status of the computation task through the ListenerBus.
+
+9. After the Listener registered by Entrance with Orchestrator consumes this status event, it updates the Job status in JobHistory, and the whole task execution is complete.
+
+----
+
+Finally, let us look at how the Client learns of status changes of the computation task and obtains the computation result in time, as shown below:
+
+![Result retrieval flow](../Images/Architecture/Job提交准备执行流程/结果获取流程.png)
+
+The concrete flow is as follows:
+
+1. The Client periodically polls Entrance for the status of the computation task.
+
+2. Once the status flips to success, it sends a request for the Job information to JobHistory and obtains all the result set paths.
+
+3. Using the result set paths, it initiates a file-content query request to PublicService and obtains the content of the result sets.
+
+At this point, all three phases of the Job — Submission -> Preparation -> Execution — are complete.
diff --git "a/Linkis-Doc-master/zh_CN/Architecture_Documents/Linkis1.0\344\270\216Linkis0.X\347\232\204\345\214\272\345\210\253\347\256\200\350\277\260.md" "b/Linkis-Doc-master/zh_CN/Architecture_Documents/Linkis1.0\344\270\216Linkis0.X\347\232\204\345\214\272\345\210\253\347\256\200\350\277\260.md"
new file mode 100644
index 0000000..78d2d9d
--- /dev/null
+++ "b/Linkis-Doc-master/zh_CN/Architecture_Documents/Linkis1.0\344\270\216Linkis0.X\347\232\204\345\214\272\345\210\253\347\256\200\350\277\260.md"
@@ -0,0 +1,98 @@
+## 1. Brief Description
+
+&nbsp;&nbsp;&nbsp;&nbsp;  First, the Entrance and EngineConnManager (formerly EngineManager) services under the Linkis 1.0 architecture are completely **engine**-independent. That is, under Linkis 1.0, each engine no longer needs a matching Entrance and EngineConnManager implemented and started for it; every Entrance and EngineConnManager of Linkis 1.0 can be shared by all engines.
+                          
+&nbsp;&nbsp;&nbsp;&nbsp;  Second, Linkis 1.0 adds the Linkis-Manager service to externally provide the capabilities of AppManager (application management), ResourceManager (resource management, formerly the ResourceManager service) and LabelManager (label management).
+
+&nbsp;&nbsp;&nbsp;&nbsp;  Then, to lower the difficulty of implementing and deploying a new engine, Linkis 1.0 re-architected a module called EngineConnPlugin: each new engine only needs to implement the EngineConnPlugin interface.
+Linkis EngineConnPluginServer supports dynamically loading EngineConnPlugins (new engines) as plugins; once an EngineConnPlugin is loaded successfully by EngineConnPluginServer, EngineConnManager can quickly start an instance of that engine for users.
+                          
+&nbsp;&nbsp;&nbsp;&nbsp;  Finally, all Linkis microservices are classified into three major layers — public enhancement services, computation governance services and microservice governance services — standardizing the Linkis 1.0 microservice system in terms of code hierarchy, microservice naming, installation directory structure and other aspects.
+
+
+##  2. Main Features
+
+1.  **Strengthened computation governance.** Linkis 1.0 comprehensively strengthens the overall control capability of computation governance in engine management, label management, ECM management and resource management. Based on the powerful label-based governance design, Linkis 1.0 takes a solid step towards multi-IDC, multi-cluster and multi-container deployments.
+
+2.  **Simplified implementation of new engines.** EnginePlugin merges the interfaces and classes that previously had to be implemented for a new engine, along with the three-layer Entrance-EngineManager-Engine module system that had to be split, into a single interface, simplifying the flow and code of implementing a new engine, so that implementing one class is truly all it takes to connect a new engine.
+
+3.  **Full-stack computation storage engine support.** Comprehensive coverage of computation request scenarios (e.g. Spark), storage request scenarios (e.g. HBase) and resident cluster services (e.g. SparkStreaming).
+
+4.  **Improved advanced computation strategy capability.** The new Orchestrator implements rich computation task management strategies and supports label-based parsing and orchestration.
+
+5.  **Improved installation and deployment.**  Optimized one-click installation script, container deployment support and simplified user configuration.
+
+## 3. Service Comparison
+
+&nbsp;&nbsp;&nbsp;&nbsp;  Please refer to the following two figures:
+
+&nbsp;&nbsp;&nbsp;&nbsp;  The Linkis 0.X microservice list is as follows:
+
+![Linkis 0.X service list](./../../en_US/Images/Architecture/Linkis0.X-services-list.png)
+
+&nbsp;&nbsp;&nbsp;&nbsp;  The Linkis 1.0 microservice list is as follows:
+
+![Linkis 1.0 service list](./../../en_US/Images/Architecture/Linkis1.0-services-list.png)
+
+&nbsp;&nbsp;&nbsp;&nbsp;  As the two figures show, Linkis 1.0 divides services into three categories: computation governance (CG) / microservice governance (MG) / public enhancement services (PS). Where:
+
+1. A major change in computation governance is that the Entrance and EngineConnManager services are no longer engine-related; implementing a new engine only requires an EngineConnPlugin plugin, and EngineConnPluginServer dynamically loads EngineConnPlugin plugins, achieving hot-pluggable engine updates;
+
+2. Another major change in computation governance is that LinkisManager, as the management brain of Linkis, abstracts and defines AppManager (application management), ResourceManager (resource management) and LabelManager (label management);
+
+3. For the microservice governance services, the Eureka and Gateway services of 0.X are merged and unified, and the Gateway service is functionally enhanced to support routing and forwarding by Label;
+
+4. For the public enhancement services, the BML service / context service / data source service / public services of 0.X are optimized, merged and unified for easier management and viewing.
+
+## 4. Introduction to Linkis Manager
+
+&nbsp;&nbsp;&nbsp;&nbsp;  As the management brain of Linkis, Linkis Manager is mainly composed of AppManager, ResourceManager and LabelManager.
+
+&nbsp;&nbsp;&nbsp;&nbsp;  ResourceManager not only has the Linkis 0.X capability of managing resources for Yarn and the Linkis EngineManager, but also provides label-based multi-level resource allocation and reclamation, giving ResourceManager full resource management capabilities across clusters and across computation resource types;
+
+&nbsp;&nbsp;&nbsp;&nbsp;  AppManager coordinates and manages all EngineConnManagers and EngineConns; the whole EngineConn lifecycle — application, reuse, creation, switching and destruction — is handed over to AppManager;
+
+&nbsp;&nbsp;&nbsp;&nbsp;  LabelManager, based on multi-level combined labels, provides cross-IDC and cross-cluster routing and governance for EngineConns and EngineConnManagers.
+
+## 5. Introduction to Linkis EngineConnPlugin
+
+&nbsp;&nbsp;&nbsp;&nbsp;  EngineConnPlugin is mainly used to lower the access and deployment cost of new computation storage engines, truly achieving "implement one class to connect a brand-new computation storage engine; run one script to quickly deploy a brand-new engine".
+
+### 5.1 New engine implementation comparison
+
+&nbsp;&nbsp;&nbsp;&nbsp;  The following are the interfaces and classes a user had to implement for a new engine in Linkis 0.X:
+
+![How Linkis 0.X implements a brand-new engine](./../../en_US/Images/Architecture/Linkis0.X-NewEngine-architecture.png)
+
+&nbsp;&nbsp;&nbsp;&nbsp;  The following are the interfaces and classes a user needs to implement for a new engine in Linkis 1.0.0:
+
+![How Linkis 1.0 implements a brand-new engine](./../../en_US/Images/Architecture/Linkis1.0-NewEngine-architecture.png)
+
+&nbsp;&nbsp;&nbsp;&nbsp;  Among them, EngineConnResourceFactory and EngineLaunchBuilder are optional interfaces; only EngineConnFactory is mandatory.
+
+### 5.2 New engine startup flow
+
+&nbsp;&nbsp;&nbsp;&nbsp;  EngineConnPlugin provides the Server service, which starts and loads all engine plugins. The following shows the whole flow of a new engine starting up and accessing EngineConnPlugin-Server:
+
+![Linkis engine startup flow](./../../en_US/Images/Architecture/Linkis1.0-newEngine-initialization.png)
+
+## 6. Introduction to Linkis EngineConn
+
+&nbsp;&nbsp;&nbsp;&nbsp;  EngineConn, the former Engine module, is the actual unit through which Linkis connects and interacts with the underlying computation storage engines, and is the foundation of Linkis's computation storage capabilities.
+
+&nbsp;&nbsp;&nbsp;&nbsp;  The EngineConn of Linkis 1.0 is mainly composed of EngineConn and Executor. Where:
+
+a)	EngineConn is the connector, containing the session information between the engine and the specific cluster. It only acts as a connection, a client, and does not actually execute computation.
+
+b)	Executor is the executor; as the real executor of computation scenarios, it is the actual computation logic execution unit and the abstraction of the engine's various concrete capabilities, for example providing many different services such as locking, status access and log retrieval.
+
+c)	An Executor is created from the session information in the EngineConn. One engine type can support multiple different kinds of computation tasks, each corresponding to an Executor implementation; a computation task is submitted to the corresponding Executor for execution.
+This way, the same engine can provide different services according to different computation scenarios. For example, a resident engine does not need locking after startup, and a one-off engine does not need to support Receiver or status access after startup.
+
+d)	The benefit of separating Executor and EngineConn is that the Receiver avoids coupling business logic and keeps only the RPC communication function. The services are spread across multiple Executor modules and abstracted into several engine categories — interactive computation engines, streaming engines, one-off engines and whatever else may be needed — forming a unified engine framework that is easy to extend later.
+This way, different engine types can load only the capabilities they need, greatly reducing redundancy in engine implementations.
+
+&nbsp;&nbsp;&nbsp;&nbsp;  As shown below:
+
+![Linkis EngineConn architecture diagram](./../../en_US/Images/Architecture/Linkis1.0-EngineConn-architecture.png)
\ No newline at end of file
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Microservice_Governance_Services/Gateway.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Microservice_Governance_Services/Gateway.md
new file mode 100644
index 0000000..f84d9dd
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Microservice_Governance_Services/Gateway.md
@@ -0,0 +1,30 @@
+## Gateway Architecture Design
+
+#### Brief Description
+The Gateway is the primary entry through which Linkis accepts client and external requests, for example receiving job execution requests and then forwarding them to the specific, qualifying Entrance service.
+The bottom layer of the whole architecture is an extension of SpringCloudGateway; on top of it are module designs related to HTTP request parsing, session permissions, label routing and WebSocket multiplexed forwarding. The overall architecture is shown below.
+
+### Overall Architecture Diagram
+
+![Gateway overall architecture diagram](../../Images/Architecture/Gateway/gateway_server_global.png)
+
+#### Architecture Description
+- gateway-core: the core interface definition module of the Gateway. It mainly defines the GatewayParser and GatewayRouter interfaces, corresponding to request parsing and request-based route selection respectively; it also provides the SecurityFilter permission validation utility class.
+- spring-cloud-gateway: this module integrates all dependencies related to SpringCloudGateway and handles and forwards requests of both the HTTP and WebSocket protocol types.
+- gateway-server-support: the service driver module of the Gateway. It depends on the spring-cloud-gateway module and implements GatewayParser and GatewayRouter; among them, DefaultLabelGatewayRouter provides label-based request routing.
+- gateway-httpclient-support: provides a generic client class for accessing the Gateway service over HTTP, which can serve as the basis for multiple implementations.
+- instance-label: the external instance label module; it provides the InsLabelService interface, used to create routing labels and associate them with application instances.
+
+The detailed designs involved are as follows:
+
+#### 1. Request routing and forwarding (with label information)
+The request chain is first dispatched by SpringCloudGateway's Dispatcher, then enters the gateway's filter chain and the logic of the two major filters GatewayAuthorizationFilter and SpringCloudGatewayWebsocketFilter; the filters integrate DefaultGatewayParser and DefaultGatewayRouter.
+From Parser to Router, the corresponding parse and route methods are executed; DefaultGatewayParser and DefaultGatewayRouter also contain custom Parsers and Routers internally, executed in priority order. Finally, DefaultGatewayRouter outputs the ServiceInstance selected by routing, which is handed to the upper layer for forwarding.
+Taking the forwarding of a job execution request with label information as an example, we draw the following flow chart:  
+![Gateway request routing and forwarding](../../Images/Architecture/Gateway/gateway_server_dispatcher.png)
+
+
+#### 2. WebSocket connection forwarding management
+By default, SpringCloudGateway performs only one route forwarding for a WebSocket request and cannot switch dynamically. Under the Linkis Gateway architecture, however, every message exchange carries a corresponding uri address, guiding routing to different backend services.
+Besides the webSocketService responsible for connections with the front end and clients, and the webSocketClient responsible for connections with backend services, a series of GatewayWebSocketSessionConnection entries are cached in between; one GatewayWebSocketSessionConnection represents the connections of one session to multiple backend ServiceInstances.  
+![Gateway WebSocket forwarding management](../../Images/Architecture/Gateway/gatway_websocket.png)
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Microservice_Governance_Services/README.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Microservice_Governance_Services/README.md
new file mode 100644
index 0000000..a5bbc92
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Microservice_Governance_Services/README.md
@@ -0,0 +1,23 @@
+## **Background**
+
+Microservice governance includes three main microservices: Gateway, Eureka and Open Feign. It solves Linkis's service discovery and registration, unified gateway, request forwarding, inter-service communication, load balancing and related problems. Linkis 1.0 will also provide support for Nacos. The whole of Linkis is a complete microservice architecture, and every business flow requires the cooperation of multiple microservices.
+
+## **Architecture Diagram**
+
+![](../../Images/Architecture/linkis-microservice-gov-01.png)
+
+## **Architecture Description**
+
+1. Linkis Gateway, as the gateway entry of Linkis, mainly undertakes request forwarding, user access authentication and WebSocket communication. The Gateway of Linkis 1.0 also adds Label-based routing and forwarding. In Spring
+Cloud Gateway, Linkis implements a WebSocket routing forwarder used to establish WebSocket connections with clients; after a connection is established, it automatically analyzes the client's WebSocket requests and determines by rules which backend microservice each request should be forwarded to, forwarding the WebSocket request to the corresponding backend microservice instance.
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[Enter Linkis Gateway](Gateway.md)
+
+2. Linkis Eureka
+is mainly responsible for service registration and discovery. Eureka consists of multiple instances (service instances), which can be divided into two kinds: Eureka Servers and Eureka Clients. For ease of understanding, we further divide Eureka clients into Service
+Providers and Service Consumers. The Eureka Server provides service registration and discovery; a Service Provider registers its own service with Eureka so that service consumers can find it; a Service
+Consumer obtains the list of registered services from Eureka so that it can consume those services.
+
+3. Linkis implements its own underlying RPC communication scheme based on Feign. As the underlying communication scheme, Linkis RPC provides an SDK integrated into the microservices that need it. A microservice can be both a request caller and a request receiver. As a caller, it requests the Receiver of the target microservice through its Sender; as a receiver, it provides a Receiver to handle the requests sent by the caller's Sender, completing synchronous or asynchronous responses.
+   
+   ![](../../Images/Architecture/linkis-microservice-gov-03.png)
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Computation_Orchestrator_architecture.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Computation_Orchestrator_architecture.md
new file mode 100644
index 0000000..6787bb4
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Computation_Orchestrator_architecture.md
@@ -0,0 +1,18 @@
+## **Computation-Orchestrator Architecture**
+
+### **1. The Computation-Orchestrator Concept**
+
+Computation-Orchestrator is the standard implementation of Orchestrator and supports task orchestration for interactive engines. It provides common implementations of Converter, Parser, Validator, Planner, Optimizer, Execution and Reheater. Computation-Orchestrator interfaces with AM and is responsible for interactive task execution; it can interface with Entrance, or directly with other task submission ends such as IOClient. Computation-Orchestrator supports submitting tasks both synchronously and asynchronously, and supports obtaining multiple Sessions to achieve isolation.
+
+### **2. Computation-Orchestrator Architecture**
+
+When Entrance submits a task to Computation-Orchestrator for execution, it also registers Listeners for logs, progress and result sets. During task execution, task logs and task progress are received, and the registered listeners are invoked to return the task information to Entrance. After the task finishes, a result set Response is generated and the result set Listener is invoked. In addition, Orchestrator supports Entrance submitting tasks bound to a single EngineConn, achieved by adding a BindEngineLabel to the task.
+
+![](../../Images/Architecture/orchestrator/computation-orchestrator/linkis-computation-orchestrator-01.png)
+
+### **3. Computation-Orchestrator Execution Flow**
+
+The execution flow of Computation-Orchestrator is shown below.
+
+![](../../Images/Architecture/orchestrator/computation-orchestrator/linkis-computation-orchestrator-02.png)
+
diff --git "a/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Images/1.0\344\270\255\347\224\250\346\210\267\351\234\200\345\256\236\347\216\260\347\232\204\346\216\245\345\217\243\345\222\214\347\261\273.png" "b/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Images/1.0\344\270\255\347\224\250\346\210\267\351\234\200\345\256\236\347\216\260\347\232\204\346\216\245\345\217\243\345\222\214\347\261\273.png"
new file mode 100644
index 0000000..4830d0f
Binary files /dev/null and "b/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Images/1.0\344\270\255\347\224\250\346\210\267\351\234\200\345\256\236\347\216\260\347\232\204\346\216\245\345\217\243\345\222\214\347\261\273.png" differ
diff --git "a/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Images/\344\272\244\344\272\222\346\265\201\347\250\213.png" "b/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Images/\344\272\244\344\272\222\346\265\201\347\250\213.png"
new file mode 100644
index 0000000..9e76bdd
Binary files /dev/null and "b/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Images/\344\272\244\344\272\222\346\265\201\347\250\213.png" differ
diff --git "a/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Images/\347\233\270\345\205\263\346\216\245\345\217\243\345\222\214\347\261\273.png" "b/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Images/\347\233\270\345\205\263\346\216\245\345\217\243\345\222\214\347\261\273.png"
new file mode 100644
index 0000000..0c20d81
Binary files /dev/null and "b/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Images/\347\233\270\345\205\263\346\216\245\345\217\243\345\222\214\347\261\273.png" differ
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_CheckRuler.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_CheckRuler.md
new file mode 100644
index 0000000..6c89f13
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_CheckRuler.md
@@ -0,0 +1,27 @@
+CheckRuler Architecture Design
+======
+
+CheckRulers are rules that run checks before Converter and Validator, validating the legality and completeness of the passed parameters. Besides the several built-in required Rulers, others can be implemented according to the user's own needs.
+
+**Convert phase:**
+
+| Class name                               | Extends              | Purpose                                                 |
+|------------------------------------------|----------------------|---------------------------------------------------------|
+| JobReqParamCheckRuler                    | ConverterCheckRulter | Validates the completeness of submitted job parameters  |
+| PythonCodeConverterCheckRuler            | ConverterCheckRulter | Python code conformance check                           |
+| ScalaCodeConverterCheckRuler             | ConverterCheckRulter | Scala code conformance check                            |
+| ShellDangerousGrammarConverterCheckRuler | ConverterCheckRulter | Shell script code conformance check                     |
+| SparkCodeCheckConverterCheckRuler        | ConverterCheckRulter | Spark code conformance check                            |
+| SQLCodeCheckConverterCheckRuler          | ConverterCheckRulter | SQL code conformance check                              |
+| SQLLimitConverterCheckRuler              | ConverterCheckRulter | SQL code length check                                   |
+| VarSubstitutionConverterCheckRuler       | ConverterCheckRulter | Variable substitution rule validation                   |
+
+**Validator phase:**
+
+| Class name                    | Extends                | Purpose                             |
+|-------------------------------|------------------------|-------------------------------------|
+| LabelRegularCheckRuler        | ValidatorCheckRuler    | Legality check of the Job's labels  |
+| DefaultLabelRegularCheckRuler | LabelRegularCheckRuler | Implementation class                |
+| RouteLabelRegularCheckRuler   | LabelRegularCheckRuler | Implementation class                |
+
+To customize a new validator-phase check rule that validates additional label types, you can extend LabelRegularCheckRuler and simply override the customLabel value, as the sketch below illustrates.
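+
+A minimal, self-contained sketch of this extension point is shown below; the real Linkis classes are Scala and their members may differ, so the base-class shape here is an assumption:
+
+```java
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+
+/** Hypothetical base class illustrating the customLabel extension point. */
+abstract class LabelRegularCheckRuler {
+    /** Labels that must be present on the job; subclasses override to add their own. */
+    protected List<String> customLabel() { return Collections.emptyList(); }
+
+    void check(Map<String, String> jobLabels) {
+        for (String required : customLabel()) {
+            if (!jobLabels.containsKey(required))
+                throw new IllegalArgumentException("missing required label: " + required);
+        }
+    }
+}
+
+/** Example: additionally require a "tenant" label on every job. */
+class TenantLabelCheckRuler extends LabelRegularCheckRuler {
+    @Override protected List<String> customLabel() { return List.of("tenant"); }
+}
+```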
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_ECMP_architecture.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_ECMP_architecture.md
new file mode 100644
index 0000000..6ea3abf
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_ECMP_architecture.md
@@ -0,0 +1,32 @@
+EngineConnPlugin Architecture Design
+------------------------
+
+EngineConnPlugin merges the interfaces and classes that previously had to be implemented for a new engine, along with the three-layer Entrance-EngineManager-Engine module system that had to be split, into a single interface, simplifying the flow and code of implementing a new engine, so that implementing one class is truly all it takes to connect a new engine.
+
+### EngineConnPlugin Architecture Implementation
+
+1. Pain points and reflections on Linkis 0.X
+
+Linkis 0.X had no concept of a Plugin: to add an engine, a user had to implement the Entrance, EngineManager and Engine interfaces at the same time; the development and maintenance workload was large and modifications were complex.
+
+The following are the interfaces and classes a user had to implement for a new engine in Linkis 0.X:
+
+![](Images/相关接口和类.png)
+
+2. Improvements in the new version
+
+Linkis 1.0 refactors the whole logic from engine creation to task execution. Entrance is simplified into one service that interfaces with different Engines through labels, and EngineManager is likewise simplified into one. An Engine is defined as the EngineConn connector plus the Executor, abstracted into multiple services and modules from which users flexibly pick what they need. This greatly reduces the development and maintenance workload of adding engines. Moreover, the plugin dynamically adds the engine's lib and conf to the BML for version management.
+
+The following are the interfaces and classes a user needs to implement for a new engine in Linkis 1.0.0:
+
+![](Images/1.0中用户需实现的接口和类.png)
+
+Among them, EngineConnResourceFactory and EngineLaunchBuilder are optional interfaces; only EngineConnFactory is mandatory.
+
+### EngineConnPlugin Interaction Flow
+
+EngineConnPlugin provides the Server service, which starts and loads all engine plugins. The following shows the whole flow of a new engine starting up and accessing EngineConnPlugin-Server:
+
+![](Images/交互流程.png)
\ No newline at end of file
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_Execution_architecture_doc.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_Execution_architecture_doc.md
new file mode 100644
index 0000000..1bf3e5f
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_Execution_architecture_doc.md
@@ -0,0 +1,19 @@
+Orchestrator-Execution Architecture Design
+===
+
+
+## 1. The Execution Concept
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;The Orchestrator-Execution module is the execution module of Orchestrator, used to schedule and execute the orchestrated PhysicalTree; execution proceeds by dependency, starting from the JobEndExecTask. Execution is invoked through Orchestration's synchronous and asynchronous execution calls; Execution then schedules and executes the RootExecTask (the root node of the PhysicalTree), coordinates the running of the tree's ExecTasks, and wraps the execution responses of all ExecTasks into the return value. Execution runs in an asynchronous producer-consumer model.
+
+## 2. Execution Architecture
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;After Execution receives a RootExecTask for execution, it hands the RootExecTask to the TaskManager for scheduling (production); the TaskConsumer then fetches from the TaskManager the tasks whose dependencies are now satisfied and consumes them; once a runnable ExecTask is obtained, it is submitted to the TaskScheduler for execution.
+
+![execution](../../Images/Architecture/orchestrator/execution/execution.png)
+
+Both asynchronous and synchronous execution are scheduled asynchronously through the flow above; synchronous execution calls the ExecTask's waitForCompleted method to obtain the synchronous response. Throughout execution, the status, result sets, logs and other information of the ExecTasks are delivered and notified through the ListenerBus.
+
+## 3. Overall Execution Flow
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;The overall execution flow of Execution is shown below, taking the interactive execution (ComputationExecution) flow as an example:
+
+![execution01](../../Images/Architecture/orchestrator/execution/execution01.png)
+
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_Operation_architecture_doc.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_Operation_architecture_doc.md
new file mode 100644
index 0000000..94fd889
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_Operation_architecture_doc.md
@@ -0,0 +1,26 @@
+Orchestrator-Operation Architecture Design
+===
+
+## 1. The Operation Concept
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Operations extend the extra actions that can be performed on a task during asynchronous execution. After invoking Orchestration's asynchronous execution, the caller obtains an OrchestrationFuture, an interface that only provides task operations such as cancel, waitForCompleted and getResponse. But when we need to fetch task logs or progress, or pause a task, there is no entry point to call; that is the original intent behind Operation: to externally extend more capabilities over asynchronously running tasks. It is defined as follows:
+
+
+## 2. Operation Class Diagram
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Operation adopts a user-extension approach: when users need to extend an operation, they only need to implement the corresponding class against our Operation interface and register it with Orchestrator; the operation then becomes available without changing the underlying code. The overall class diagram is as follows:
+
+![operation_class](../../Images/Architecture/orchestrator/operation/operation_class.png)
+
+
+## 3. Using Operations
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Using an Operation has two main parts: first Operation registration, then Operation invocation (see the sketch after this list):
+1. Registration: first implement the corresponding Operation implementation class against the Operation interface from chapter 2, then register the Operation through the `OrchestratorSessionBuilder`; the SessionState inside an OrchestratorSession created via that `OrchestratorSessionBuilder` then holds the Operation;
+2. Using an Operation requires first completing orchestration through the OrchestratorSession, then calling Orchestration's asynchronous execution method asyncExecute to obtain an OrchestrationFuture;
+3. Then, using the Operation's name — e.g. "LOG" for logs — call `OrchestrationFuture.operate("LOG")` to perform the operation and obtain the corresponding Operation's return object.
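+
+A minimal sketch of the register-then-operate pattern is shown below; the real Orchestrator API is Scala and richer, so these types are illustrative only:
+
+```java
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+
+/** Illustrative Operation contract: a named extension invoked by operate(name). */
+interface Operation<T> {
+    T apply();
+}
+
+/** Simplified stand-in for OrchestrationFuture, keeping only the operation registry. */
+class MiniOrchestrationFuture {
+    private final Map<String, Operation<?>> operations = new ConcurrentHashMap<>();
+
+    <T> void register(String name, Operation<T> op) { operations.put(name, op); }
+
+    /** Looks up the operation by name, e.g. operate("LOG") returns the log processor. */
+    Object operate(String name) {
+        Operation<?> op = operations.get(name);
+        if (op == null) throw new IllegalArgumentException("no operation registered: " + name);
+        return op.apply();
+    }
+}
+
+// Usage sketch: future.register("LOG", () -> logProcessor); Object p = future.operate("LOG");
+```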
+
+## 4. Operation Example
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;The following takes the log operation as an example. LogOperation is defined in chapter 2; it implements the two interfaces Operation and TaskLogListener. The overall log retrieval flow is as follows:
+1. After Orchestrator receives task logs, it pushes events through the listenerBus to the LogOperation for consumption;
+2. After the LogOperation obtains the logs, it calls the log processor LogProcessor to write them (writeLog); the caller obtains this LogProcessor by calling `OrchestrationFuture.operate("LOG")`;
+3. The LogProcessor offers two ways for the outside to obtain logs. One is notification mode: the external caller can register log listener methods with the log processor, and whenever the log processor's writeLog method is called, all listeners are notified;
+4. The other is active pull mode: logs are obtained by calling the LogProcessor's getLog method.
+
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_Reheater_architecture.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_Reheater_architecture.md
new file mode 100644
index 0000000..0eba15a
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_Reheater_architecture.md
@@ -0,0 +1,12 @@
+## **Orchestrator Reheater Architecture**
+
+### **1. The Reheater Concept**
+
+The Orchestrator-Reheater module is the replay module of Orchestrator, used during execution to dynamically adjust the execution plan of a JobGroup by dynamically adding Jobs, Stages and Tasks to it, thereby avoiding sub-task failures caused by the network and similar reasons. Currently the main one is the task-related TaskReheater, including the retry-type RetryTaskReheater.
+
+### **2. Reheater Architecture Diagram**
+
+![](../../Images/Architecture/orchestrator/reheater/linkis-orchestrator-reheater-01.png)
+
+During task execution, the Reheater receives ReheaterEvents and accordingly adjusts the orchestrated PhysicalTree, dynamically adding Jobs, Stages and Tasks. The commonly used ones are the TaskReheaters, including the retry-type RetryTaskReheater, the switch-type SwitchTaskReheater, and the PlaybackWrittenTaskReheater, which writes the task information of failed tasks into the PlaybackService.
+
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_Transform_architecture.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_Transform_architecture.md
new file mode 100644
index 0000000..bbf0ef3
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_Transform_architecture.md
@@ -0,0 +1,12 @@
+## **Orchestrator-Transform Architecture**
+
+### **1. The Transform Concept**
+
+Orchestrator defines the structures of the different phases of task scheduling and orchestration, from the ASTTree to the LogicalTree and then to the PhysicalTree. Converting between these structures requires the Transform module. The Transform module defines the conversion process; the conversion needs to invoke the various Transforms to convert and generate the task structures.
+
+## **2. Transform Architecture**
+
+Transforms are embedded throughout the conversion process. From Parser to Execution, there is a Transform implementation class between each pair of phases, converting the initial JobReq into the ASTTree, the LogicalTree and the PhysicalTree in turn; the PhysicalTree is then submitted to Execution for execution.
+
+![](../../Images/Architecture/orchestrator/transform/linkis-orchestrator-transform-01.png)
+
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_architecture_doc.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_architecture_doc.md
new file mode 100644
index 0000000..c4b14ad
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_architecture_doc.md
@@ -0,0 +1,113 @@
+Orchestrator Overall Architecture Design
+===
+
+## 1. The Orchestrator Concept
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Orchestrator, the computation orchestration, is the core value realization of Linkis 1.0. Based on Orchestrator, full-stack engines plus rich computation strategies can be supported: by orchestrating the tasks submitted by users, strategy types such as dual-read, dual-write and AB can be supported; and, in cooperation with labels, many task scenarios can be supported:
+- When the Orchestrator module is combined with Entrance, it supports the 0.X interactive computation scenario;
+- When the Orchestrator module is combined with the engine connector EngineConn, it supports resident and one-off job scenarios;
+- When the Orchestrator module interfaces with Linkis-Client, it can, as a RichClient, support storage job scenarios, such as dual-read and dual-write for HBase.
+
+![Orchestrator01](../../Images/Architecture/orchestrator/overall/Orchestrator01.png)
+
+## 2. Orchestrator Overall Architecture
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;The overall orchestration architecture of Orchestrator follows the architecture of Apache Calcite, dividing a task orchestration into the following steps:
+- Converter: converts the JobReq (task request) submitted by the user into an orchestration Job; this step performs parameter checking and information supplementation on the submitted Job, such as variable substitution
+- Parser: parses the Job and splits out the Job's Stage information, forming the AST tree
+- Validator: validates the information of the Job and Stages, e.g. required Label information checks
+- Planner: converts the Job and Stage objects of the AST phase into a Logical plan, forming the Logical tree; Jobs and Stages are converted into LogicalTasks respectively, and the execution unit is wrapped as a LogicalTask as well — e.g. the interactive CodeLogicalUnit is converted into a CodeLogicalUnitTask
+- Optimizer: converts the Logical tree into the Physical tree and optimizes the tree, e.g. cache-hit optimization
+- Execution: schedules and executes the Physical tree of the physical plan, executing by dependency
+- Reheater: detects retryable failed Tasks in the execution phase (e.g. ReTryExecption) and adjusts the physical plan to re-execute
+- Plugins: the plugin module, mainly used by Orchestrator to interface with external modules; e.g. EngineConnManagerPlugin interfaces with LinkisManager and EngineConn to apply for engines and execute tasks
+
+![Orchestrator_arc](../../Images/Architecture/orchestrator/overall/Orchestrator_arc.png)
+
+## 3. Orchestrator Entity Flow
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;During Orchestrator's orchestration, the main work is converting the input JobReq, in three phases — AST, Logical and Physical; what is finally executed is the ExecTask of the Physical phase. The whole process is as follows:
+
+![orchestrator_entity](../../Images/Architecture/orchestrator/overall/orchestrator_entity.png)
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;The following takes the interactive scenario as a brief example, using an interactive Job with codeLogicalUnit `select * from test` to visualize the tree diagrams of each phase:
+1. AST phase: the structure after the Parser parses the ASTJob. Job and Stage are associated by attributes: the Job has getStage information and the Stage holds Job information; the relations are not decided by parents and children (both parents and children are null):
+
+![Orchestrator_ast](../../Images/Architecture/orchestrator/overall/Orchestrator_ast.png)
+
+2. Logical phase: the structure after Plan converts the ASTJob, containing Job/Stage/CodeTask. A tree structure exists; the relations are decided by parents and children, and start and end are decided by the Desc:
+
+![Orchestrator_Logical](../../Images/Architecture/orchestrator/overall/Orchestrator_Logical.png)
+
+3. Physical phase: the structure after the Optimizer's conversion, containing Job/Stage/Code ExecTask. A tree structure exists; the relations are decided by parents and children, and start and end are decided by the Desc:
+
+![Orchestrator_Physical](../../Images/Architecture/orchestrator/overall/Orchestrator_Physical.png)
+
+## 4. Orchestrator Core Modules in Detail
+
+### 4.1 The Converter module
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; The Converter is mainly used to convert a JobReq into a Job, and to check and supplement the JobReq, including parameter checks and variable supplementation. A JobReq is a job actually submitted by the user. The job can be an interactive job (then Orchestrator integrates with Entrance to externally provide interactive access), a resident/one-off job (then Orchestrator integrates with EngineConn to directly provide execution capability), or a storage job (then Orchestrator integrates with the Client and interfaces directly with EngineConn). JobReq has many corresponding implementation classes; by scenario type they are ComputationJobReq (interactive), ClusteredJobReq (resident) and StorageJobReq (storage).
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Here we need to distinguish the responsibilities of Orchestrator and Entrance. In general, Orchestrator is a required unit for RichClient, Entrance and EngineConn, but Entrance is not required; so the Converter provides a series of check and interception units for custom variable substitution and the supplementation of CS-related files and custom variables.
+
+### 4.2 The Parser module
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; The Parser is mainly used to parse a Job into multiple Stages. Depending on the computation strategy, the AstTree generated in the Parser phase differs: under the ordinary interactive computation strategy, the Parser parses a Job into one Stage, while under strategies such as dual-read and dual-write the Job is parsed into multiple Stages, each performing the same operation against a different cluster.
+
+### 4.3 The Validator module
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Before the plan generates executable Tasks from the AstTree, it must first pass the Validator. The Validator mainly validates the legality of the Job and Stages of the AST phase and supplements some necessary information, such as required label information checks and supplementation.
+
+### 4.4 The Planner module
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; The Planner module converts the Job and Stages of the AST phase into the corresponding LogicalTasks, forming the LogicalTree. The Planner builds the LogicalTree, parsing the Job into JobEndTask and JobStartTask, parsing a Stage into StageEndTask and StageStartTask, and converting the actual execution unit into a concrete LogicalTask (e.g. the interactive CodeLogicalUnit is converted into a CodeLogicalUnitTask). As shown below:
+
+![Orchestrator_Logical](../../Images/Architecture/orchestrator/overall/Orchestrator_Logical.png)
+
+### 4.5 The Optimizer module
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; The Optimizer is Orchestrator's optimizer, mainly used to optimize the conversion of the whole LogicalTree into the ExecTasks of the PhysicalTree. By optimization type, the Optimizer has two main steps: first optimize the LogicalTree, then convert the LogicalTree. The optimization strategies implemented so far are:
+- CacheTaskOptimizer (TaskOptimizer level): decides whether an ExecTask can use a cached execution result; on a cache hit, the Tree is adjusted.
+- YarnQueueOptimizer (TaskOptimizer level): if the queue the user submitted to is currently short on resources and the user has another idle queue available, automatically optimizes for the user.
+- PlaybackOptimizer (TaskOptimizer level): mainly used to support playback. That is, in multi-write, if a cluster has tasks to replay, first replay a certain number of tasks according to the task latency requirements in order to catch up. Meanwhile the task is analyzed for correlation: if it correlates with historical playback tasks, the task information is instead written into the PlaybackService (or, for select-type tasks, not executed); if not correlated, execution continues.
+- ConfigurationOptimizer (StageOptimizer level): optimizes the user's runtime or startup parameters.
+
+
+### 4.6 The Execution module
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Execution is Orchestrator's execution module, used to execute the PhysicalTree. It supports synchronous and asynchronous execution; during execution, the PhysicalTree is parsed and executed by dependency.
+
+### 4.7 The Reheater module
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Reheating allows Execution to dynamically adjust the execution plan of the PhysicalTree during execution, for example re-executing an ExecTask whose engine application failed.
+
+## 5. Orchestrator Orchestration Flow
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; For a user of the API, the overall orchestration has three steps:
+1. First, obtain an OrchestratorSession through Orchestrator; this object, similar to a SparkSession, is generally a process singleton
+2. Second, orchestrate through the OrchestratorSession, obtaining an Orchestration object — the unique object returned after orchestration
+3. Third, call Orchestration's execution methods to execute; both asynchronous and synchronous execution modes are supported
+The overall flow is shown below:
+
+![Orchestrator_progress](../../Images/Architecture/orchestrator/overall/Orchestrator_progress.png)
+
+## 6. Examples of Common Orchestrator Physical Plans
+
+1. Interactive analysis, split into two Stages
+
+![Orchestrator_computation](../../Images/Architecture/orchestrator/overall/Orchestrator_computation.png)
+
+2. Function-only ExecTasks such as Command
+
+![Orchestrator_command](../../Images/Architecture/orchestrator/overall/Orchestrator_command.png)
+
+3. The Reheat case
+
+![Orchestrator_reheat](../../Images/Architecture/orchestrator/overall/Orchestrator_reheat.png)
+
+4. Transactional
+
+![Orchestrator_transication](../../Images/Architecture/orchestrator/overall/Orchestrator_transication.png)
+
+5. Cache hit
+
+![Orchestrator_cache](../../Images/Architecture/orchestrator/overall/Orchestrator_cache.png)
+
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/README.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/README.md
new file mode 100644
index 0000000..4ca01b2
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/README.md
@@ -0,0 +1,55 @@
+## Orchestrator Architecture Design
+
+Linkis's computation orchestration module provides support for full-stack engines and rich computation strategies; through orchestration it supports strategies such as dual-read, dual-write and AB, and through integration with the label system it supports many job scenarios, e.g. interactive computation jobs, resident jobs and storage jobs.
+
+#### Architecture Diagram
+
+![Orchestrator architecture diagram](../../Images/Architecture/orchestrator/linkis_orchestrator_architecture.png)  
+
+
+#### Module Introduction
+
+##### 1. Orchestrator-Core
+
+The core module, which splits task orchestration into roughly seven steps, with the corresponding interfaces Converter, Parser, Validator, Planner, Optimizer, Execution and Reheater; the entity flow between them is as follows:  
+![Orchestrator entity flow](../../Images/Architecture/orchestrator/overall/orchestrator_entity.png)
+
+The core interfaces are defined as follows:
+
+| Core top-level interface/class | Core functionality |
+| --- | --- | 
+| `ConverterTransform`| Converts the req request submitted by the user into an orchestration Job, with parameter checks and information supplementation |
+| `ParserTransform`| Parses and splits the Job into multiple Stages, forming the AST tree |
+| `ValidatorTransform` | Validates the information of the Job and Stages, e.g. validation of the attached Label information |
+| `PlannerTransform` | Converts the Job and Stages of the AST phase into a logical plan, generating the Logical tree, with Jobs and Stages each converted into LogicalTasks |
+| `OptimizerTransform` | Converts the Logical tree into the Physical tree, i.e. the physical plan conversion; before the conversion the AST tree is also optimized |
+| `Execution` | Schedules and executes the Physical tree of the physical plan, handling the dependencies between executed sub-jobs |
+| `ReheaterTransform` | Reschedules and re-executes retryable failed jobs during Execution |
+
+##### 2. Computation-Orchestrator
+
+The standard implementation of Orchestrator for the interactive computation scenario, with default implementations of all abstract interfaces, including, for example, the set of conversion rules for code in SQL and other languages, and the concrete logic of requesting the execution of interactive jobs.
+Typical class definitions are as follows:
+
+| Core top-level interface/class | Core functionality |
+| --- | --- | 
+| `CodeConverterTransform`| Parses and converts the code information attached to the request, e.g. Spark Sql, Hive Sql, Shell and Python|
+| `CodeStageParserTransform` | Parses and splits Jobs, targeting CodeJobs, i.e. Jobs with attached code information|
+| `EnrichLabelParserTransform` | Fills in label information while parsing and splitting the Job |
+| `TaskPlannerTransform` | In the interactive computation scenario, converts the Stage information split from the Job into a logical plan, i.e. the Logical tree |
+| `CacheTaskOptimizer` | Adds cache nodes to the AST tree of the logical plan, optimizing subsequent execution |
+| `ComputePhysicalTransform` | In the interactive computation scenario, converts the logical plan into the physical plan |
+| `CodeLogicalUnitExecTask` | The smallest execution unit in the physical plan in the interactive computation scenario|
+| `ComputationTaskExecutionReceiver` | The RPC callback class for Task execution, receiving callback information such as task status and progress|
+
+##### 3. Code-Orchestrator
+
+The standard implementation of Orchestrator for resident and storage job scenarios.
+
+##### 4. Plugins/Orchestrator-ECM-Plugin
+
+Provides the interface methods Orchestrator needs to interface with LinkisManager and EngineConn, briefly as follows:
+
+| Core top-level interface/class | Core functionality |
+| --- | --- | 
+| `EngineConnManager` | Provides methods for requesting EngineConn resources and submitting execution requests to EngineConns, and actively caches available EngineConns|
\ No newline at end of file
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/BML.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/BML.md
new file mode 100644
index 0000000..e385cad
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/BML.md
@@ -0,0 +1,94 @@
+
+## Background
+
+BML (the material library service) is the material management system of Linkis, mainly used to store users' various file data, including user scripts, resource files, third-party Jar packages, etc.; it can also store the libraries an engine needs at runtime.
+
+It has the following features:
+
+1) Supports all kinds of files: both text and binary files. Users in the big data field can store their script files and material archives in this system.
+
+2) Stateless service with multi-instance deployment, achieving high availability. The system can be deployed with multiple instances, each serving independently without interfering with the others; all information is stored in the database and shared.
+
+3) Multiple access methods: both a Rest interface and an SDK are provided; users can choose according to their needs.
+
+4) Files are appended, avoiding too many small HDFS files. Many small HDFS files degrade overall HDFS performance; we use file appending to merge multiple versions of a resource file into one large file, effectively reducing the number of HDFS files.
+
+5) Precise access control and secure storage of users' resource files. Resource files often have important content that users want readable only to themselves.
+
+6) Lifecycle management of file upload, update, download and other operation tasks.
+
+## Architecture Diagram
+
+![BML architecture diagram](../../Images/Architecture/bml-02.png)
+
+## Architecture Description
+
+1. The Service layer contains resource management, resource upload, resource download, resource sharing and project resource management.
+
+Resource management covers the basic operations on resources: create, delete, update, query, access control, file expiry checks, etc.
+
+2. File version control:
+every BML resource file carries version information; each update of the same resource produces a new version, and querying and downloading historical versions is of course also supported. BML uses the version information table to record, for each version, the offset and size of the resource file's HDFS storage, so multiple versions of data can be stored in one HDFS file.
+
+3. Resource file storage:
+HDFS files are used as the actual data storage. HDFS effectively guarantees that material library files are not lost; files are appended, avoiding too many small HDFS files.
+
+### Core Flows
+
+**Uploading a file:**
+
+1.  Determine the operation type of the user's upload: first upload or update upload. For a first upload, a new resource record must be added; the system has already generated a globally unique resource_id and a resource_location for this resource. The first version A1 of resource A must be stored at resource_location in the HDFS file system; after storage, the first version is recorded as V00001. For an update upload, the latest previous version must be looked up.
+
+2.  Upload the file stream to the specified HDFS file; if it is an update, append it to the end of the previous content.
+
+3.  Add a version record: every upload produces a new version record. Besides the metadata of this version, the most important thing recorded is the storage location of this version of the file, including the file path, start position and end position.
+
+**Downloading a file:**
+
+1.  When a user downloads a resource, two parameters must be specified: the resource_id and the version; if the version is not specified, the latest version is downloaded by default.
+
+2.  After the user passes the resource_id and version to the system, the system queries the resource_version table, finds the corresponding resource_location, start_byte and end_byte, and downloads: via the skipByte method of stream processing, the first (start_byte-1) bytes of resource_location are skipped, then bytes are read up to end_byte. After a successful read, the stream information is returned to the user.
+
+3.  A successful download record is inserted into resource_download_history.
+
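+The ranged read in step 2 can be sketched as follows; the real BML reads from HDFS through its FileSystem API, so this plain-stream version only illustrates the skip-then-copy logic:
+
+```java
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+
+/** Minimal sketch: skip to start_byte, then copy through end_byte (inclusive). */
+class RangedDownload {
+    static void copyRange(InputStream in, OutputStream out, long startByte, long endByte) throws IOException {
+        long toSkip = startByte - 1;                 // skip the first (start_byte - 1) bytes
+        while (toSkip > 0) {
+            long skipped = in.skip(toSkip);
+            if (skipped <= 0) throw new IOException("unexpected EOF while skipping");
+            toSkip -= skipped;
+        }
+        long remaining = endByte - startByte + 1;    // inclusive byte range
+        byte[] buf = new byte[8192];
+        while (remaining > 0) {
+            int n = in.read(buf, 0, (int) Math.min(buf.length, remaining));
+            if (n < 0) break;
+            out.write(buf, 0, n);
+            remaining -= n;
+        }
+    }
+}
+```
+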
+## Database Design
+
+1. Resource information table (resource)
+
+| Field name        | Purpose                                                    | Remarks                            |
+|-------------------|------------------------------------------------------------|------------------------------------|
+| resource_id       | A string that globally and uniquely identifies a resource  | Can be a UUID                      |
+| resource_location | The location where the resource is stored                  | e.g. hdfs:///tmp/bdp/${username}/  |
+| owner             | The owner of the resource                                  | e.g. zhangsan                      |
+| create_time       | Record creation time                                       |                                    |
+| is_share          | Whether it is shared                                       | 0 means not shared, 1 means shared |
+| update_time       | The last update time of the resource                       |                                    |
+| is_expire         | Whether the resource has expired                           |                                    |
+| expire_time       | The expiry time of the resource                            |                                    |
+
+2. Resource version information table (resource_version)
+
+| Field name        | Purpose                                | Remarks               |
+|-------------------|----------------------------------------|-----------------------|
+| resource_id       | Uniquely identifies the resource       | Composite primary key |
+| version           | The version of the resource file       |                       |
+| start_byte        | The start byte of the resource file    |                       |
+| end_byte          | The end byte of the resource file      |                       |
+| size              | The size of the resource file          |                       |
+| resource_location | The location of the resource file      |                       |
+| start_time        | The start time of the upload           |                       |
+| end_time          | The end time of the upload             |                       |
+| updater           | The user who performed the update      |                       |
+
+3. Resource download history table (resource_download_history)
+
+| Field       | Purpose                                    | Remarks                                          |
+|-------------|--------------------------------------------|--------------------------------------------------|
+| resource_id | The resource_id of the downloaded resource |                                                  |
+| version     | The version of the downloaded resource     |                                                  |
+| downloader  | The user who downloaded                    |                                                  |
+| start_time  | The download time                          |                                                  |
+| end_time    | The end time                               |                                                  |
+| status      | Whether it succeeded                       | 0 means success, 1 means failure                 |
+| err_msg     | The failure reason                         | null means success, otherwise the failure reason |
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Cache.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Cache.md
new file mode 100644
index 0000000..d28cbe2
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Cache.md
@@ -0,0 +1,95 @@
+## **CSCache Architecture**
+### **Problems to Solve**
+
+###  1.1 Problems the memory structure needs to solve:
+
+1. Support splitting by ContextType: to speed up storage and query performance
+
+2. Support splitting by ContextID: metadata isolation between ContextIDs must be achieved
+
+3. Support LRU: reclaim according to a specific algorithm
+
+4. Support keyword retrieval: support indexing by keywords
+
+5. Support indexing: support indexing directly by ContextKey
+
+6. Support traversal: traversal by ContextID and by ContextType must be supported
+
+###  1.2 Problems loading and parsing need to solve:
+
+1. Support parsing a ContextValue into an in-memory data structure: the corresponding keywords must be parsed out of the ContextKey and value.
+
+2. Must interface with the Persistence module to load and parse ContextID content.
+
+###  1.3 Problems Metric collection and the cleanup mechanism need to solve:
+
+1. Cleanup based on memory usage and usage frequency when JVM memory is insufficient
+
+2. Support counting the memory usage of each ContextID
+
+3. Support counting the usage frequency of each ContextID
+
+## **ContextCache Architecture**
+
+The architecture of ContextCache is shown below:
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-01.png)
+
+1.  ContextService: provides the external interfaces, including create, delete, update and query;
+
+2.  Cache: stores the context information, mapped by ContextKey and ContextValue
+
+3.  Index: the built keyword index, storing the mapping from the keywords of context information to ContextKeys;
+
+4.  Parser: parses the keywords of context information;
+
+5.  LoadModule: loads information from the persistence layer when ContextCache lacks the corresponding ContextID information;
+
+6.  AutoClear: cleans the ContextCache on demand when JVM memory is insufficient;
+
+7.  Listener: collects ContextCache metrics, such as memory usage and access counts.
+
+## **ContextCache Storage Structure Design**
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-02.png)
+
+The storage structure of ContextCache is divided into three layers:
+
+**ContextCache:** stores the mapping between ContextIDs and ContextIDValues, and reclaims ContextIDs by the LRU algorithm;
+
+**ContextIDValue:** holds the CSKeyValueContext that stores all context information and indexes of a ContextID, and tracks the memory and usage records of the ContextID.
+
+**CSKeyValueContext:** contains the CSInvertedIndexSet index set, stored by type and supporting keywords, and the CSKeyValueMapSet storage set that stores the ContextKeys and ContextValues.
+
+CSInvertedIndexSet: stores keyword indexes classified by CSType
+
+CSKeyValueMapSet: stores context information classified by CSType
+
+## **ContextCache UML Class Diagram Design**
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-03.png)
+
+## **ContextCache Sequence Diagram**
+
+The following diagram shows the overall flow of querying the corresponding ContextKeyValue in ContextCache by ContextID, KeyWord and ContextType.
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-04.png)
+
+Note: the ContextIDValueGenerator fetches the Array[ContextKeyValue] of the ContextID from the persistence layer and parses the keyword indexes and content of the ContextKeyValues through the ContextKeyValueParser.
+
+The flows of the other interfaces provided by ContextCacheService are similar and are not repeated here.
+
+## **KeyWord Parsing Logic**
+
+The concrete entity bean of a ContextValue must use the annotation \@keywordMethod on the get methods that can serve as keywords; for example, the getTableName method of Table must carry the \@keywordMethod annotation.
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-05.png)
+
+When parsing a ContextKeyValue, the ContextKeyValueParser scans all annotations decorated with KeywordMethod on the concrete object passed in, calls each get method, obtains the toString of the returned object, parses it by user-selectable rules, and stores the result in the keyword set. The rules include delimiters and regular expressions.
+
+Notes:
+
+1.  The annotation will be defined in the core module of cs
+
+2.  The decorated get methods cannot take parameters
+
+3.  The toString of the return object of a get method must return the keyword
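+
+A self-contained sketch of this annotation-scanning mechanism is shown below; the real \@keywordMethod annotation lives in the cs core module, so the standalone annotation and classes here are illustrative stand-ins:
+
+```java
+import java.lang.annotation.ElementType;
+import java.lang.annotation.Retention;
+import java.lang.annotation.RetentionPolicy;
+import java.lang.annotation.Target;
+import java.lang.reflect.Method;
+import java.util.HashSet;
+import java.util.Set;
+
+/** Stand-in for the @keywordMethod annotation defined in the cs core module. */
+@Retention(RetentionPolicy.RUNTIME)
+@Target(ElementType.METHOD)
+@interface KeywordMethod {}
+
+/** Example bean: getTableName is marked so the table name becomes a keyword. */
+class Table {
+    private final String tableName;
+    Table(String tableName) { this.tableName = tableName; }
+    @KeywordMethod public String getTableName() { return tableName; }
+}
+
+class KeywordParser {
+    /** Scans zero-argument annotated getters and collects their toString() values as keywords. */
+    static Set<String> parse(Object bean) throws Exception {
+        Set<String> keywords = new HashSet<>();
+        for (Method m : bean.getClass().getMethods()) {
+            if (m.isAnnotationPresent(KeywordMethod.class) && m.getParameterCount() == 0) {
+                Object value = m.invoke(bean);
+                if (value != null) keywords.add(value.toString()); // delimiter/regex rules omitted
+            }
+        }
+        return keywords;
+    }
+}
+```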
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Client.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Client.md
new file mode 100644
index 0000000..d72a37c
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Client.md
@@ -0,0 +1,61 @@
+## **Design Ideas and Implementation of the CSClient**
+
+
+The CSClient is the client through which every microservice interacts with the CSServer group; it must satisfy the following functions:
+
+1.  the ability of a microservice to apply to cs-server for a context object
+
+2.  the ability of a microservice to register context information with cs-server
+
+3.  the ability of a microservice to update context information in cs-server
+
+4.  the ability of a microservice to obtain context information from cs-server
+
+5.  certain special microservices can sense that context information has been modified in cs-server
+
+6.  the CSClient can give a clear indication when the whole csserver cluster fails
+
+7.  the CSClient must provide copying all context information of csid1 into a new csid2 to be handed to scheduled execution
+
+>   The overall approach is to send HTTP requests through Linkis's own linkis-httpclient, sending requests and receiving responses by implementing the entity classes of the various Actions and Results.
+
+### 1. The ability to apply for a context object
+
+To apply for a context object — for example, when a user creates a new workflow on the front end — dss-server needs to apply to cs-server for a context object. When applying, the identifying information of the workflow (project name, workflow name) is sent to the CSServer through the CSClient (at this point the gateway should dispatch randomly, since no csid information is carried yet). Once the application gets a correct result, a csid bound to this workflow is returned.
+
+### 2. The ability to register context information
+
+>   The ability to register context: for example, when a user uploads a resource file on the front-end page, the file content is uploaded to dss-server, which stores the content in the BML; then the resourceid and version obtained from the BML must be registered into cs-server. Here the registration capability of the csclient is needed; registration is done by passing in the csid together with the cskey
+>   and csvalue (resourceid and version).
+
+### 3. The ability to update registered context
+
+>   The ability to update context information. For example, if a user uploaded a resource file test.jar and csserver already holds the registered information, and the user updates this resource file while editing the workflow, then cs-server needs to update this content. Here the update interface of the csclient must be called.
+
+### 4. The ability to obtain context
+
+Context information registered in csserver needs to be read during variable substitution, resource file download, and when downstream nodes consume information produced by upstream nodes. For example, when the engine side executes code, it needs to download BML resources; it then interacts with csserver through the csclient to obtain the resourceid and version of the file in the BML before downloading it.
+
+### 5. Certain special microservices can sense that context information has been modified in cs-server
+
+This is based on the following example: a widget node is strongly linked with its upstream sql node. Say the user writes a sql in the sql node whose result set metadata is the three fields a, b, c; the downstream widget node is bound to this sql, and the user can edit these three fields in the page. Then the user changes the sql statement and the metadata becomes four fields a, b, c, d; currently the user has to refresh manually. We hope that if the script changes, the widget node can update the metadata automatically. The listener pattern is generally used here; for simplicity, heartbeat-based polling can also be used.
+
+### 6. The CSClient must provide copying all context information of csid1 into a new csid2 for scheduled execution
+
+Once a user publishes a project, the intent is to tag all the information of the project, like a git tag: the resource files and custom variables will no longer change, but some dynamic information, such as produced result sets, will still update the content of the csid. So the csclient needs to provide an interface for copying all context information of a csid1, for microservices to call.
+
+## **Implementation of the ClientListener Module**
+
+For a client, one sometimes wants to know as soon as possible that a certain csid and cskey have changed in cs-server. For example, the csclient of visualis must be able to know that the previous sql node has changed, and must be notified. The server side has a listener module, and the client side also needs one: if a client wants to listen to a certain cskey of a certain csid, it needs to register that cskey into the callbackEngine of the corresponding csserver instance. Later, if another client changes the content of that cskey, then when the first client performs a heartbeat, the callbackengine must notify it of all the cskeys the client listens to; this way, the first client knows that the content of the cskey has changed. When the heartbeat returns data, we should notify all listeners registered to the ContextClientListenerBus to invoke their on method.
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-01.png)
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-02.png)
+
+## **Implementation of the GatewayRouter**
+
+
+The Gateway plugin implements Context forwarding; the forwarding logic of the Gateway plugin goes through the GatewayRouter and splits into two cases. The first is applying for a context object: at that point, the information carried by the CSClient contains no csid, and the judgment logic should use Eureka's registration information; the first request will be dispatched randomly to a microservice instance.  
+The second case is when a ContextID is carried: we need to parse the csid; the parsing method is string splitting, obtaining the information of each instance, then using the instance information to judge through Eureka whether that microservice still exists; if it does, the request is sent to that microservice instance.
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-03.png)
\ No newline at end of file
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_HighAvailable.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_HighAvailable.md
new file mode 100644
index 0000000..05a165f
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_HighAvailable.md
@@ -0,0 +1,86 @@
+## **CS HA Architecture Design**
+
+### 1. CS HA Architecture Overview
+
+#### (1) CS HA architecture diagram
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-01.png)
+
+#### (2) Problems to solve
+
+-   HA of the Context instance object
+
+-   CSID generation requests when the Client creates a workflow
+
+-   The alias list of CS Servers
+
+-   Unified CSID generation and parsing rules
+
+#### (3) Main design ideas
+
+① Load balancing
+
+When a client creates a new workflow, it requests, with equal probability, the HA module of some Server to generate a new HAID. The HAID information contains the information of that main Server (called the main instance below) and a backup instance — the instance with the lowest load among the remaining Servers — plus a corresponding ContextID. The generated HAID is bound to the workflow and persisted to the database, and all subsequent change requests of this workflow are sent to the main instance, achieving an even distribution of load.
+
+② High availability
+
+In subsequent operations, when the client or the gateway determines that the main instance is unavailable, the operation request is forwarded to the backup instance for processing, achieving high availability of the service. The HA module of the backup instance first verifies the legality of the request based on the HAID information.
+
+③ Alias mechanism
+
+An alias mechanism is used for machines: the instance information contained in the HAID uses custom aliases, and the backend maintains an alias mapping queue. The HAID is used when interacting with the client, while the ContextID is used when interacting with other backend components; when implementing a concrete operation, a dynamic proxy mechanism converts the HAID into the ContextID.
+
+### 2. Module Design
+
+#### (1) Module diagram
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-02.png)
+
+#### (2) Concrete modules
+
+① ContextHAManager module
+
+Provides the interface for CS Server to generate CSIDs and HAIDs, and provides an alias conversion interface based on dynamic proxies;
+
+Calls the persistence module interface to persist CSID information;
+
+② AbstractContextHAManager module
+
+The abstraction of ContextHAManager; multiple kinds of ContextHAManager can be implemented from it;
+
+③ InstanceAliasManager module
+
+The RPC module provides the Instance-to-alias conversion interface, maintains the alias mapping queue, and provides queries between aliases and CS
+Server instances; it also provides an interface to verify whether a host is valid;
+
+④ HAContextIDGenerator module
+
+Generates a new HAID and wraps it in the format agreed with the client to return to the client. The HAID structure is as follows:
+
+\${length of the first instance}\${length of the second instance}{instance alias 1}{instance alias 2}{actual ID}, where the actual ID is the ContextID
+Key;
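+
+A minimal sketch of this HAID layout is shown below. The document does not specify how the two length fields are encoded, so this sketch assumes two-digit, zero-padded lengths; the real codec may differ:
+
+```java
+/** Hypothetical HAID codec assuming two-digit zero-padded length fields. */
+class HAIDCodec {
+    static String encode(String alias1, String alias2, String contextIdKey) {
+        return String.format("%02d%02d%s%s%s",
+                alias1.length(), alias2.length(), alias1, alias2, contextIdKey);
+    }
+
+    /** Returns {main instance alias, backup instance alias, ContextID key}. */
+    static String[] parse(String haid) {
+        int len1 = Integer.parseInt(haid.substring(0, 2));
+        int len2 = Integer.parseInt(haid.substring(2, 4));
+        String alias1 = haid.substring(4, 4 + len1);
+        String alias2 = haid.substring(4 + len1, 4 + len1 + len2);
+        String ctxId  = haid.substring(4 + len1 + len2);
+        return new String[]{alias1, alias2, ctxId};
+    }
+}
+```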
+
+⑤ ContextHAChecker module
+
+Provides the HAID validation interface. Every request received is checked for a valid ID format and for whether the current host is the main or the backup instance: if it is the main instance, validation passes; if it is the backup instance, validation passes only if the main instance has failed.
+
+⑥ BackupInstanceGenerator module
+
+Generates the backup instance and attaches it to the CSID information;
+
+⑦ MultiTenantBackupInstanceGenerator interface
+
+(reserved interface, not implemented yet)
+
+### 3. UML Class Diagram
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-03.png)
+
+### 4. HA Module Operation Sequence Diagram
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-04.png)
+
+Generating a CSID for the first time:
+the client issues the request, the Gateway forwards it to any Server, and the HA module generates the HAID, containing the main instance, the backup instance and the CSID, completing the binding of the workflow to the HAID.
+
+When the client sends a change request and the Gateway determines that the main instance has failed, the request is forwarded to the backup instance for processing. After the backup instance validates the HAID, it loads the instance and processes the request.
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Listener.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Listener.md
new file mode 100644
index 0000000..74329c1
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Listener.md
@@ -0,0 +1,33 @@
+## **Listener Architecture**
+
+In DSS, when a node changes its metadata information, the context information of the whole workflow changes; we expect all nodes to sense the change and update their metadata automatically. We use the listener pattern, with a heartbeat mechanism for polling, to keep the metadata of the context information consistent.
+
+### **How a client registers itself, registers a CSKey and updates a CSKey**
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-01.png)
+
+The main process is as follows:
+
+1. Registration: clients client1, client2, client3 and client4 register themselves and the CSKeys they want to listen to with the csserver via HTTP requests; the Service obtains the callback engine instance through the external interface and registers the clients and their corresponding CSKeys.
+
+2. Update: if node ClientX updates the content of a CSKey, the Service updates the CSKey cached in the ContextCache; the ContextCache delivers the update operation to the ListenerBus, which notifies the specific listener to consume it (i.e. the ContextKeyCallbackEngine updates the CSKeys corresponding to the Client); events not consumed within the timeout are removed automatically.
+
+3. Heartbeat mechanism:
+
+All clients use heartbeat messages to detect whether the values of the CSKeys in the ContextKeyCallbackEngine have changed.
+
+The ContextKeyCallbackEngine returns the updated CSKey values to all registered clients via the heartbeat mechanism. If a client's heartbeat times out, that client is removed.
+
+### **Listener UML Class Diagram**
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-02.png)
+
+Interface: ListenerManager
+
+Externally: provides the ListenerBus for delivering events.
+
+Internally: provides the callback engine for the concrete registration, access and update of events, heartbeat processing, and related logic.
+
+## **Listener callbackengine Sequence Diagram**
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-03.png)
\ No newline at end of file
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Persistence.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Persistence.md
new file mode 100644
index 0000000..13fae2f
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Persistence.md
@@ -0,0 +1,8 @@
+## **CSPersistence Architecture**
+
+### Persistence UML Diagram
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-persistence-01.png)
+
+
+The Persistence module mainly defines the persistence-related operations of ContextService. The entities mainly include the CSID, the ContextKeyValue-related, CSResource-related and CSTable-related entities.
\ No newline at end of file
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Search.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Search.md
new file mode 100644
index 0000000..073cfd7
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Search.md
@@ -0,0 +1,127 @@
+## **CSSearch架构**
+### **总体架构**
+
+如下图所示:
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-01.png)
+
+1.  ContextSearch: the query entry point. It accepts query conditions defined as a Map and returns the matching results according to those conditions.
+
+2.  Builder module: each condition type has a corresponding Parser, which converts a Map-form condition into a Condition object by invoking the ConditionBuilder logic. A Condition with complex logical relations is passed through the ConditionOptimizer, which optimizes the query plan with a cost-based algorithm.
+
+3.  Execution module: filters the matching results out of the Cache. Depending on the query target, there are three execution modes, Ruler, Fetcher and Matcher; the details are described below.
+
+4.  Evaluation module: computes the execution cost of conditions and keeps statistics on past executions.
+
+### **Query condition definition (ContextSearchCondition)**
+
+A query condition specifies how to filter out the matching part of a ContextKeyValue collection. Query conditions can be combined through logical operations into more complex query conditions.
+
+1.  Supports matching on ContextType, ContextScope and KeyWord
+
+    1.  Each corresponds to its own Condition type
+
+    2.  Each of these should have a corresponding index in the Cache
+
+2.  Supports contains/regex matching modes on the key
+
+    1.  ContainsContextSearchCondition: the key contains a given string
+
+    2.  RegexContextSearchCondition: the key matches a given regular expression
+
+3.  Supports the logical operations or, and and not
+
+    1.  Unary operation UnaryContextSearchCondition:
+
+>   supports a logical operation on a single operand, e.g. NotContextSearchCondition
+
+    2.  Binary operation BinaryContextSearchCondition:
+
+>   supports a logical operation on two operands, defined as LeftCondition and RightCondition, e.g. OrContextSearchCondition and AndContextSearchCondition
+
+4.  Each logical operation corresponds to an implementation class of one of the subclasses above
+
+5.  The UML class diagram of this part is as follows:
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-02.png)
+
+### **Building query conditions**
+
+1.  Supports building through ContextSearchConditionBuilder: if matches on ContextType, ContextScope, KeyWord and contains/regex are declared at the same time, they are automatically connected with the And logical operation
+
+2.  Supports logical operations between Conditions that return a new Condition: And, Or and Not (for the condition1.or(condition2) style, the top-level Condition interface must define the logical operation methods; see the sketch after this list)
+
+3.  Supports building from a Map through the ContextSearchParser corresponding to each underlying implementation class
+
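+A possible shape for that top-level interface, with the logical operations as default methods (a minimal sketch; ContextKeyValue is reduced to a stub and the names are assumptions based on the classes above):
+
+```java
+interface ContextKeyValue { String getKey(); }
+
+// Combining two Conditions yields a new Condition, enabling condition1.or(condition2).
+public interface Condition {
+    boolean matches(ContextKeyValue kv);
+
+    default Condition and(Condition other) { return kv -> matches(kv) && other.matches(kv); }
+    default Condition or(Condition other)  { return kv -> matches(kv) || other.matches(kv); }
+    default Condition not()                { return kv -> !matches(kv); }
+}
+```
+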
+### **Executing query conditions**
+
+1.  The three ways a query condition is applied:
+
+    1.  Ruler: filters the sub-array of matching ContextKeyValues out of an Array
+
+    2.  Matcher: decides whether a single ContextKeyValue matches the condition
+
+    3.  Fetcher: fetches the Array of matching ContextKeyValues from the ContextCache
+
+2.  Each underlying Condition has a corresponding Execution, which maintains the corresponding Ruler, Matcher and Fetcher (a sketch follows).
+
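+How the three modes relate can be shown with minimal stand-in interfaces (assumptions, not the Linkis classes): a Matcher is the primitive, a Ruler is a Matcher applied to an array, and a Fetcher pulls candidates from the cache first.
+
+```java
+import java.util.List;
+import java.util.stream.Collectors;
+
+interface ContextKeyValue { String getKey(); }
+interface ContextCache   { List<ContextKeyValue> values(); }
+interface Matcher        { boolean match(ContextKeyValue kv); }
+interface Ruler          { List<ContextKeyValue> rule(List<ContextKeyValue> all); }
+interface Fetcher        { List<ContextKeyValue> fetch(ContextCache cache); }
+
+final class Executions {
+    // A Ruler is just a Matcher applied to every element of an Array.
+    static Ruler rulerOf(Matcher m) {
+        return all -> all.stream().filter(m::match).collect(Collectors.toList());
+    }
+
+    // A naive Fetcher scans the whole cache; an indexed Condition would narrow it first.
+    static Fetcher fetcherOf(Matcher m) {
+        return cache -> rulerOf(m).rule(cache.values());
+    }
+}
+```
+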
+### **Query entry point ContextSearch**
+
+Provides the search interface, which takes a Map as its parameter and filters the matching data out of the Cache.
+
+1.  A Parser converts the Map-form condition into a Condition object
+
+2.  An Optimizer obtains the cost information and uses it to decide the order in which the query is executed
+
+3.  The corresponding Execution runs the Ruler/Fetcher/Matcher logic to produce the search result (see the sketch after the figure below)
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-03.png)
+
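+A rough sketch of how the three steps compose at the entry point (all names are assumptions for illustration; the parser, optimizer and execution collaborators are presumed fields of the surrounding class):
+
+```java
+// Hypothetical end-to-end pipeline behind ContextSearch.search(Map).
+public List<ContextKeyValue> search(Map<String, Object> conditionMap, ContextCache cache) {
+    Condition condition = parser.parse(conditionMap);   // step 1: Map -> Condition
+    Condition plan = optimizer.optimize(condition);     // step 2: cost-based reordering
+    return execution(plan).fetch(cache);                // step 3: run Ruler/Fetcher/Matcher
+}
+```
+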
+### **Query optimization**
+
+1.  OptimizedContextSearchCondition maintains the Cost and Statistics information of a condition:
+
+    1.  Cost information: the CostCalculator is responsible for deciding whether a Condition's Cost can be computed; if it can, the corresponding Cost object is returned
+
+    2.  Statistics information: start/end/execution time, number of input rows, number of output rows
+
+2.  A CostContextSearchOptimizer is implemented whose optimize method tunes a Condition based on its cost and converts it into an OptimizedContextSearchCondition object. The logic is described below:
+
+    1.  A complex Condition is decomposed, following its combination of logical operations, into a tree structure: every leaf node is a basic simple Condition, and every non-leaf node is a logical operation.
+
+>   Tree A below, for example, is a complex condition combined from the five simple conditions A, B, C, D and E through various logical operations.
+
+![](./../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-04.png)
+<center>(Tree A)</center>
+
+    2.  Executing such a Condition is in fact a depth-first, left-to-right traversal of this tree. By the commutativity of the logical operations, the left/right order of a node's children can be swapped, so all possible trees under all possible execution orders can be enumerated.
+
+>   Tree B below is another possible order of tree A; its execution result is exactly the same as tree A's, only the execution order of its parts is adjusted.
+
+![](./../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-05.png)
+<center>(Tree B)</center>
+
+    3.  For each tree, the cost is computed starting from the leaf nodes and aggregated up to the root node, which gives the tree's final cost; the tree with the lowest cost is taken as the optimal execution order.
+
+>   The rules for computing a node's cost are as follows:
+
+1.  For a leaf node, there are two attributes: Cost and Weight. Cost is the cost computed by the CostCalculator; Weight is assigned according to the node's position in the execution order, currently defaulting to 1 for the left child and 0.5 for the right child, subject to later tuning. (Weights exist because in some cases the left condition alone can already decide whether the whole combination matches, so the right condition does not have to be executed in every case and its actual overhead should be discounted by some proportion.)
+
+2.  For a non-leaf node, Cost = the sum of (Cost × Weight) over all child nodes; Weight is assigned with the same logic as for leaf nodes.
+
+>   Taking trees A and B as examples, the costs of the two trees are computed as shown below, where the numbers in each node are Cost\|Weight, and the Costs of the five simple conditions A, B, C, D and E are assumed to be 10, 100, 50, 10 and 100. It follows that tree B's cost is lower than tree A's, making it the better plan.
+
+
+<center class="half">
+    <img src="./../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-06.png" width="300"> <img src="./../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-07.png" width="300">
+</center>
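+
+The aggregation rule is easy to verify in a few lines. With the assumed weights (left 1, right 0.5), placing the cheaper subtree on the left always lowers the aggregated cost, which is exactly why tree B beats tree A:
+
+```java
+// Cost model from the rules above: a leaf carries its base cost,
+// an inner node sums child cost * weight (left weight 1, right weight 0.5).
+interface CostNode { double cost(); }
+
+record Leaf(double base) implements CostNode {
+    public double cost() { return base; }
+}
+
+record Op(CostNode left, CostNode right) implements CostNode {
+    public double cost() { return left.cost() * 1.0 + right.cost() * 0.5; }
+}
+
+// With A = 10 and B = 100: new Op(A, B).cost() = 10 + 50 = 60,
+// while new Op(B, A).cost() = 100 + 5 = 105 -> run the cheap condition first.
+```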
+
+3.  How the CostCalculator measures the Cost of a simple condition:
+
+    1.  Conditions that act on an index: the cost is determined by the distribution of the index values. For example, if the Array that condition A gets from the Cache has length 100 while condition B's has length 200, then condition A's cost is lower than condition B's.
+
+    2.  Conditions that require a traversal:
+
+        1.  An initial Cost is assigned according to the condition's own matching mode: e.g. 100 for Regex, 10 for Contains (the concrete values will be adjusted as needed during implementation)
+
+        2.  Based on the efficiency of historical queries (throughput per unit time), the Cost is continuously adjusted from its initial value to obtain a real-time Cost; a possible update rule is sketched below.
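+
+One plausible shape for that feedback loop is an exponentially weighted update driven by observed throughput (the formula and the alpha value are illustrative assumptions, not the actual implementation):
+
+```java
+final class CostFeedback {
+    // Blend the current cost estimate with the cost implied by the latest observed throughput.
+    static double adjust(double currentCost, double rowsPerMs, double alpha) {
+        double observedCost = 1.0 / Math.max(rowsPerMs, 1e-9);     // higher throughput => lower cost
+        return (1.0 - alpha) * currentCost + alpha * observedCost; // e.g. alpha = 0.2
+    }
+}
+```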
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Service.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Service.md
new file mode 100644
index 0000000..7e66f9c
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Service.md
@@ -0,0 +1,55 @@
+## **ContextService Architecture**
+
+### **Horizontal partitioning**
+
+Horizontally, the service is divided into three modules: Restful, Scheduler and Service
+
+#### Restful responsibilities:
+
+    Wraps the request as an httpjob and submits it to the Scheduler
+
+#### Scheduler responsibilities:
+
+    Finds the corresponding service through the ServiceName in the httpjob's protocol and lets that service execute the job
+
+#### Service responsibilities:
+
+    The module that actually executes the request logic; it wraps the ResponseProtocol and wakes up the thread waiting in Restful (sketched below)
+
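+The submit/wait/wake handshake between the three modules can be pictured with a future (a simplification under assumed names; the real implementation has its own Scheduler and protocol classes):
+
+```java
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.TimeUnit;
+
+// Hypothetical handshake: Restful parks on a future, the Service completes it.
+final class HttpJob {
+    final String serviceName;                       // used by the Scheduler for routing
+    final CompletableFuture<Object> response = new CompletableFuture<>();
+    HttpJob(String serviceName) { this.serviceName = serviceName; }
+}
+
+interface Scheduler { void submit(HttpJob job); }   // routes by serviceName to a Service
+
+final class RestfulLayer {
+    Object handle(HttpJob job, Scheduler scheduler) throws Exception {
+        scheduler.submit(job);                          // hand the job off
+        return job.response.get(30, TimeUnit.SECONDS);  // bounded wait, then wake-up
+    }
+}
+
+final class SomeService {
+    // Executes the request logic, then wakes the thread waiting in Restful.
+    void finish(HttpJob job, Object responseProtocol) {
+        job.response.complete(responseProtocol);
+    }
+}
+```
+
+The bounded wait is one way to satisfy the thread-model requirement below: even if a Service stalls, the timeout releases the waiting Restful thread, so the Restful pool cannot fill up indefinitely.
+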
+### **Vertical partitioning**
+Vertically, it is divided into four modules: Listener, History, ContextId and Context:
+
+#### Listener responsibilities:
+
+1.  Handles registration and binding of clients (writing them to the database and registering them in the CallbackEngine)
+
+2.  Heartbeat interface, which returns an Array[ListenerCallback] through the CallbackEngine
+
+#### History responsibilities:
+Creates and removes histories, and operates Persistence for DB persistence
+
+#### ContextId responsibilities:
+Mainly interfaces with Persistence to create, update and remove ContextIds
+
+#### Context responsibilities:
+
+1.  For methods such as remove and reset, it first operates Persistence for DB persistence and then updates the ContextCache
+
+2.  Wraps the query condition and asks the ContextSearch module for the matching ContextKeyValue data
+
+The request access steps are shown in the figure below:
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-01.png)
+
+## **UML class diagram**
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-02.png)
+
+## **Scheduler thread model**
+
+The Restful thread pool must be kept from filling up
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-03.png)
+
+The sequence diagram is as follows:
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-04.png)
+
+
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/README.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/README.md
new file mode 100644
index 0000000..fc64eb4
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/README.md
@@ -0,0 +1,124 @@
+## **Background**
+
+### **What is a Context?**
+
+All the information needed to keep some operation going. For example: when reading three books at the same time, the page you have reached in each book is the context for continuing to read that book.
+
+### **Why is CS (Context Service) needed?**
+
+CS solves the problem of sharing data and information across multiple systems in a data application development flow.
+
+For example, when system B needs to use a piece of data produced by system A, the usual approaches are:
+
+1.  System B calls the data access interface developed by system A;
+
+2.  System B reads the data that system A has written to some shared storage.
+
+With CS, systems A and B only need to interact with CS: the data and information to be shared are written to CS, and whatever needs to be read is read back from CS. No pairwise adapters between external systems are needed, which greatly reduces the call complexity and coupling of cross-system information sharing and makes the boundaries of each system clearer.
+
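+In code, the pairwise integrations collapse into two calls against CS. The client API below is purely illustrative (the method names and connect call are assumptions, not the actual linkis ContextClient interface):
+
+```java
+// System A publishes a shared result; system B consumes it; neither knows the other.
+ContextClient cs = ContextClient.connect("cs-server:9001", csid);
+
+// System A: register the shared data under a context key.
+cs.put("nodeA.output.table", "hive_db.result_table");
+
+// System B: read it back, with no A-specific adapter code.
+String table = cs.get("nodeA.output.table", String.class);
+```
+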
+## **Product scope**
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-01.png)
+
+
+### Metadata context
+
+The metadata context defines the metadata specification.
+
+The metadata context relies on the data middleware. Its main functions are:
+
+1.  Connects to the data middleware and can obtain all user metadata (including Hive table metadata, online database table metadata, and metadata of other NoSQL stores such as HBase and Kafka)
+
+2.  Whenever a node needs to access metadata, whether existing metadata or metadata inside an application template, it must go through the metadata context. The metadata context records all metadata used by the application template.
+
+3.  All new metadata produced by any node must be registered with the metadata context.
+
+4.  When an application template is extracted, the metadata context abstracts the template (mainly turning the database tables used into the \${db}.table form to avoid data permission issues) and packages all metadata it depends on.
+
+The metadata context is the foundation of interactive workflows and of application templates. Imagine: when a Widget is defined, how does it know the metric dimensions defined by DataWrangler? How does Qualitis validate the chart reports produced by a Widget?
+
+### Data context
+
+The data context defines the data specification.
+
+The data context depends on the data middleware and the Linkis computation middleware. Its main functions are:
+
+1.  Connects to the data middleware to obtain all user data information.
+
+2.  Connects to the computation middleware to obtain the data storage information of all nodes.
+
+3.  Whenever a node needs to write temporary results, it must go through the data context, which allocates the storage uniformly.
+
+4.  Whenever a node needs to access data, it must go through the data context.
+
+5.  The data context distinguishes dependency data from generated data; when an application template is extracted, it abstracts the template and packages all data it depends on.
+
+### Resource context
+
+The resource context defines the resource specification.
+
+The resource context mainly interacts with the Linkis computation middleware. It mainly manages:
+
+1.  User resource files (e.g. Jar and Zip files, properties files)
+
+2.  User UDFs
+
+3.  User algorithm packages
+
+4.  User scripts
+
+### Environment context
+
+The environment context defines the environment specification.
+
+It mainly covers:
+
+1.  The operating system
+
+2.  Software, such as Hadoop and Spark
+
+3.  Package dependencies, such as MySQL-JDBC.
+
+### Object context
+
+The runtime context is all the context information retained for an application template (workflow) during its definition and execution.
+
+It is used to assist in defining workflows/application templates, and to prompt for and complete all necessary information when they are executed.
+
+The runtime workflow is mainly used by Linkis.
+
+
+## **CS architecture diagram**
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-02.png)
+
+## **Architecture description:**
+
+### 1.  Client
+The entry point for external access to CS; the Client module provides HA capability;
+[Go to the Client architecture design](ContextService_Client.md)
+
+### 2.  Service module
+Provides the Restful interfaces, and wraps and processes the CS requests submitted by clients;
+[Go to the Service architecture design](ContextService_Service.md)
+
+### 3.  ContextSearch
+The context query module, which provides rich and powerful query capabilities for clients to look up the context's key-value pairs;
+[Go to the ContextSearch architecture design](ContextService_Search.md)
+
+### 4.  Listener
+The listener module of CS, which provides synchronous and asynchronous event consumption and, much like ZooKeeper, can notify clients in real time as soon as a key-value pair is updated;
+[Go to the Listener architecture design](ContextService_Listener.md)
+
+### 5.  ContextCache
+The in-memory cache module for contexts, which provides fast context retrieval and monitors and cleans up the JVM memory it uses;
+[Go to the ContextCache architecture design](ContextService_Cache.md)
+
+### 6.  HighAvailable
+Provides CS high availability;
+[Go to the HighAvailable architecture design](ContextService_HighAvailable.md)
+
+### 7.  Persistence
+The persistence capability of CS;
+[Go to the Persistence architecture design](ContextService_Persistence.md)
+
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/DataSource.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/DataSource.md
new file mode 100644
index 0000000..53b4740
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/DataSource.md
@@ -0,0 +1 @@
+To be uploaded
\ No newline at end of file
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/PublicService.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/PublicService.md
new file mode 100644
index 0000000..71dc115
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/PublicService.md
@@ -0,0 +1,31 @@
+
+## **Background**
+
+PublicService is a composite service made up of several sub-modules such as configuration, jobhistory, udf and variable. On top of version 0.9, Linkis 1.0 also adds label management. Across a user's different job executions, parameters do not have to be set again for every run: many reusable variables, functions and configurations can be reused once the user has set them, and they can of course also be shared with other users.
+
+## **Architecture diagram**
+
+![](../../Images/Architecture/linkis-publicService-01.png)
+
+## **Architecture description**
+
+1. linkis-configuration: provides query and save operations for global and common settings, especially engine configuration parameters
+
+2. linkis-jobhistory: dedicated to storing and querying historical task executions. Through the interfaces provided by jobhistory, users can retrieve the execution details of historical tasks, including logs, status and executed content. Historical tasks also support paginated queries; administrators can view all historical tasks, while ordinary users can only view their own.
+3. Linkis-udf: provides linkis's user function management, covering shared functions, personal functions, system functions, and the engines each function is used with. Functions the user has checked are loaded automatically when the engine starts, so users can reference them directly in code and reuse them across different scripts.
+
+4. Linkis-variable: provides linkis's global variable management, storing and querying user-defined global variables.
+
+5. linkis-instance-label: provides two modules, label server and label client, which label Engines and EMs and provide node-based label CRUD capabilities. The main functions are:
+
+-   Provides resource management for certain specific labels, helping RM manage resources at a finer granularity
+
+-   Provides labeling capability for users: certain users are labeled, so that when an engine is requested these labels are automatically taken into account
+
+-   Provides a label parsing module that can parse a user's request into a set of labels.
+
+-   Provides node label management, mainly CRUD of node labels, plus label resource management to manage the resources of certain labels, recording a label's maximum, minimum and used resources.
+
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/README.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/README.md
new file mode 100644
index 0000000..a980e5b
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/README.md
@@ -0,0 +1,91 @@
+PublicEnhancementService (PS) architecture design
+=================================================
+
+PublicEnhancementService (PS): the public enhancement service, a module that provides other microservice modules with unified configuration management, context service, material library (BML), data source management, microservice management, historical task queries and other functions.
+
+![](../../Images/Architecture/PublicEnhencement架构图.png)
+
+Introduction to the second-level modules:
+=========================================
+
+BML material library
+--------------------
+
+The material management system of linkis, mainly used to store users' various file data, including user scripts, resource files, third-party Jar packages, etc.; it can also store the class libraries that engines need at runtime.
+
+| Core class      | Core function                                                                |
+|-----------------|------------------------------------------------------------------------------|
+| UploadService   | Provides the resource upload service                                         |
+| DownloadService | Provides the resource download service                                       |
+| ResourceManager | Provides a unified management entry for uploading and downloading resources  |
+| VersionManager  | Provides resource version tagging and version management                     |
+| ProjectManager  | Provides project-level resource management and control                       |
+
+Configuration unified configuration management
+----------------------------------------------
+
+Configuration provides a three-level "user-engine-application" configuration management scheme, which lets users configure custom engine parameters under the various connected applications.
+
+| Core class           | Core function                                           |
+|----------------------|---------------------------------------------------------|
+| CategoryService      | Provides management of application and engine catalogs  |
+| ConfigurationService | Provides unified management of user configurations      |
+
+ContextService context service
+------------------------------
+
+ContextService solves the problem of sharing data and information across multiple systems in a data application development flow.
+
+| Core class          | Core function                                                  |
+|---------------------|----------------------------------------------------------------|
+| ContextCacheService | Provides caching of context information                        |
+| ContextClient       | Lets other microservices interact with the CSServer group      |
+| ContextHAManager    | Provides high availability for ContextService                  |
+| ListenerManager     | Provides the message bus capability                            |
+| ContextSearch       | Provides the query entry point                                 |
+| ContextService      | Implements the overall execution logic of the context service  |
+
+Datasource data source management
+---------------------------------
+
+Datasource gives other microservices the ability to connect to different data sources.
+
+| Core class        | Core function                                              |
+|-------------------|------------------------------------------------------------|
+| datasource-server | Provides the ability to connect to different data sources  |
+
+InstanceLabel microservice management
+-------------------------------------
+
+InstanceLabel provides registration and labeling for other microservices that connect to linkis.
+
+| Core class      | Core function                                             |
+|-----------------|-----------------------------------------------------------|
+| InsLabelService | Provides microservice registration and label management   |
+
+Jobhistory historical task management
+-------------------------------------
+
+Jobhistory provides users with queries, progress and log display for linkis historical tasks, and gives administrators a unified view of historical tasks.
+
+| Core class             | Core function                               |
+|------------------------|---------------------------------------------|
+| JobHistoryQueryService | Provides the historical task query service  |
+
+Variable user-defined variable management
+-----------------------------------------
+
+Variable provides storage and use of user-defined variables.
+
+| Core class      | Core function                                        |
+|-----------------|------------------------------------------------------|
+| VariableService | Provides storage and use of user-defined variables   |
+
+UDF user-defined function management
+------------------------------------
+
+UDF provides user-defined functions, which users can import themselves when writing code.
+
+| Core class | Core function                               |
+|------------|---------------------------------------------|
+| UDFService | Provides the user-defined function service  |
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/README.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/README.md
new file mode 100644
index 0000000..b28cec0
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/README.md
@@ -0,0 +1,24 @@
+## 1. Document structure
+Linkis 1.0 divides all its microservices into three broad categories: public enhancement services, computation governance services, and microservice governance services. The diagram below shows the Linkis 1.0 architecture.
+
+![Linkis1.0 architecture diagram](./../Images/Architecture/Linkis1.0-architecture.png)
+
+
+The responsibilities of each category are as follows:
+
+1. Public enhancement services are the material library service, context service, data source service, public service and so on already provided by Linkis 0.X;
+
+2. Microservice governance services are the Spring Cloud Gateway, Eureka and Open Feign already provided by Linkis 0.X, and Linkis 1.0 will also provide support for Nacos;
+
+3. Computation governance services are the core focus of Linkis 1.0: across the three stages of submit -> prepare -> execute, they comprehensively upgrade Linkis's ability to manage and control the execution of user tasks.
+
+Below is the table of contents of the Linkis 1.0 architecture documents:
+
+1. For the architectural characteristics of Linkis 1.0, please read [the differences between Linkis 1.0 and Linkis 0.x](Linkis1.0与Linkis0.X的区别简述.md).
+
+2. For documents on the Linkis 1.0 public enhancement services, please read [Public Enhancement Services](Public_Enhancement_Services/README.md).
+
+3. For documents on Linkis 1.0 microservice governance, please read [Microservice Governance](Microservice_Governance_Services/README.md).
+
+4. For documents on the computation governance services introduced by Linkis 1.0, please read [Computation Governance Services](Computation_Governance_Services/README.md).
+
diff --git a/Linkis-Doc-master/zh_CN/Deployment_Documents/Cluster_Deployment.md b/Linkis-Doc-master/zh_CN/Deployment_Documents/Cluster_Deployment.md
new file mode 100644
index 0000000..c863777
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Deployment_Documents/Cluster_Deployment.md
@@ -0,0 +1,100 @@
+Introduction to the distributed deployment scheme
+=================================================
+
+Linkis's single-machine deployment is simple, but it cannot be used in a production environment, because too many processes on one server put the server under too much pressure. The choice of deployment scheme depends on the company's user scale, usage habits, and the number of simultaneous cluster users; in general, we choose the deployment mode based on the number of simultaneous Linkis users and the users' preferences among the execution engines.
+
+1. Multi-node deployment reference
+----------------------------------
+
+Linkis 1.0 keeps the SpringCloud-based microservice architecture, in which every microservice supports an active-active deployment scheme. Of course, different microservices play different roles in the system; some are called very frequently and are more likely to run under high load. **On the machines where EngineConnManager is installed, the memory load is relatively high because the users' engine processes are started there, while other types of microservices put a relatively low load on their machines.** For this kind of microservice we recommend starting several instances in a distributed deployment. The total resources dynamically used by Linkis can be computed as follows.
+
+**EngineConnManager** total resources used = total memory + total cores =
+
+**number of simultaneous online users \* (memory used by all engine types) \* max concurrency per user + number of simultaneous online users \*
+(cores used by all engine types) \* max concurrency per user**
+
+For example, if only the spark, hive and python engines are used and the max concurrency per user is 1, then with 50 simultaneous users, 1G of Spark driver memory, 1G of Hive
+client memory, 1G per python client, and 1 core per engine, the total is 50 \* (1+1+1)G \*
+1 + 50 \* (1+1+1) cores \* 1 = 150G of memory + 150 CPU cores.
+
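+The same arithmetic as a small helper, reproducing the example figures (illustrative only):
+
+```java
+final class Sizing {
+    // Totals for the EngineConnManager machines: memory in GB and CPU cores.
+    static long[] totals(int onlineUsers, int maxConcurrencyPerUser,
+                         long memGbPerEngineSet, long coresPerEngineSet) {
+        long memGb = (long) onlineUsers * memGbPerEngineSet * maxConcurrencyPerUser;
+        long cores = (long) onlineUsers * coresPerEngineSet * maxConcurrencyPerUser;
+        return new long[] { memGb, cores };
+    }
+}
+// Sizing.totals(50, 1, 3, 3) -> { 150, 150 }: the 150G + 150-core example above.
+```
+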
+In a distributed deployment, the memory used by the microservices themselves can be estimated at 2G each; when there are many users, we recommend increasing the memory of ps-publicservice to 6G and reserving another 10G of memory as a buffer.
+
+The configurations below take **each user starting two engines at the same time** as an example; **for machines with 64G of memory**, the reference configurations are as follows:
+
+-   10-50 simultaneous online users
+
+>   **Recommended server configuration:** 4 servers, named S1, S2, S3 and S4
+
+| Service              | Host name | Remark                                |
+|----------------------|-----------|---------------------------------------|
+| cg-engineconnmanager | S1, S2    | Deployed separately on each machine   |
+| Other services       | S3, S4    | Eureka high-availability deployment   |
+
+-   50-100 simultaneous online users
+
... 8746 lines suppressed ...

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@linkis.apache.org
For additional commands, e-mail: commits-help@linkis.apache.org


[incubator-linkis-website] 43/50: Merge pull request #4 from casionone/asf-staging

Posted by pe...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

peacewong pushed a commit to branch asf-staging
in repository https://gitbox.apache.org/repos/asf/incubator-linkis-website.git

commit 5d4504511a26dc8a53771341fba9904856b0c70b
Merge: 529ece4 c73b169
Author: johnnywang <wp...@gmail.com>
AuthorDate: Thu Oct 21 17:50:36 2021 +0800

    Merge pull request #4 from casionone/asf-staging
    
    add asf.yaml file for asf-staging

 .asf.yaml |  28 +++++++++
 LICENSE   | 201 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 229 insertions(+)

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@linkis.apache.org
For additional commands, e-mail: commits-help@linkis.apache.org


[incubator-linkis-website] 50/50: Merge pull request #9 from lucaszhu2zgf/asf-staging

Posted by pe...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

peacewong pushed a commit to branch asf-staging
in repository https://gitbox.apache.org/repos/asf/incubator-linkis-website.git

commit 535fad6c7658270856c8333cabc4b7f59afc913d
Merge: 76ffb1f 4205780
Author: johnnywang <wp...@gmail.com>
AuthorDate: Thu Oct 28 20:06:58 2021 +0800

    Merge pull request #9 from lucaszhu2zgf/asf-staging
    
    bugfix for introduction page

 assets/404.f24f37c0.js                                      |   1 -
 ...-manager-03.5aaff6ed.png => app-manager-01.5aaff6ed.png} | Bin
 assets/app_manager.bed25273.js                              |   2 +-
 assets/{download.4f121175.js => download.65cfe27b.js}       |   2 +-
 assets/{event.b677bf34.js => event.c4950b6a.js}             |   2 +-
 assets/index.83dab580.js                                    |   1 +
 assets/index.c319b82e.js                                    |   1 -
 assets/{index.ba4cbe23.js => index.dac2c111.js}             |   2 +-
 assets/{linkis.cdbb993f.js => linkis.513065ec.js}           |   2 +-
 assets/manager.6973d707.js                                  |   2 +-
 index.html                                                  |   2 +-
 11 files changed, 8 insertions(+), 9 deletions(-)

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@linkis.apache.org
For additional commands, e-mail: commits-help@linkis.apache.org


[incubator-linkis-website] 04/50: ADD: add home page related pages

Posted by pe...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

peacewong pushed a commit to branch asf-staging
in repository https://gitbox.apache.org/repos/asf/incubator-linkis-website.git

commit 6dcddb62e90e03fbee494df1b040a4cc46231360
Author: lucaszhu <lu...@webank.com>
AuthorDate: Wed Sep 29 15:36:05 2021 +0800

    ADD: add home page related pages
---
 index.html             |  2 +-
 src/App.vue            |  4 +--
 src/main.js            |  4 +++
 src/pages/blog.vue     |  4 ++-
 src/pages/docs.vue     |  3 +++
 src/pages/download.vue |  3 +++
 src/pages/faq.vue      |  3 +++
 src/pages/home.vue     | 73 ++++++++++++++++++++++++++++++++++++++++++++++++++
 src/pages/team.vue     |  3 +++
 src/router.js          | 12 ++++-----
 10 files changed, 101 insertions(+), 10 deletions(-)

diff --git a/index.html b/index.html
index 030a6ff..d8b1bf3 100644
--- a/index.html
+++ b/index.html
@@ -4,7 +4,7 @@
     <meta charset="UTF-8" />
     <link rel="icon" href="/favicon.ico" />
     <meta name="viewport" content="width=device-width, initial-scale=1.0" />
-    <title>Vite App</title>
+    <title>Apache Linkis</title>
   </head>
   <body>
     <div id="app"></div>
diff --git a/src/App.vue b/src/App.vue
index 135a713..f8720df 100644
--- a/src/App.vue
+++ b/src/App.vue
@@ -22,7 +22,7 @@
       </div>
     </div>
   </nav>
-  <router-view/>
+  <router-view></router-view>
   <footer class="footer">
     <div class="ctn-block">
       <div class="footer-links-row">
@@ -89,7 +89,7 @@
       transition: all ease .2s;
       cursor: pointer;
       &:hover,
-      &.router-link-active{
+      &.router-link-exact-active{
         color: @active-color;
         border-color: @active-color;
       }
diff --git a/src/main.js b/src/main.js
index 73108e3..c8d6c67 100644
--- a/src/main.js
+++ b/src/main.js
@@ -8,6 +8,10 @@ const router = createRouter({
   routes,
 })
 
+router.resolve({
+  name: 'home'
+}).href
+
 const app = createApp(App);
 app.use(router);
 
diff --git a/src/pages/blog.vue b/src/pages/blog.vue
index 8b13789..08ba385 100644
--- a/src/pages/blog.vue
+++ b/src/pages/blog.vue
@@ -1 +1,3 @@
-
+<template>
+  <div>blog</div>
+</template>
diff --git a/src/pages/docs.vue b/src/pages/docs.vue
index e69de29..b33becc 100644
--- a/src/pages/docs.vue
+++ b/src/pages/docs.vue
@@ -0,0 +1,3 @@
+<template>
+  <div>docs</div>
+</template>
diff --git a/src/pages/download.vue b/src/pages/download.vue
index e69de29..35a96c7 100644
--- a/src/pages/download.vue
+++ b/src/pages/download.vue
@@ -0,0 +1,3 @@
+<template>
+  <div>download</div>
+</template>
diff --git a/src/pages/faq.vue b/src/pages/faq.vue
index 8b13789..d5741fa 100644
--- a/src/pages/faq.vue
+++ b/src/pages/faq.vue
@@ -1 +1,4 @@
 
+<template>
+  <div>faq</div>
+</template>
diff --git a/src/pages/home.vue b/src/pages/home.vue
index e69de29..c722258 100644
--- a/src/pages/home.vue
+++ b/src/pages/home.vue
@@ -0,0 +1,73 @@
+<template>
+  <div class="ctn-block home-page text-center">
+    <div class="banner">
+      <h1 class="home-title"><span class="apache">Apache</span> <span class="linkis">Linkis</span> <span class="badge">Incubating</span></h1>
+      <p class="home-desc">Decouple the upper applications and the underlying data<br>engines by building a middleware layer.</p>
+      <div class="botton-row">
+        <a href="/" class="corner-botton black">Get Started</a>
+        <a href="/" class="corner-botton white">GitHub</a>
+      </div>
+    </div>
+  </div>
+</template>
+<style lang="less" scoped>
+  @import url('/src/style/base.less');
+
+  .home-page {
+    .banner {
+      padding: 168px 0;
+      .home-title {
+        margin-bottom: 20px;
+        font-size: 60px;
+        line-height: 84px;
+
+        .apache {
+          color: @enhance-color;
+        }
+
+        .linkis {
+          color: #1A529C;
+        }
+
+        .badge {
+          font-size: 24px;
+          font-weight: 400;
+        }
+      }
+
+      .home-desc {
+        margin-bottom: 80px;
+        font-size: 24px;
+        color: @enhance-color;
+        text-align: center;
+        line-height: 26px;
+        font-weight: 400;
+      }
+
+      .botton-row{
+        display: flex;
+        justify-content: center;
+        .corner-botton{
+          margin-right: 22px;
+          padding: 0 40px;
+          height: 46px;
+          line-height: 46px;
+          border-radius: 25px;
+          &:last-child{
+            margin-right: 0;
+          }
+          &.black{
+            color: #fff;
+            background: @enhance-color;
+            border: 1px solid  @enhance-color;
+          }
+          &.white{
+            color: @enhance-color;
+            background: #fff;
+            border: 1px solid @enhance-color;
+          }
+        }
+      }
+    }
+  }
+</style>
\ No newline at end of file
diff --git a/src/pages/team.vue b/src/pages/team.vue
index e69de29..e98fedf 100644
--- a/src/pages/team.vue
+++ b/src/pages/team.vue
@@ -0,0 +1,3 @@
+<template>
+  <div>team</div>
+</template>
diff --git a/src/router.js b/src/router.js
index 49d76f7..d7f0ae9 100644
--- a/src/router.js
+++ b/src/router.js
@@ -3,12 +3,12 @@ const routes = [
     path: '/',
     component: () => import(/* webpackChunkName: "group-app" */ './app.vue'),
     children: [
-      { path: '', component: () => import(/* webpackChunkName: "group-home" */ './pages/home.vue') },
-      { path: '/docs', component: () => import(/* webpackChunkName: "group-docs" */ './pages/docs.vue') },
-      { path: '/faq', component: () => import(/* webpackChunkName: "group-faq" */ './pages/faq.vue') },
-      { path: '/download', component: () => import(/* webpackChunkName: "group-download" */ './pages/download.vue') },
-      { path: '/blog', component: () => import(/* webpackChunkName: "group-blog" */ './pages/blog.vue') },
-      { path: '/team', component: () => import(/* webpackChunkName: "group-team" */ './pages/team.vue') },
+      { path: '', name: 'home', component: () => import(/* webpackChunkName: "group-home" */ './pages/home.vue') },
+      { path: 'docs', name: 'docs', component: () => import(/* webpackChunkName: "group-docs" */ './pages/docs.vue') },
+      { path: 'faq', name: 'faq', component: () => import(/* webpackChunkName: "group-faq" */ './pages/faq.vue') },
+      { path: 'download', name: 'download', component: () => import(/* webpackChunkName: "group-download" */ './pages/download.vue') },
+      { path: 'blog', name: 'blog', component: () => import(/* webpackChunkName: "group-blog" */ './pages/blog.vue') },
+      { path: 'team', name: 'team', component: () => import(/* webpackChunkName: "group-team" */ './pages/team.vue') },
     ]
   }
 ]

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@linkis.apache.org
For additional commands, e-mail: commits-help@linkis.apache.org


[incubator-linkis-website] 02/50: ADD: add basic page structure

Posted by pe...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

peacewong pushed a commit to branch asf-staging
in repository https://gitbox.apache.org/repos/asf/incubator-linkis-website.git

commit 5da7d44fd5a64b65295ba0082d51ec6b8166554b
Author: lucaszhu <lu...@webank.com>
AuthorDate: Wed Sep 29 10:51:47 2021 +0800

    ADD: add basic page structure
---
 package-lock.json       | 174 ++++++++++++++++++++++++++++++++++++++++++++++++
 package.json            |   4 +-
 src/App.vue             |  68 ++++++++++++++++---
 src/main.js             |  12 +++-
 src/pages/blog.vue      |   1 +
 src/pages/docs.vue      |   0
 src/pages/download.vue  |   0
 src/pages/faq.vue       |   1 +
 src/pages/home.vue      |   0
 src/pages/team.vue      |   0
 src/router.js           |  16 +++++
 src/style/base.less     |  43 ++++++++++++
 src/style/virables.less |   2 +
 13 files changed, 309 insertions(+), 12 deletions(-)

diff --git a/package-lock.json b/package-lock.json
index 194ae65..d195bbd 100644
--- a/package-lock.json
+++ b/package-lock.json
@@ -61,6 +61,11 @@
         "@vue/shared": "3.2.19"
       }
     },
+    "@vue/devtools-api": {
+      "version": "6.0.0-beta.18",
+      "resolved": "http://10.107.103.115:8001/@vue/devtools-api/download/@vue/devtools-api-6.0.0-beta.18.tgz",
+      "integrity": "sha1-hMCv+SiaVylMuXSQgR9p6KCmf4o="
+    },
     "@vue/reactivity": {
       "version": "3.2.19",
       "resolved": "http://10.107.103.115:8001/@vue/reactivity/download/@vue/reactivity-3.2.19.tgz",
@@ -114,11 +119,40 @@
       "resolved": "http://10.107.103.115:8001/@vue/shared/download/@vue/shared-3.2.19.tgz",
       "integrity": "sha1-ER7D2hgzfYYnREaYTEmSWxsrLdc="
     },
+    "copy-anything": {
+      "version": "2.0.3",
+      "resolved": "http://10.107.103.115:8001/copy-anything/download/copy-anything-2.0.3.tgz",
+      "integrity": "sha1-hCQHugJGaw34RIGbvjuuu+XUXYc=",
+      "dev": true,
+      "requires": {
+        "is-what": "^3.12.0"
+      }
+    },
     "csstype": {
       "version": "2.6.18",
       "resolved": "http://10.107.103.115:8001/csstype/download/csstype-2.6.18.tgz",
       "integrity": "sha1-mAqLUwhfNK8xNBCvBk8r0kF4Qhg="
     },
+    "debug": {
+      "version": "3.2.7",
+      "resolved": "http://10.107.103.115:8001/debug/download/debug-3.2.7.tgz",
+      "integrity": "sha1-clgLfpFF+zm2Z2+cXl+xALk0F5o=",
+      "dev": true,
+      "optional": true,
+      "requires": {
+        "ms": "^2.1.1"
+      }
+    },
+    "errno": {
+      "version": "0.1.8",
+      "resolved": "http://10.107.103.115:8001/errno/download/errno-0.1.8.tgz",
+      "integrity": "sha1-i7Ppx9Rjvkl2/4iPdrSAnrwugR8=",
+      "dev": true,
+      "optional": true,
+      "requires": {
+        "prr": "~1.0.1"
+      }
+    },
     "esbuild": {
       "version": "0.12.29",
       "resolved": "http://10.107.103.115:8001/esbuild/download/esbuild-0.12.29.tgz",
@@ -143,6 +177,13 @@
       "integrity": "sha1-pWiZ0+o8m6uHS7l3O3xe3pL0iV0=",
       "dev": true
     },
+    "graceful-fs": {
+      "version": "4.2.8",
+      "resolved": "http://10.107.103.115:8001/graceful-fs/download/graceful-fs-4.2.8.tgz",
+      "integrity": "sha1-5BK40z9eAGWTy9PO5t+fLOu+gCo=",
+      "dev": true,
+      "optional": true
+    },
     "has": {
       "version": "1.0.3",
       "resolved": "http://10.107.103.115:8001/has/download/has-1.0.3.tgz",
@@ -152,6 +193,23 @@
         "function-bind": "^1.1.1"
       }
     },
+    "iconv-lite": {
+      "version": "0.4.24",
+      "resolved": "http://10.107.103.115:8001/iconv-lite/download/iconv-lite-0.4.24.tgz",
+      "integrity": "sha1-ICK0sl+93CHS9SSXSkdKr+czkIs=",
+      "dev": true,
+      "optional": true,
+      "requires": {
+        "safer-buffer": ">= 2.1.2 < 3"
+      }
+    },
+    "image-size": {
+      "version": "0.5.5",
+      "resolved": "http://10.107.103.115:8001/image-size/download/image-size-0.5.5.tgz",
+      "integrity": "sha1-Cd/Uq50g4p6xw+gLiZA3jfnjy5w=",
+      "dev": true,
+      "optional": true
+    },
     "is-core-module": {
       "version": "2.6.0",
       "resolved": "http://10.107.103.115:8001/is-core-module/download/is-core-module-2.6.0.tgz",
@@ -161,6 +219,30 @@
         "has": "^1.0.3"
       }
     },
+    "is-what": {
+      "version": "3.14.1",
+      "resolved": "http://10.107.103.115:8001/is-what/download/is-what-3.14.1.tgz",
+      "integrity": "sha1-4SIvRt3ahd6tD9HJ3xMXYOd3VcE=",
+      "dev": true
+    },
+    "less": {
+      "version": "4.1.1",
+      "resolved": "http://10.107.103.115:8001/less/download/less-4.1.1.tgz",
+      "integrity": "sha1-Fb8lOpk5eR3GkIiMP/Qk8+bH7bo=",
+      "dev": true,
+      "requires": {
+        "copy-anything": "^2.0.1",
+        "errno": "^0.1.1",
+        "graceful-fs": "^4.1.2",
+        "image-size": "~0.5.0",
+        "make-dir": "^2.1.0",
+        "mime": "^1.4.1",
+        "needle": "^2.5.2",
+        "parse-node-version": "^1.0.1",
+        "source-map": "~0.6.0",
+        "tslib": "^1.10.0"
+      }
+    },
     "magic-string": {
       "version": "0.25.7",
       "resolved": "http://10.107.103.115:8001/magic-string/download/magic-string-0.25.7.tgz",
@@ -169,6 +251,31 @@
         "sourcemap-codec": "^1.4.4"
       }
     },
+    "make-dir": {
+      "version": "2.1.0",
+      "resolved": "http://10.107.103.115:8001/make-dir/download/make-dir-2.1.0.tgz",
+      "integrity": "sha1-XwMQ4YuL6JjMBwCSlaMK5B6R5vU=",
+      "dev": true,
+      "optional": true,
+      "requires": {
+        "pify": "^4.0.1",
+        "semver": "^5.6.0"
+      }
+    },
+    "mime": {
+      "version": "1.6.0",
+      "resolved": "http://10.107.103.115:8001/mime/download/mime-1.6.0.tgz",
+      "integrity": "sha1-Ms2eXGRVO9WNGaVor0Uqz/BJgbE=",
+      "dev": true,
+      "optional": true
+    },
+    "ms": {
+      "version": "2.1.3",
+      "resolved": "http://10.107.103.115:8001/ms/download/ms-2.1.3.tgz",
+      "integrity": "sha1-V0yBOM4dK1hh8LRFedut1gxmFbI=",
+      "dev": true,
+      "optional": true
+    },
     "nanocolors": {
       "version": "0.2.9",
       "resolved": "http://10.107.103.115:8001/nanocolors/download/nanocolors-0.2.9.tgz",
@@ -179,12 +286,37 @@
       "resolved": "http://10.107.103.115:8001/nanoid/download/nanoid-3.1.28.tgz",
       "integrity": "sha1-PAG6wUy2xWgFaQFMxlovJkJMa9Q="
     },
+    "needle": {
+      "version": "2.9.1",
+      "resolved": "http://10.107.103.115:8001/needle/download/needle-2.9.1.tgz",
+      "integrity": "sha1-ItHf++NJDCuD4wH3cJtnNs2PJoQ=",
+      "dev": true,
+      "optional": true,
+      "requires": {
+        "debug": "^3.2.6",
+        "iconv-lite": "^0.4.4",
+        "sax": "^1.2.4"
+      }
+    },
+    "parse-node-version": {
+      "version": "1.0.1",
+      "resolved": "http://10.107.103.115:8001/parse-node-version/download/parse-node-version-1.0.1.tgz",
+      "integrity": "sha1-4rXb7eAOf6m8NjYH9TMn6LBzGJs=",
+      "dev": true
+    },
     "path-parse": {
       "version": "1.0.7",
       "resolved": "http://10.107.103.115:8001/path-parse/download/path-parse-1.0.7.tgz",
       "integrity": "sha1-+8EUtgykKzDZ2vWFjkvWi77bZzU=",
       "dev": true
     },
+    "pify": {
+      "version": "4.0.1",
+      "resolved": "http://10.107.103.115:8001/pify/download/pify-4.0.1.tgz",
+      "integrity": "sha1-SyzSXFDVmHNcUCkiJP2MbfQeMjE=",
+      "dev": true,
+      "optional": true
+    },
     "postcss": {
       "version": "8.3.8",
       "resolved": "http://10.107.103.115:8001/postcss/download/postcss-8.3.8.tgz",
@@ -195,6 +327,13 @@
         "source-map-js": "^0.6.2"
       }
     },
+    "prr": {
+      "version": "1.0.1",
+      "resolved": "http://10.107.103.115:8001/prr/download/prr-1.0.1.tgz",
+      "integrity": "sha1-0/wRS6BplaRexok/SEzrHXj19HY=",
+      "dev": true,
+      "optional": true
+    },
     "resolve": {
       "version": "1.20.0",
       "resolved": "http://10.107.103.115:8001/resolve/download/resolve-1.20.0.tgz",
@@ -214,6 +353,27 @@
         "fsevents": "~2.3.2"
       }
     },
+    "safer-buffer": {
+      "version": "2.1.2",
+      "resolved": "http://10.107.103.115:8001/safer-buffer/download/safer-buffer-2.1.2.tgz",
+      "integrity": "sha1-RPoWGwGHuVSd2Eu5GAL5vYOFzWo=",
+      "dev": true,
+      "optional": true
+    },
+    "sax": {
+      "version": "1.2.4",
+      "resolved": "http://10.107.103.115:8001/sax/download/sax-1.2.4.tgz",
+      "integrity": "sha1-KBYjTiN4vdxOU1T6tcqold9xANk=",
+      "dev": true,
+      "optional": true
+    },
+    "semver": {
+      "version": "5.7.1",
+      "resolved": "http://10.107.103.115:8001/semver/download/semver-5.7.1.tgz",
+      "integrity": "sha1-qVT5Ma66UI0we78Gnv8MAclhFvc=",
+      "dev": true,
+      "optional": true
+    },
     "source-map": {
       "version": "0.6.1",
       "resolved": "http://10.107.103.115:8001/source-map/download/source-map-0.6.1.tgz",
@@ -229,6 +389,12 @@
       "resolved": "http://10.107.103.115:8001/sourcemap-codec/download/sourcemap-codec-1.4.8.tgz",
       "integrity": "sha1-6oBL2UhXQC5pktBaOO8a41qatMQ="
     },
+    "tslib": {
+      "version": "1.14.1",
+      "resolved": "http://10.107.103.115:8001/tslib/download/tslib-1.14.1.tgz",
+      "integrity": "sha1-zy04vcNKE0vK8QkcQfZhni9nLQA=",
+      "dev": true
+    },
     "vite": {
       "version": "2.5.10",
       "resolved": "http://10.107.103.115:8001/vite/download/vite-2.5.10.tgz",
@@ -253,6 +419,14 @@
         "@vue/server-renderer": "3.2.19",
         "@vue/shared": "3.2.19"
       }
+    },
+    "vue-router": {
+      "version": "4.0.11",
+      "resolved": "http://10.107.103.115:8001/vue-router/download/vue-router-4.0.11.tgz",
+      "integrity": "sha1-zWSaCUHGNSgXY6IJZbWZZD3caO0=",
+      "requires": {
+        "@vue/devtools-api": "^6.0.0-beta.14"
+      }
     }
   }
 }
diff --git a/package.json b/package.json
index 5dd57e9..8a18f20 100644
--- a/package.json
+++ b/package.json
@@ -7,10 +7,12 @@
     "serve": "vite preview"
   },
   "dependencies": {
-    "vue": "^3.2.13"
+    "vue": "^3.2.13",
+    "vue-router": "^4.0.11"
   },
   "devDependencies": {
     "@vitejs/plugin-vue": "^1.9.0",
+    "less": "^4.1.1",
     "vite": "^2.5.10"
   }
 }
diff --git a/src/App.vue b/src/App.vue
index 7422330..ca52ac6 100644
--- a/src/App.vue
+++ b/src/App.vue
@@ -5,17 +5,65 @@ import HelloWorld from './components/HelloWorld.vue'
 </script>
 
 <template>
-  <img alt="Vue logo" src="./assets/logo.png" />
-  <HelloWorld msg="Hello Vue 3 + Vite" />
+  <div class="nav">
+    <div class="ctn-block">
+      <div class="nav-logo">
+        Apache Linkis
+      </div>
+      <span class="nav-logo-badge">Incubating</span>
+      <div class="menu-list">
+        <router-link class="menu-item" to="/">Home</router-link>
+        <router-link class="menu-item" to="/docs">Docs</router-link>
+        <router-link class="menu-item" to="/faq">FAQ</router-link>
+        <router-link class="menu-item" to="/download">Download</router-link>
+        <router-link class="menu-item" to="/blog">Blog</router-link>
+        <router-link class="menu-item" to="/team">Team</router-link>
+        <div class="menu-item">Language</div>
+      </div>
+    </div>
+  </div>
 </template>
 
-<style>
-#app {
-  font-family: Avenir, Helvetica, Arial, sans-serif;
-  -webkit-font-smoothing: antialiased;
-  -moz-osx-font-smoothing: grayscale;
-  text-align: center;
-  color: #2c3e50;
-  margin-top: 60px;
+<style lang="less">
+@import url('/src/style/base.less');
+.nav{
+  font-size: 16px;
+  box-shadow: 0 2px 4px rgba(15,18,34,0.2);
+  color: @enhance-color;
+  .ctn-block{
+    display: flex;
+    align-items: center;
+  }
+  .nav-logo{
+    line-height: 54px;
+    font-weight: 500;
+  }
+  .nav-logo-badge{
+    display: inline-block;
+    margin-left: 4px;
+    padding: 0 8px;
+    line-height: 24px;
+    background: #E8E8E8;
+    border-radius: 4px;
+    font-size: 12px;
+    font-weight: 400;
+  }
+  .menu-list{
+    flex: 1;
+    display: flex;
+    .menu-item{
+      margin-left: 16px;
+      margin-right: 16px;
+      line-height: 52px;
+      border-bottom: 2px solid transparent;
+      transition: all ease .2s;
+      cursor: pointer;
+      &:hover,
+      &.router-link-active{
+        color: @active-color;
+        border-color: @active-color;
+      }
+    }
+  }
 }
 </style>
diff --git a/src/main.js b/src/main.js
index 01433bc..73108e3 100644
--- a/src/main.js
+++ b/src/main.js
@@ -1,4 +1,14 @@
 import { createApp } from 'vue'
+import { createRouter, createWebHashHistory } from 'vue-router'
+import routes from './router';
 import App from './App.vue'
 
-createApp(App).mount('#app')
+const router = createRouter({
+  history: createWebHashHistory(),
+  routes,
+})
+
+const app = createApp(App);
+app.use(router);
+
+app.mount('#app')
diff --git a/src/pages/blog.vue b/src/pages/blog.vue
new file mode 100644
index 0000000..8b13789
--- /dev/null
+++ b/src/pages/blog.vue
@@ -0,0 +1 @@
+
diff --git a/src/pages/docs.vue b/src/pages/docs.vue
new file mode 100644
index 0000000..e69de29
diff --git a/src/pages/download.vue b/src/pages/download.vue
new file mode 100644
index 0000000..e69de29
diff --git a/src/pages/faq.vue b/src/pages/faq.vue
new file mode 100644
index 0000000..8b13789
--- /dev/null
+++ b/src/pages/faq.vue
@@ -0,0 +1 @@
+
diff --git a/src/pages/home.vue b/src/pages/home.vue
new file mode 100644
index 0000000..e69de29
diff --git a/src/pages/team.vue b/src/pages/team.vue
new file mode 100644
index 0000000..e69de29
diff --git a/src/router.js b/src/router.js
new file mode 100644
index 0000000..49d76f7
--- /dev/null
+++ b/src/router.js
@@ -0,0 +1,16 @@
+const routes = [
+  {
+    path: '/',
+    component: () => import(/* webpackChunkName: "group-app" */ './app.vue'),
+    children: [
+      { path: '', component: () => import(/* webpackChunkName: "group-home" */ './pages/home.vue') },
+      { path: '/docs', component: () => import(/* webpackChunkName: "group-docs" */ './pages/docs.vue') },
+      { path: '/faq', component: () => import(/* webpackChunkName: "group-faq" */ './pages/faq.vue') },
+      { path: '/download', component: () => import(/* webpackChunkName: "group-download" */ './pages/download.vue') },
+      { path: '/blog', component: () => import(/* webpackChunkName: "group-blog" */ './pages/blog.vue') },
+      { path: '/team', component: () => import(/* webpackChunkName: "group-team" */ './pages/team.vue') },
+    ]
+  }
+]
+
+export default routes;
\ No newline at end of file
diff --git a/src/style/base.less b/src/style/base.less
new file mode 100644
index 0000000..ca48f5c
--- /dev/null
+++ b/src/style/base.less
@@ -0,0 +1,43 @@
+@import './virables.less';
+
+* {
+  box-sizing: border-box;
+}
+
+body,
+ul,
+li,
+ol,
+h1,
+h2,
+h3,
+h4,
+h5,
+h6,
+p {
+  margin: 0;
+  padding: 0;
+}
+
+body {
+  font-size: 14px;
+  color: #4A4A4A;
+  line-height: 26px;
+  background: #ffffff;
+}
+
+ul,
+li,
+ol {
+  list-style: none;
+}
+
+a {
+  text-decoration: none;
+}
+
+.ctn-block {
+  width: 1200px;
+  padding: 0 20px;
+  margin: 0 auto;
+}
\ No newline at end of file
diff --git a/src/style/virables.less b/src/style/virables.less
new file mode 100644
index 0000000..2929f7b
--- /dev/null
+++ b/src/style/virables.less
@@ -0,0 +1,2 @@
+@active-color: #1A529C;
+@enhance-color: #0F1222;
\ No newline at end of file

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@linkis.apache.org
For additional commands, e-mail: commits-help@linkis.apache.org


[incubator-linkis-website] 14/50: FIX: adjust the documentation structure

Posted by pe...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

peacewong pushed a commit to branch asf-staging
in repository https://gitbox.apache.org/repos/asf/incubator-linkis-website.git

commit b644c2daee9a446065c58fd7979324f4d05a7a2c
Author: lucaszhu <lu...@webank.com>
AuthorDate: Mon Oct 11 15:51:35 2021 +0800

    FIX: adjust the documentation structure
---
 src/App.vue                                 |  2 +-
 src/docs/deploy/distributed_en.md           |  1 +
 src/docs/deploy/distributed_zh.md           |  1 +
 src/docs/deploy/engins_en.md                |  1 +
 src/docs/deploy/engins_zh.md                |  1 +
 src/docs/deploy/linkis_en.md                |  1 +
 src/docs/{deploy.md => deploy/linkis_zh.md} |  0
 src/docs/deploy/main_en.md                  |  1 +
 src/docs/deploy/main_zh.md                  |  1 +
 src/docs/deploy/structure_en.md             |  1 +
 src/docs/deploy/structure_zh.md             |  1 +
 src/pages/docs/deploy/distributed.vue       | 13 +++++++++++++
 src/pages/docs/deploy/engins.vue            | 13 +++++++++++++
 src/pages/docs/deploy/linkis.vue            | 13 +++++++++++++
 src/pages/docs/deploy/main.vue              | 13 +++++++++++++
 src/pages/docs/deploy/structure.vue         | 13 +++++++++++++
 src/pages/{docs.vue => docs/index.vue}      | 25 ++++++++++++-------------
 src/router.js                               | 23 ++++++++++++++++++++++-
 18 files changed, 109 insertions(+), 15 deletions(-)

diff --git a/src/App.vue b/src/App.vue
index 29efbc4..ba07b2a 100644
--- a/src/App.vue
+++ b/src/App.vue
@@ -23,7 +23,7 @@ const switchLang = (lang) => {
       <span class="nav-logo-badge">Incubating</span>
       <div class="menu-list">
         <router-link class="menu-item" to="/"><span class="label">Home</span></router-link>
-        <router-link class="menu-item" to="/docs"><span class="label">Docs</span></router-link>
+        <router-link class="menu-item" to="/docs/deploy"><span class="label">Docs</span></router-link>
         <router-link class="menu-item" to="/faq"><span class="label">FAQ</span></router-link>
         <router-link class="menu-item" to="/download"><span class="label">Download</span></router-link>
         <router-link class="menu-item" to="/blog"><span class="label">Blog</span></router-link>
diff --git a/src/docs/deploy/distributed_en.md b/src/docs/deploy/distributed_en.md
new file mode 100644
index 0000000..c22103d
--- /dev/null
+++ b/src/docs/deploy/distributed_en.md
@@ -0,0 +1 @@
+Linkis1.0 distributed deployment manual `English`
\ No newline at end of file
diff --git a/src/docs/deploy/distributed_zh.md b/src/docs/deploy/distributed_zh.md
new file mode 100644
index 0000000..64b73cf
--- /dev/null
+++ b/src/docs/deploy/distributed_zh.md
@@ -0,0 +1 @@
+Linkis1.0 distributed deployment manual
\ No newline at end of file
diff --git a/src/docs/deploy/engins_en.md b/src/docs/deploy/engins_en.md
new file mode 100644
index 0000000..9f4e790
--- /dev/null
+++ b/src/docs/deploy/engins_en.md
@@ -0,0 +1 @@
+Quick installation of the EngineConnPlugin engine plugin `English`
\ No newline at end of file
diff --git a/src/docs/deploy/engins_zh.md b/src/docs/deploy/engins_zh.md
new file mode 100644
index 0000000..1a6bd33
--- /dev/null
+++ b/src/docs/deploy/engins_zh.md
@@ -0,0 +1 @@
+Quick installation of the EngineConnPlugin engine plugin
\ No newline at end of file
diff --git a/src/docs/deploy/linkis_en.md b/src/docs/deploy/linkis_en.md
new file mode 100644
index 0000000..2dbdae9
--- /dev/null
+++ b/src/docs/deploy/linkis_en.md
@@ -0,0 +1 @@
+English documentation
\ No newline at end of file
diff --git a/src/docs/deploy.md b/src/docs/deploy/linkis_zh.md
similarity index 100%
rename from src/docs/deploy.md
rename to src/docs/deploy/linkis_zh.md
diff --git a/src/docs/deploy/main_en.md b/src/docs/deploy/main_en.md
new file mode 100644
index 0000000..47354d5
--- /dev/null
+++ b/src/docs/deploy/main_en.md
@@ -0,0 +1 @@
+# Deployment documentation (English)
\ No newline at end of file
diff --git a/src/docs/deploy/main_zh.md b/src/docs/deploy/main_zh.md
new file mode 100644
index 0000000..cf9de36
--- /dev/null
+++ b/src/docs/deploy/main_zh.md
@@ -0,0 +1 @@
+# Deployment documentation
\ No newline at end of file
diff --git a/src/docs/deploy/structure_en.md b/src/docs/deploy/structure_en.md
new file mode 100644
index 0000000..1c8df67
--- /dev/null
+++ b/src/docs/deploy/structure_en.md
@@ -0,0 +1 @@
+Linkis1.0 installation package directory structure explained `English`
\ No newline at end of file
diff --git a/src/docs/deploy/structure_zh.md b/src/docs/deploy/structure_zh.md
new file mode 100644
index 0000000..4436609
--- /dev/null
+++ b/src/docs/deploy/structure_zh.md
@@ -0,0 +1 @@
+Linkis1.0 installation package directory structure explained
\ No newline at end of file
diff --git a/src/pages/docs/deploy/distributed.vue b/src/pages/docs/deploy/distributed.vue
new file mode 100644
index 0000000..483976b
--- /dev/null
+++ b/src/pages/docs/deploy/distributed.vue
@@ -0,0 +1,13 @@
+<template>
+  <docEn v-if="lang === 'en'"></docEn>
+  <docZh v-else></docZh>
+</template>
+<script setup>
+  import { ref } from "vue";
+
+  import docEn from '../../../docs/deploy/distributed_en.md';
+  import docZh from '../../../docs/deploy/distributed_zh.md';
+
+  // Initialize the language
+  const lang = ref(localStorage.getItem('locale') || 'en');
+</script>
\ No newline at end of file
diff --git a/src/pages/docs/deploy/engins.vue b/src/pages/docs/deploy/engins.vue
new file mode 100644
index 0000000..aa5aef7
--- /dev/null
+++ b/src/pages/docs/deploy/engins.vue
@@ -0,0 +1,13 @@
+<template>
+  <docEn v-if="lang === 'en'"></docEn>
+  <docZh v-else></docZh>
+</template>
+<script setup>
+  import { ref } from "vue";
+
+  import docEn from '../../../docs/deploy/engins_en.md';
+  import docZh from '../../../docs/deploy/engins_zh.md';
+
+  // Initialize the language
+  const lang = ref(localStorage.getItem('locale') || 'en');
+</script>
\ No newline at end of file
diff --git a/src/pages/docs/deploy/linkis.vue b/src/pages/docs/deploy/linkis.vue
new file mode 100644
index 0000000..bcdc984
--- /dev/null
+++ b/src/pages/docs/deploy/linkis.vue
@@ -0,0 +1,13 @@
+<template>
+  <docEn v-if="lang === 'en'"></docEn>
+  <docZh v-else></docZh>
+</template>
+<script setup>
+  import { ref } from "vue";
+
+  import docEn from '../../../docs/deploy/linkis_en.md';
+  import docZh from '../../../docs/deploy/linkis_zh.md';
+
+  // Initialize the language
+  const lang = ref(localStorage.getItem('locale') || 'en');
+</script>
\ No newline at end of file
diff --git a/src/pages/docs/deploy/main.vue b/src/pages/docs/deploy/main.vue
new file mode 100644
index 0000000..e926a16
--- /dev/null
+++ b/src/pages/docs/deploy/main.vue
@@ -0,0 +1,13 @@
+<template>
+  <docEn v-if="lang === 'en'"></docEn>
+  <docZh v-else></docZh>
+</template>
+<script setup>
+  import { ref } from "vue";
+
+  import docEn from '../../../docs/deploy/main_en.md';
+  import docZh from '../../../docs/deploy/main_zh.md';
+
+  // Initialize the language
+  const lang = ref(localStorage.getItem('locale') || 'en');
+</script>
\ No newline at end of file
diff --git a/src/pages/docs/deploy/structure.vue b/src/pages/docs/deploy/structure.vue
new file mode 100644
index 0000000..5a5205e
--- /dev/null
+++ b/src/pages/docs/deploy/structure.vue
@@ -0,0 +1,13 @@
+<template>
+  <docEn v-if="lang === 'en'"></docEn>
+  <docZh v-else></docZh>
+</template>
+<script setup>
+  import { ref } from "vue";
+
+  import docEn from '../../../docs/deploy/structure_en.md';
+  import docZh from '../../../docs/deploy/structure_zh.md';
+
+  // Initialize the language
+  const lang = ref(localStorage.getItem('locale') || 'en');
+</script>
\ No newline at end of file
diff --git a/src/pages/docs.vue b/src/pages/docs/index.vue
similarity index 59%
rename from src/pages/docs.vue
rename to src/pages/docs/index.vue
index c720d2d..21a2eb0 100644
--- a/src/pages/docs.vue
+++ b/src/pages/docs/index.vue
@@ -1,25 +1,25 @@
 <template>
   <div class="ctn-block reading-area">
     <main class="main-content">
-      <deploy></deploy>
+      <router-view></router-view>
     </main>
     <div class="side-bar">
-      <a :href="'#/blog' + doc.anchor" class="bar-item" v-for="(doc,index) in docs" :key="index">{{doc.title}}
-        <a :href="'#/blog' + children.anchor" class="bar-item" v-for="(children,cindex) in doc.children" :key="cindex">{{children.title}}
-        </a>
-      </a>
+      <router-link :to="doc.link" class="bar-item" v-for="(doc,index) in docs" :key="index">{{doc.title}}
+        <router-link :to="children.link" class="bar-item" v-for="(children,cindex) in doc.children" :key="cindex">{{children.title}}
+        </router-link>
+      </router-link>
     </div>
   </div>
 </template>
-<style lang="less" scoped>
+<style lang="less">
   .reading-area {
     display: flex;
     padding: 60px 0;
+    min-height: 600px;
 
     .main-content {
       width: 900px;
       padding: 30px;
-      min-height: 600px;
     }
 
     .side-bar {
@@ -36,22 +36,21 @@
   }
 </style>
 <script setup>
-import deploy from '../docs/deploy.md';
   const docs = [{
     title: 'Deployment documentation',
-    anchor: 'deploy',
+    link: '/docs/deploy/main',
     children: [{
       title: 'Quickly deploy Linkis1.0',
-      anchor: 'deploy-linkis'
+      link: '/docs/deploy/linkis',
     }, {
       title: 'Quickly install the EngineConnPlugin engine plugin',
-      anchor: 'deploy-engine'
+      link: '/docs/deploy/engins',
     }, {
       title: 'Linkis1.0 distributed deployment manual',
-      anchor: 'deploy-handbook'
+      link: '/docs/deploy/distributed',
     }, {
       title: 'Linkis1.0 installation package directory structure explained',
-      anchor: 'deploy-detail'
+      link: '/docs/deploy/structure',
     }]
   }]
 </script>
\ No newline at end of file
diff --git a/src/router.js b/src/router.js
index db94704..dde943f 100644
--- a/src/router.js
+++ b/src/router.js
@@ -6,7 +6,28 @@ const routes = [{
   {
     path: '/docs',
     name: 'docs',
-    component: () => import( /* webpackChunkName: "group-docs" */ './pages/docs.vue')
+    component: () => import( /* webpackChunkName: "group-docs" */ './pages/docs/index.vue'),
+    children: [{
+      path: 'deploy',
+      name: 'docDeploy',
+      component: () => import( /* webpackChunkName: "group-doc_linkis" */ './pages/docs/deploy/main.vue')
+    },{
+      path: 'deploy/linkis',
+      name: 'docDeployLinkis',
+      component: () => import( /* webpackChunkName: "group-doc_linkis" */ './pages/docs/deploy/linkis.vue')
+    },{
+      path: 'deploy/engins',
+      name: 'docDeployEngins',
+      component: () => import( /* webpackChunkName: "group-doc_linkis" */ './pages/docs/deploy/engins.vue')
+    },{
+      path: 'deploy/distributed',
+      name: 'docDeployDistributed',
+      component: () => import( /* webpackChunkName: "group-doc_linkis" */ './pages/docs/deploy/distributed.vue')
+    },{
+      path: 'deploy/structure',
+      name: 'docDeployStructure',
+      component: () => import( /* webpackChunkName: "group-doc_linkis" */ './pages/docs/deploy/structure.vue')
+    }]
   },
   {
     path: '/faq',

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@linkis.apache.org
For additional commands, e-mail: commits-help@linkis.apache.org


[incubator-linkis-website] 17/50: FIX: adjust routing and active-link highlighting

Posted by pe...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

peacewong pushed a commit to branch asf-staging
in repository https://gitbox.apache.org/repos/asf/incubator-linkis-website.git

commit c5f5a2079603acf4e7a49360b8ef067d3347bbc0
Author: lucaszhu <lu...@webank.com>
AuthorDate: Tue Oct 12 10:16:42 2021 +0800

    FIX: adjust routing and active-link highlighting
---
 src/pages/docs/index.vue |  7 ++++++-
 src/router.js            | 28 +++++++++++-----------------
 2 files changed, 17 insertions(+), 18 deletions(-)

diff --git a/src/pages/docs/index.vue b/src/pages/docs/index.vue
index 735465b..fa3d5f0 100644
--- a/src/pages/docs/index.vue
+++ b/src/pages/docs/index.vue
@@ -13,6 +13,7 @@
     </div>
 </template>
 <style lang="less">
+    @import url('/src/style/variable.less');
     .reading-area {
         display: flex;
         padding: 60px 0;
@@ -32,6 +33,10 @@
                 display: block;
                 padding: 5px 18px;
                 color: #4A4A4A;
+                &:hover,
+                &.router-link-exact-active {
+                    color: @active-color;
+                }
             }
         }
     }
@@ -63,7 +68,7 @@
 
         {
             title: 'User manual',
-            link: '/docs/manual/main',
+            link: '/docs/manual/UserManual',
             children: [
                 {
                     title: 'User documentation',
diff --git a/src/router.js b/src/router.js
index a509f8e..bf088d4 100644
--- a/src/router.js
+++ b/src/router.js
@@ -18,38 +18,32 @@ const routes = [{
     },{
       path: 'deploy/engins',
       name: 'docDeployEngins',
-      component: () => import( /* webpackChunkName: "group-doc_linkis" */ './pages/docs/deploy/engins.vue')
+      component: () => import( /* webpackChunkName: "group-doc_engins" */ './pages/docs/deploy/engins.vue')
     },{
       path: 'deploy/distributed',
       name: 'docDeployDistributed',
-      component: () => import( /* webpackChunkName: "group-doc_linkis" */ './pages/docs/deploy/distributed.vue')
+      component: () => import( /* webpackChunkName: "group-doc_distributed" */ './pages/docs/deploy/distributed.vue')
     },{
       path: 'deploy/structure',
       name: 'docDeployStructure',
-      component: () => import( /* webpackChunkName: "group-doc_linkis" */ './pages/docs/deploy/structure.vue')
+      component: () => import( /* webpackChunkName: "group-doc_structure" */ './pages/docs/deploy/structure.vue')
     },
-
-    // {
-    //   path: 'manual',
-    //   name: '',
-    //   component: () => import( /* webpackChunkName: "group-doc_linkis" */ './pages/docs/manual/main.vue')
-    // },
     {
       path: 'manual/UserManual',
-      name: '',
-      component: () => import( /* webpackChunkName: "group-doc_linkis" */ './pages/docs/manual/UserManual.vue')
+      name: 'manualUserManual',
+      component: () => import( /* webpackChunkName: "group-doc_UserManual" */ './pages/docs/manual/UserManual.vue')
     },{
       path: 'manual/HowToUse',
-      name: '',
-      component: () => import( /* webpackChunkName: "group-doc_linkis" */ './pages/docs/manual/HowToUse.vue')
+      name: 'manual/HowToUse',
+      component: () => import( /* webpackChunkName: "group-doc_HowToUse" */ './pages/docs/manual/HowToUse.vue')
     },{
       path: 'manual/ConsoleUserManual',
-      name: '',
-      component: () => import( /* webpackChunkName: "group-doc_linkis" */ './pages/docs/manual/ConsoleUserManual.vue')
+      name: 'manualConsoleUserManual',
+      component: () => import( /* webpackChunkName: "group-doc_ConsoleUserManual" */ './pages/docs/manual/ConsoleUserManual.vue')
     },{
         path: 'manual/CliManual',
-        name: '',
-        component: () => import( /* webpackChunkName: "group-doc_linkis" */ './pages/docs/manual/CliManual.vue')
+        name: 'manualCliManual',
+        component: () => import( /* webpackChunkName: "group-doc_CliManual" */ './pages/docs/manual/CliManual.vue')
       }]
   },
   {

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@linkis.apache.org
For additional commands, e-mail: commits-help@linkis.apache.org


[incubator-linkis-website] 39/50: update logo img

Posted by pe...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

peacewong pushed a commit to branch asf-staging
in repository https://gitbox.apache.org/repos/asf/incubator-linkis-website.git

commit 039f3256343fc6a2319ee0ca47b2d3f07b12a151
Author: casionone <ca...@gmail.com>
AuthorDate: Mon Oct 18 16:46:25 2021 +0800

    update logo img
---
 src/assets/user/360.png                             | Bin 14956 -> 14323 bytes
 src/assets/user/97wulian.png                        | Bin 19333 -> 28819 bytes
 "src/assets/user/T3\345\207\272\350\241\214.png"    | Bin 11196 -> 7258 bytes
 src/assets/user/aisino.png                          | Bin 33715 -> 46944 bytes
 src/assets/user/boss.png                            | Bin 9165 -> 8386 bytes
 src/assets/user/huazhong.jpg                        | Bin 9938 -> 12673 bytes
 src/assets/user/lianchuang.png                      | Bin 34598 -> 11438 bytes
 src/assets/user/mobtech..png                        | Bin 11203 -> 1829 bytes
 src/assets/user/xidian.jpg                          | Bin 9354 -> 12475 bytes
 src/assets/user/yitu.png                            | Bin 16224 -> 41437 bytes
 src/assets/user/zhongticaipng.png                   | Bin 22253 -> 31958 bytes
 ...270\207\347\247\221\351\207\207\347\255\221.png" | Bin 4479 -> 2468 bytes
 .../user/\344\270\234\346\226\271\351\200\232.png"  | Bin 20974 -> 33873 bytes
 ...260\221\347\224\237\351\223\266\350\241\214.jpg" | Bin 5007 -> 16640 bytes
 ...270\255\345\233\275\347\224\265\344\277\241.png" | Bin 11450 -> 6468 bytes
 ...270\255\345\233\275\347\224\265\347\247\221.jpg" | Bin 5108 -> 5955 bytes
 ...270\255\351\200\232\344\272\221\344\273\223.png" | Bin 27653 -> 20138 bytes
 ...234\211\351\231\220\345\205\254\345\217\270.png" | Bin 19180 -> 10006 bytes
 ...261\237\345\256\236\351\252\214\345\256\244.png" | Bin 17558 -> 13145 bytes
 ...272\221\345\233\276\347\247\221\346\212\200.png" | Bin 23360 -> 35242 bytes
 ...272\244\351\200\232\351\223\266\350\241\214.jpg" | Bin 6173 -> 8099 bytes
 ...272\254\344\270\234\346\225\260\347\247\221.jpg" | Bin 4260 -> 7895 bytes
 ...277\241\347\224\250\347\224\237\346\264\273.png" | Bin 10504 -> 3978 bytes
 .../user/\345\223\227\345\225\246\345\225\246.jpg"  | Bin 2707 -> 5990 bytes
 ...234\210\345\244\226\345\220\214\345\255\246.png" | Bin 15296 -> 8081 bytes
 .../user/\345\244\251\347\277\274\344\272\221.png"  | Bin 24944 -> 39592 bytes
 "src/assets/user/\345\271\263\345\256\211.png"      | Bin 19563 -> 20795 bytes
 ...214\273\344\277\235\347\247\221\346\212\200.png" | Bin 9949 -> 2083 bytes
 ...272\221\345\276\231\347\247\221\346\212\200.png" | Bin 5315 -> 15448 bytes
 ...203\275\345\244\247\346\225\260\346\215\256.png" | Bin 20687 -> 13462 bytes
 ...213\233\345\225\206\351\223\266\350\241\214.jpg" | Bin 5594 -> 10462 bytes
 ...234\211\351\231\220\345\205\254\345\217\270.png" | Bin 21785 -> 29500 bytes
 ...224\265\351\255\202\347\275\221\347\273\234.png" | Bin 8600 -> 5553 bytes
 ...241\224\345\255\220\345\210\206\346\234\237.png" | Bin 16286 -> 6968 bytes
 ...265\267\345\272\267\345\250\201\350\247\206.png" | Bin 27218 -> 22412 bytes
 ...220\206\346\203\263\346\261\275\350\275\246.png" | Bin 16511 -> 27672 bytes
 ...231\276\344\277\241\351\223\266\350\241\214.jpg" | Bin 4048 -> 6739 bytes
 .../user/\347\231\276\346\234\233\344\272\221.png"  | Bin 17617 -> 24473 bytes
 ...253\213\345\210\233\345\225\206\345\237\216.png" | Bin 27107 -> 24213 bytes
 ...272\242\350\261\241\344\272\221\350\205\276.png" | Bin 10362 -> 4596 bytes
 ...276\216\345\233\242\347\202\271\350\257\204.jpg" | Bin 5183 -> 10596 bytes
 ...205\276\350\256\257\350\264\242\347\273\217.jpg" | Bin 6136 -> 14500 bytes
 ...211\276\344\275\263\347\224\237\346\264\273.jpg" | Bin 4355 -> 5444 bytes
 ...224\232\346\235\245\346\261\275\350\275\246.jpg" | Bin 5672 -> 7034 bytes
 ...202\256\346\224\277\351\223\266\350\241\214.jpg" | Bin 6134 -> 14657 bytes
 ...241\266\347\202\271\350\275\257\344\273\266.png" | Bin 12568 -> 8796 bytes
 46 files changed, 0 insertions(+), 0 deletions(-)

diff --git a/src/assets/user/360.png b/src/assets/user/360.png
index 88e0d4c..74b5d13 100644
Binary files a/src/assets/user/360.png and b/src/assets/user/360.png differ
diff --git a/src/assets/user/97wulian.png b/src/assets/user/97wulian.png
index 6d72b3f..5b828b1 100644
Binary files a/src/assets/user/97wulian.png and b/src/assets/user/97wulian.png differ
diff --git "a/src/assets/user/T3\345\207\272\350\241\214.png" "b/src/assets/user/T3\345\207\272\350\241\214.png"
index b041927..1491def 100644
Binary files "a/src/assets/user/T3\345\207\272\350\241\214.png" and "b/src/assets/user/T3\345\207\272\350\241\214.png" differ
diff --git a/src/assets/user/aisino.png b/src/assets/user/aisino.png
index d35e2ce..73b7589 100644
Binary files a/src/assets/user/aisino.png and b/src/assets/user/aisino.png differ
diff --git a/src/assets/user/boss.png b/src/assets/user/boss.png
index e96f42a..17bb2b2 100644
Binary files a/src/assets/user/boss.png and b/src/assets/user/boss.png differ
diff --git a/src/assets/user/huazhong.jpg b/src/assets/user/huazhong.jpg
index 4821862..70e557f 100644
Binary files a/src/assets/user/huazhong.jpg and b/src/assets/user/huazhong.jpg differ
diff --git a/src/assets/user/lianchuang.png b/src/assets/user/lianchuang.png
index 64c44b4..1320cbe 100644
Binary files a/src/assets/user/lianchuang.png and b/src/assets/user/lianchuang.png differ
diff --git a/src/assets/user/mobtech..png b/src/assets/user/mobtech..png
index d026cff..0ba017e 100644
Binary files a/src/assets/user/mobtech..png and b/src/assets/user/mobtech..png differ
diff --git a/src/assets/user/xidian.jpg b/src/assets/user/xidian.jpg
index 558341e..dc37326 100644
Binary files a/src/assets/user/xidian.jpg and b/src/assets/user/xidian.jpg differ
diff --git a/src/assets/user/yitu.png b/src/assets/user/yitu.png
index 8bf51ea..58aaa3f 100644
Binary files a/src/assets/user/yitu.png and b/src/assets/user/yitu.png differ
diff --git a/src/assets/user/zhongticaipng.png b/src/assets/user/zhongticaipng.png
index eb97549..c343ba5 100644
Binary files a/src/assets/user/zhongticaipng.png and b/src/assets/user/zhongticaipng.png differ
diff --git "a/src/assets/user/\344\270\207\347\247\221\351\207\207\347\255\221.png" "b/src/assets/user/\344\270\207\347\247\221\351\207\207\347\255\221.png"
index 58e60be..35f056c 100644
Binary files "a/src/assets/user/\344\270\207\347\247\221\351\207\207\347\255\221.png" and "b/src/assets/user/\344\270\207\347\247\221\351\207\207\347\255\221.png" differ
diff --git "a/src/assets/user/\344\270\234\346\226\271\351\200\232.png" "b/src/assets/user/\344\270\234\346\226\271\351\200\232.png"
index 852fd81..72fde94 100644
Binary files "a/src/assets/user/\344\270\234\346\226\271\351\200\232.png" and "b/src/assets/user/\344\270\234\346\226\271\351\200\232.png" differ
diff --git "a/src/assets/user/\344\270\255\345\233\275\346\260\221\347\224\237\351\223\266\350\241\214.jpg" "b/src/assets/user/\344\270\255\345\233\275\346\260\221\347\224\237\351\223\266\350\241\214.jpg"
index 3e72301..e5fb3b5 100644
Binary files "a/src/assets/user/\344\270\255\345\233\275\346\260\221\347\224\237\351\223\266\350\241\214.jpg" and "b/src/assets/user/\344\270\255\345\233\275\346\260\221\347\224\237\351\223\266\350\241\214.jpg" differ
diff --git "a/src/assets/user/\344\270\255\345\233\275\347\224\265\344\277\241.png" "b/src/assets/user/\344\270\255\345\233\275\347\224\265\344\277\241.png"
index 76bcf7a..f34cc37 100644
Binary files "a/src/assets/user/\344\270\255\345\233\275\347\224\265\344\277\241.png" and "b/src/assets/user/\344\270\255\345\233\275\347\224\265\344\277\241.png" differ
diff --git "a/src/assets/user/\344\270\255\345\233\275\347\224\265\347\247\221.jpg" "b/src/assets/user/\344\270\255\345\233\275\347\224\265\347\247\221.jpg"
index 328dfa8..589617f 100644
Binary files "a/src/assets/user/\344\270\255\345\233\275\347\224\265\347\247\221.jpg" and "b/src/assets/user/\344\270\255\345\233\275\347\224\265\347\247\221.jpg" differ
diff --git "a/src/assets/user/\344\270\255\351\200\232\344\272\221\344\273\223.png" "b/src/assets/user/\344\270\255\351\200\232\344\272\221\344\273\223.png"
index bf374b6..7a27229 100644
Binary files "a/src/assets/user/\344\270\255\351\200\232\344\272\221\344\273\223.png" and "b/src/assets/user/\344\270\255\351\200\232\344\272\221\344\273\223.png" differ
diff --git "a/src/assets/user/\344\270\255\351\200\232\346\234\215\345\205\254\344\274\227\344\277\241\346\201\257\350\202\241\344\273\275\346\234\211\351\231\220\345\205\254\345\217\270.png" "b/src/assets/user/\344\270\255\351\200\232\346\234\215\345\205\254\344\274\227\344\277\241\346\201\257\350\202\241\344\273\275\346\234\211\351\231\220\345\205\254\345\217\270.png"
index cf4a9c3..8946372 100644
Binary files "a/src/assets/user/\344\270\255\351\200\232\346\234\215\345\205\254\344\274\227\344\277\241\346\201\257\350\202\241\344\273\275\346\234\211\351\231\220\345\205\254\345\217\270.png" and "b/src/assets/user/\344\270\255\351\200\232\346\234\215\345\205\254\344\274\227\344\277\241\346\201\257\350\202\241\344\273\275\346\234\211\351\231\220\345\205\254\345\217\270.png" differ
diff --git "a/src/assets/user/\344\271\213\346\261\237\345\256\236\351\252\214\345\256\244.png" "b/src/assets/user/\344\271\213\346\261\237\345\256\236\351\252\214\345\256\244.png"
index 5e8bb40..1fbe9ce 100644
Binary files "a/src/assets/user/\344\271\213\346\261\237\345\256\236\351\252\214\345\256\244.png" and "b/src/assets/user/\344\271\213\346\261\237\345\256\236\351\252\214\345\256\244.png" differ
diff --git "a/src/assets/user/\344\272\221\345\233\276\347\247\221\346\212\200.png" "b/src/assets/user/\344\272\221\345\233\276\347\247\221\346\212\200.png"
index ecca9a8..249aaaa 100644
Binary files "a/src/assets/user/\344\272\221\345\233\276\347\247\221\346\212\200.png" and "b/src/assets/user/\344\272\221\345\233\276\347\247\221\346\212\200.png" differ
diff --git "a/src/assets/user/\344\272\244\351\200\232\351\223\266\350\241\214.jpg" "b/src/assets/user/\344\272\244\351\200\232\351\223\266\350\241\214.jpg"
index 67dc266..c2232c7 100644
Binary files "a/src/assets/user/\344\272\244\351\200\232\351\223\266\350\241\214.jpg" and "b/src/assets/user/\344\272\244\351\200\232\351\223\266\350\241\214.jpg" differ
diff --git "a/src/assets/user/\344\272\254\344\270\234\346\225\260\347\247\221.jpg" "b/src/assets/user/\344\272\254\344\270\234\346\225\260\347\247\221.jpg"
index 4e48bd3..7a98336 100644
Binary files "a/src/assets/user/\344\272\254\344\270\234\346\225\260\347\247\221.jpg" and "b/src/assets/user/\344\272\254\344\270\234\346\225\260\347\247\221.jpg" differ
diff --git "a/src/assets/user/\344\277\241\347\224\250\347\224\237\346\264\273.png" "b/src/assets/user/\344\277\241\347\224\250\347\224\237\346\264\273.png"
index 9af5495..8a767b1 100644
Binary files "a/src/assets/user/\344\277\241\347\224\250\347\224\237\346\264\273.png" and "b/src/assets/user/\344\277\241\347\224\250\347\224\237\346\264\273.png" differ
diff --git "a/src/assets/user/\345\223\227\345\225\246\345\225\246.jpg" "b/src/assets/user/\345\223\227\345\225\246\345\225\246.jpg"
index 2ae1506..3d94cd0 100644
Binary files "a/src/assets/user/\345\223\227\345\225\246\345\225\246.jpg" and "b/src/assets/user/\345\223\227\345\225\246\345\225\246.jpg" differ
diff --git "a/src/assets/user/\345\234\210\345\244\226\345\220\214\345\255\246.png" "b/src/assets/user/\345\234\210\345\244\226\345\220\214\345\255\246.png"
index 494cf8f..fc623d4 100644
Binary files "a/src/assets/user/\345\234\210\345\244\226\345\220\214\345\255\246.png" and "b/src/assets/user/\345\234\210\345\244\226\345\220\214\345\255\246.png" differ
diff --git "a/src/assets/user/\345\244\251\347\277\274\344\272\221.png" "b/src/assets/user/\345\244\251\347\277\274\344\272\221.png"
index 0f26451..8973744 100644
Binary files "a/src/assets/user/\345\244\251\347\277\274\344\272\221.png" and "b/src/assets/user/\345\244\251\347\277\274\344\272\221.png" differ
diff --git "a/src/assets/user/\345\271\263\345\256\211.png" "b/src/assets/user/\345\271\263\345\256\211.png"
index 861fb26..4895178 100644
Binary files "a/src/assets/user/\345\271\263\345\256\211.png" and "b/src/assets/user/\345\271\263\345\256\211.png" differ
diff --git "a/src/assets/user/\345\271\263\345\256\211\345\214\273\344\277\235\347\247\221\346\212\200.png" "b/src/assets/user/\345\271\263\345\256\211\345\214\273\344\277\235\347\247\221\346\212\200.png"
index 7b019f5..156be44 100644
Binary files "a/src/assets/user/\345\271\263\345\256\211\345\214\273\344\277\235\347\247\221\346\212\200.png" and "b/src/assets/user/\345\271\263\345\256\211\345\214\273\344\277\235\347\247\221\346\212\200.png" differ
diff --git "a/src/assets/user/\345\271\277\345\267\236\344\272\221\345\276\231\347\247\221\346\212\200.png" "b/src/assets/user/\345\271\277\345\267\236\344\272\221\345\276\231\347\247\221\346\212\200.png"
index 5e027bd..6783b0f 100644
Binary files "a/src/assets/user/\345\271\277\345\267\236\344\272\221\345\276\231\347\247\221\346\212\200.png" and "b/src/assets/user/\345\271\277\345\267\236\344\272\221\345\276\231\347\247\221\346\212\200.png" differ
diff --git "a/src/assets/user/\346\210\220\351\203\275\345\244\247\346\225\260\346\215\256.png" "b/src/assets/user/\346\210\220\351\203\275\345\244\247\346\225\260\346\215\256.png"
index dfaa99d..f6a7e4e 100644
Binary files "a/src/assets/user/\346\210\220\351\203\275\345\244\247\346\225\260\346\215\256.png" and "b/src/assets/user/\346\210\220\351\203\275\345\244\247\346\225\260\346\215\256.png" differ
diff --git "a/src/assets/user/\346\213\233\345\225\206\351\223\266\350\241\214.jpg" "b/src/assets/user/\346\213\233\345\225\206\351\223\266\350\241\214.jpg"
index b83e1da..8f3d41a 100644
Binary files "a/src/assets/user/\346\213\233\345\225\206\351\223\266\350\241\214.jpg" and "b/src/assets/user/\346\213\233\345\225\206\351\223\266\350\241\214.jpg" differ
diff --git "a/src/assets/user/\346\213\233\350\201\224\346\266\210\350\264\271\351\207\221\350\236\215\346\234\211\351\231\220\345\205\254\345\217\270.png" "b/src/assets/user/\346\213\233\350\201\224\346\266\210\350\264\271\351\207\221\350\236\215\346\234\211\351\231\220\345\205\254\345\217\270.png"
index 9d2ba48..7a39d07 100644
Binary files "a/src/assets/user/\346\213\233\350\201\224\346\266\210\350\264\271\351\207\221\350\236\215\346\234\211\351\231\220\345\205\254\345\217\270.png" and "b/src/assets/user/\346\213\233\350\201\224\346\266\210\350\264\271\351\207\221\350\236\215\346\234\211\351\231\220\345\205\254\345\217\270.png" differ
diff --git "a/src/assets/user/\346\235\255\345\267\236\347\224\265\351\255\202\347\275\221\347\273\234.png" "b/src/assets/user/\346\235\255\345\267\236\347\224\265\351\255\202\347\275\221\347\273\234.png"
index 7a720df..bc61646 100644
Binary files "a/src/assets/user/\346\235\255\345\267\236\347\224\265\351\255\202\347\275\221\347\273\234.png" and "b/src/assets/user/\346\235\255\345\267\236\347\224\265\351\255\202\347\275\221\347\273\234.png" differ
diff --git "a/src/assets/user/\346\241\224\345\255\220\345\210\206\346\234\237.png" "b/src/assets/user/\346\241\224\345\255\220\345\210\206\346\234\237.png"
index ff3b65a..3ff45b8 100644
Binary files "a/src/assets/user/\346\241\224\345\255\220\345\210\206\346\234\237.png" and "b/src/assets/user/\346\241\224\345\255\220\345\210\206\346\234\237.png" differ
diff --git "a/src/assets/user/\346\265\267\345\272\267\345\250\201\350\247\206.png" "b/src/assets/user/\346\265\267\345\272\267\345\250\201\350\247\206.png"
index 0d38210..a961cc4 100644
Binary files "a/src/assets/user/\346\265\267\345\272\267\345\250\201\350\247\206.png" and "b/src/assets/user/\346\265\267\345\272\267\345\250\201\350\247\206.png" differ
diff --git "a/src/assets/user/\347\220\206\346\203\263\346\261\275\350\275\246.png" "b/src/assets/user/\347\220\206\346\203\263\346\261\275\350\275\246.png"
index 161c4a5..3c0c20f 100644
Binary files "a/src/assets/user/\347\220\206\346\203\263\346\261\275\350\275\246.png" and "b/src/assets/user/\347\220\206\346\203\263\346\261\275\350\275\246.png" differ
diff --git "a/src/assets/user/\347\231\276\344\277\241\351\223\266\350\241\214.jpg" "b/src/assets/user/\347\231\276\344\277\241\351\223\266\350\241\214.jpg"
index 130810f..e338788 100644
Binary files "a/src/assets/user/\347\231\276\344\277\241\351\223\266\350\241\214.jpg" and "b/src/assets/user/\347\231\276\344\277\241\351\223\266\350\241\214.jpg" differ
diff --git "a/src/assets/user/\347\231\276\346\234\233\344\272\221.png" "b/src/assets/user/\347\231\276\346\234\233\344\272\221.png"
index 8ce7aef..90395c6 100644
Binary files "a/src/assets/user/\347\231\276\346\234\233\344\272\221.png" and "b/src/assets/user/\347\231\276\346\234\233\344\272\221.png" differ
diff --git "a/src/assets/user/\347\253\213\345\210\233\345\225\206\345\237\216.png" "b/src/assets/user/\347\253\213\345\210\233\345\225\206\345\237\216.png"
index c5520fa..ca71850 100644
Binary files "a/src/assets/user/\347\253\213\345\210\233\345\225\206\345\237\216.png" and "b/src/assets/user/\347\253\213\345\210\233\345\225\206\345\237\216.png" differ
diff --git "a/src/assets/user/\347\272\242\350\261\241\344\272\221\350\205\276.png" "b/src/assets/user/\347\272\242\350\261\241\344\272\221\350\205\276.png"
index fda67c5..bd54887 100644
Binary files "a/src/assets/user/\347\272\242\350\261\241\344\272\221\350\205\276.png" and "b/src/assets/user/\347\272\242\350\261\241\344\272\221\350\205\276.png" differ
diff --git "a/src/assets/user/\347\276\216\345\233\242\347\202\271\350\257\204.jpg" "b/src/assets/user/\347\276\216\345\233\242\347\202\271\350\257\204.jpg"
index 36e37e3..33fda33 100644
Binary files "a/src/assets/user/\347\276\216\345\233\242\347\202\271\350\257\204.jpg" and "b/src/assets/user/\347\276\216\345\233\242\347\202\271\350\257\204.jpg" differ
diff --git "a/src/assets/user/\350\205\276\350\256\257\350\264\242\347\273\217.jpg" "b/src/assets/user/\350\205\276\350\256\257\350\264\242\347\273\217.jpg"
index 1a2953c..d409f43 100644
Binary files "a/src/assets/user/\350\205\276\350\256\257\350\264\242\347\273\217.jpg" and "b/src/assets/user/\350\205\276\350\256\257\350\264\242\347\273\217.jpg" differ
diff --git "a/src/assets/user/\350\211\276\344\275\263\347\224\237\346\264\273.jpg" "b/src/assets/user/\350\211\276\344\275\263\347\224\237\346\264\273.jpg"
index b7380cf..ab32413 100644
Binary files "a/src/assets/user/\350\211\276\344\275\263\347\224\237\346\264\273.jpg" and "b/src/assets/user/\350\211\276\344\275\263\347\224\237\346\264\273.jpg" differ
diff --git "a/src/assets/user/\350\224\232\346\235\245\346\261\275\350\275\246.jpg" "b/src/assets/user/\350\224\232\346\235\245\346\261\275\350\275\246.jpg"
index b0ee1fe..c1df2ac 100644
Binary files "a/src/assets/user/\350\224\232\346\235\245\346\261\275\350\275\246.jpg" and "b/src/assets/user/\350\224\232\346\235\245\346\261\275\350\275\246.jpg" differ
diff --git "a/src/assets/user/\351\202\256\346\224\277\351\223\266\350\241\214.jpg" "b/src/assets/user/\351\202\256\346\224\277\351\223\266\350\241\214.jpg"
index 7847eac..02356c9 100644
Binary files "a/src/assets/user/\351\202\256\346\224\277\351\223\266\350\241\214.jpg" and "b/src/assets/user/\351\202\256\346\224\277\351\223\266\350\241\214.jpg" differ
diff --git "a/src/assets/user/\351\241\266\347\202\271\350\275\257\344\273\266.png" "b/src/assets/user/\351\241\266\347\202\271\350\275\257\344\273\266.png"
index 8eef1ff..8e80dd0 100644
Binary files "a/src/assets/user/\351\241\266\347\202\271\350\275\257\344\273\266.png" and "b/src/assets/user/\351\241\266\347\202\271\350\275\257\344\273\266.png" differ

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@linkis.apache.org
For additional commands, e-mail: commits-help@linkis.apache.org


[incubator-linkis-website] 37/50: Merge branch 'add-docs' of git.weoa.com:mumblefe/linkis-web-apache into add-docs

Posted by pe...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

peacewong pushed a commit to branch asf-staging
in repository https://gitbox.apache.org/repos/asf/incubator-linkis-website.git

commit 6be9ef0bf77e3b935384deae681c3e6e4487ac86
Merge: c6345d9 19f5506
Author: lucaszhu <lu...@webank.com>
AuthorDate: Mon Oct 18 15:26:01 2021 +0800

    Merge branch 'add-docs' of git.weoa.com:mumblefe/linkis-web-apache into add-docs

 src/assets/user/360.png                            | Bin 0 -> 14956 bytes
 src/assets/user/97wulian.png                       | Bin 28819 -> 19333 bytes
 "src/assets/user/T3\345\207\272\350\241\214.png"   | Bin 7258 -> 11196 bytes
 src/assets/user/aisino.png                         | Bin 46944 -> 33715 bytes
 src/assets/user/boss.png                           | Bin 8386 -> 9165 bytes
 src/assets/user/huazhong.jpg                       | Bin 12673 -> 9938 bytes
 src/assets/user/lianchuang.png                     | Bin 11438 -> 34598 bytes
 src/assets/user/mobtech..png                       | Bin 1829 -> 11203 bytes
 src/assets/user/others/360.png                     | Bin 14323 -> 0 bytes
 ...60\221\347\224\237\351\223\266\350\241\214.jpg" | Bin 16640 -> 0 bytes
 ...70\255\345\233\275\347\224\265\347\247\221.jpg" | Bin 5955 -> 0 bytes
 ...72\221\345\233\276\347\247\221\346\212\200.png" | Bin 35242 -> 0 bytes
 ...72\244\351\200\232\351\223\266\350\241\214.jpg" | Bin 8099 -> 0 bytes
 ...72\254\344\270\234\346\225\260\347\247\221.jpg" | Bin 7895 -> 0 bytes
 .../\345\244\251\347\277\274\344\272\221.png"      | Bin 39592 -> 0 bytes
 ...13\233\345\225\206\351\223\266\350\241\214.jpg" | Bin 10462 -> 0 bytes
 ...31\276\344\277\241\351\223\266\350\241\214.jpg" | Bin 6739 -> 0 bytes
 ...76\216\345\233\242\347\202\271\350\257\204.jpg" | Bin 10596 -> 0 bytes
 ...05\276\350\256\257\350\264\242\347\273\217.jpg" | Bin 14500 -> 0 bytes
 ...24\232\346\235\245\346\261\275\350\275\246.jpg" | Bin 7034 -> 0 bytes
 ...02\256\346\224\277\351\223\266\350\241\214.jpg" | Bin 14657 -> 0 bytes
 src/assets/user/xidian.jpg                         | Bin 12475 -> 9354 bytes
 src/assets/user/yitu.png                           | Bin 41437 -> 16224 bytes
 src/assets/user/zhongticaipng.png                  | Bin 31958 -> 22253 bytes
 ...70\207\347\247\221\351\207\207\347\255\221.png" | Bin 2468 -> 4479 bytes
 .../user/\344\270\234\346\226\271\351\200\232.png" | Bin 33873 -> 20974 bytes
 ...60\221\347\224\237\351\223\266\350\241\214.jpg" | Bin 0 -> 5007 bytes
 ...70\255\345\233\275\347\224\265\344\277\241.png" | Bin 6468 -> 11450 bytes
 ...70\255\345\233\275\347\224\265\347\247\221.jpg" | Bin 0 -> 5108 bytes
 ...70\255\351\200\232\344\272\221\344\273\223.png" | Bin 20138 -> 27653 bytes
 ...34\211\351\231\220\345\205\254\345\217\270.png" | Bin 10006 -> 19180 bytes
 ...61\237\345\256\236\351\252\214\345\256\244.png" | Bin 13145 -> 17558 bytes
 ...72\221\345\233\276\347\247\221\346\212\200.png" | Bin 0 -> 23360 bytes
 ...72\244\351\200\232\351\223\266\350\241\214.jpg" | Bin 0 -> 6173 bytes
 ...72\254\344\270\234\346\225\260\347\247\221.jpg" | Bin 0 -> 4260 bytes
 ...77\241\347\224\250\347\224\237\346\264\273.png" | Bin 3978 -> 10504 bytes
 .../user/\345\223\227\345\225\246\345\225\246.jpg" | Bin 5990 -> 2707 bytes
 ...34\210\345\244\226\345\220\214\345\255\246.png" | Bin 8081 -> 15296 bytes
 .../user/\345\244\251\347\277\274\344\272\221.png" | Bin 0 -> 24944 bytes
 "src/assets/user/\345\271\263\345\256\211.png"     | Bin 20795 -> 19563 bytes
 ...14\273\344\277\235\347\247\221\346\212\200.png" | Bin 2083 -> 9949 bytes
 ...72\221\345\276\231\347\247\221\346\212\200.png" | Bin 15448 -> 5315 bytes
 ...03\275\345\244\247\346\225\260\346\215\256.png" | Bin 13462 -> 20687 bytes
 ...13\233\345\225\206\351\223\266\350\241\214.jpg" | Bin 0 -> 5594 bytes
 ...34\211\351\231\220\345\205\254\345\217\270.png" | Bin 29500 -> 21785 bytes
 ...24\265\351\255\202\347\275\221\347\273\234.png" | Bin 5553 -> 8600 bytes
 ...41\224\345\255\220\345\210\206\346\234\237.png" | Bin 6968 -> 16286 bytes
 ...65\267\345\272\267\345\250\201\350\247\206.png" | Bin 22412 -> 27218 bytes
 ...20\206\346\203\263\346\261\275\350\275\246.png" | Bin 27672 -> 16511 bytes
 ...31\276\344\277\241\351\223\266\350\241\214.jpg" | Bin 0 -> 4048 bytes
 .../user/\347\231\276\346\234\233\344\272\221.png" | Bin 24473 -> 17617 bytes
 ...53\213\345\210\233\345\225\206\345\237\216.png" | Bin 24213 -> 27107 bytes
 ...72\242\350\261\241\344\272\221\350\205\276.png" | Bin 4596 -> 10362 bytes
 ...76\216\345\233\242\347\202\271\350\257\204.jpg" | Bin 0 -> 5183 bytes
 ...05\276\350\256\257\350\264\242\347\273\217.jpg" | Bin 0 -> 6136 bytes
 ...11\276\344\275\263\347\224\237\346\264\273.jpg" | Bin 5444 -> 4355 bytes
 ...20\250\346\221\251\350\200\266\344\272\221.png" | Bin 5501 -> 10090 bytes
 ...24\232\346\235\245\346\261\275\350\275\246.jpg" | Bin 0 -> 5672 bytes
 ...02\256\346\224\277\351\223\266\350\241\214.jpg" | Bin 0 -> 6134 bytes
 ...41\266\347\202\271\350\275\257\344\273\266.png" | Bin 8796 -> 12568 bytes
 src/pages/home/img.js                              |  50 +++++++++++++++++++++
 src/pages/home/index.vue                           |  15 +++----
 62 files changed, 55 insertions(+), 10 deletions(-)


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@linkis.apache.org
For additional commands, e-mail: commits-help@linkis.apache.org


[incubator-linkis-website] 45/50: Web visual optimization

Posted by pe...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

peacewong pushed a commit to branch asf-staging
in repository https://gitbox.apache.org/repos/asf/incubator-linkis-website.git

commit 23051157e6ba7436ada998ffcdee1437b6701712
Author: casionone <ca...@gmail.com>
AuthorDate: Thu Oct 28 19:38:56 2021 +0800

    Web visual optimization
---
 .asf.yaml                                              |   9 ++++++---
 assets/360.bc39c47a.png                                | Bin 14323 -> 0 bytes
 assets/360.cd40bc4b.png                                | Bin 0 -> 5121 bytes
 assets/404.f24f37c0.js                                 |   1 +
 "assets/97\347\211\251\350\201\224.159781fb.png"       | Bin 0 -> 5949 bytes
 "assets/97\347\211\251\350\201\224.2447251c.png"       | Bin 28819 -> 0 bytes
 assets/AddEngineConn.467c2210.js                       |   1 -
 assets/ECM-01.bb056ebe.png                             | Bin 0 -> 34340 bytes
 assets/ECM-02.a90e3890.png                             | Bin 0 -> 25340 bytes
 assets/Linkis1.0-architecture.be03428f.png             | Bin 0 -> 72168 bytes
 assets/Linkis_1.0_architecture.ba18dcdc.png            | Bin 0 -> 316746 bytes
 "assets/T3\345\207\272\350\241\214.1738b528.png"       | Bin 6413 -> 0 bytes
 "assets/T3\345\207\272\350\241\214.9d8b64de.png"       | Bin 0 -> 15872 bytes
 assets/add_an_engineConn_flow_chart.5a1c06c5.js        |   1 +
 assets/add_engine.b12c7e06.js                          |   1 +
 assets/after_linkis_bg.31ad71dc.png                    | Bin 0 -> 7029 bytes
 assets/after_linkis_cn.f311973b.png                    | Bin 0 -> 645519 bytes
 assets/after_linkis_en.c3ed71bf.png                    | Bin 111924 -> 0 bytes
 assets/after_linkis_en.eafe79c9.png                    | Bin 0 -> 33986 bytes
 assets/after_linkis_zh.bf948a76.png                    | Bin 0 -> 31918 bytes
 assets/app-manager-02.2aff8a98.png                     | Bin 0 -> 701283 bytes
 assets/app-manager-03.5aaff6ed.png                     | Bin 0 -> 69489 bytes
 assets/app_manager.bed25273.js                         |   1 +
 assets/banner_bg.b3665793.png                          | Bin 0 -> 136546 bytes
 assets/before_linkis_cn.6c6e76e4.png                   | Bin 0 -> 332201 bytes
 assets/before_linkis_en.076cf10c.png                   | Bin 142195 -> 0 bytes
 assets/before_linkis_en.58065890.png                   | Bin 0 -> 46019 bytes
 assets/before_linkis_zh.2ec86cff.png                   | Bin 0 -> 43458 bytes
 assets/bml-02.0eb3b26a.png                             | Bin 0 -> 55227 bytes
 assets/bml.59ba7d32.js                                 |   1 +
 "assets/boss\347\233\264\350\201\230.5353720c.png"     | Bin 8386 -> 0 bytes
 assets/computation_governance.3a8ad59d.js              |   1 +
 assets/configuration.a2fe2e50.js                       |   1 +
 assets/connectivity.7ada0256.png                       | Bin 0 -> 5136 bytes
 ...nsoleUserManual.d2af8060.js => console.ec03cad4.js} |   2 +-
 assets/context_service.13b75bb1.js                     |   1 +
 assets/contributing.e1c72372.js                        |   1 +
 assets/controllability.c2cb45d7.png                    | Bin 0 -> 4808 bytes
 assets/datasource.d410aafc.js                          |   1 +
 assets/description.95f7a296.png                        | Bin 28065 -> 0 bytes
 assets/description.bee4d876.png                        | Bin 0 -> 44834 bytes
 ...tween1.0&0.x.7e9c261e.js => difference.546832ac.js} |   2 +-
 ...distributed.6a61f64e.js => distributed.89154171.js} |   2 +-
 assets/download.0330f828.css                           |   1 +
 assets/download.4f121175.js                            |   1 +
 assets/download.8c6e40f3.css                           |   1 -
 assets/download.c3e47cb5.js                            |   1 -
 assets/engine_start_process.f86c8e8a.js                |   1 +
 assets/engineconn-01.b4d20b76.png                      | Bin 0 -> 157753 bytes
 assets/engineconn.efe3f534.js                          |   1 +
 assets/engineconn_manager.563abdf4.js                  |   1 +
 assets/engineconn_plugin.0c1c8f49.js                   |   1 +
 assets/{engins.2a41b1a0.js => engins.a82546f2.js}      |   2 +-
 assets/event.29571be3.js                               |   1 -
 assets/event.b677bf34.js                               |   1 +
 assets/features_bg.2b28bb9d.png                        | Bin 0 -> 120511 bytes
 assets/gateway.b29c03a6.js                             |   1 +
 assets/gateway_server_dispatcher.d2241ca2.png          | Bin 0 -> 47910 bytes
 assets/gateway_server_global.9fae8e50.png              | Bin 0 -> 36652 bytes
 assets/gatway_websocket.3d3c7dfa.png                   | Bin 0 -> 16292 bytes
 assets/hive-config.b2dec89f.png                        | Bin 0 -> 44717 bytes
 assets/hive-run.6aa39a3f.png                           | Bin 0 -> 31403 bytes
 assets/hive.c59e195d.js                                |   1 +
 .../{HowToUse.212b1469.js => how_to_use.24a56e5f.js}   |   2 +-
 assets/index.11bb1268.js                               |   1 +
 assets/index.187b32e3.js                               |   1 +
 assets/index.2b54ad83.css                              |   1 +
 assets/index.2da1dc18.js                               |   1 -
 assets/{index.c93f08c9.js => index.491f620b.js}        |   2 +-
 assets/index.5a6d4e60.js                               |   1 -
 assets/index.6baed6d3.css                              |   1 +
 assets/index.77f4f836.css                              |   1 -
 assets/index.82f016e4.css                              |   1 -
 assets/index.8d1f9740.js                               |   1 -
 assets/index.97098d19.js                               |   1 +
 assets/index.9c41b9ea.js                               |   1 +
 assets/index.9fb4d9d9.js                               |   1 +
 assets/index.b0fb8393.js                               |   1 +
 assets/index.ba4cbe23.js                               |   1 +
 assets/index.c319b82e.js                               |   1 +
 assets/index.c51fb506.js                               |   1 -
 assets/index.c935709d.js                               |   1 +
 assets/{main.3104c8a7.js => index.cd1b8a2e.js}         |   2 +-
 assets/jdbc-conf.7cf06ba9.js                           |   1 +
 assets/jdbc-conf.9520dcb1.png                          | Bin 0 -> 46113 bytes
 assets/jdbc-run.b39db252.png                           | Bin 0 -> 21937 bytes
 assets/jdbc.4fc1629f.js                                |   1 +
 ...bmission.cf4b12e7.js => job_submission.5703dc56.js} |   2 +-
 assets/label-manager-01.530390e5.png                   | Bin 0 -> 39221 bytes
 assets/label_manager.6b95dcc1.js                       |   1 +
 assets/label_manager_builder.caf90f90.png              | Bin 0 -> 62978 bytes
 assets/label_manager_global.91aa80e7.png               | Bin 0 -> 14988 bytes
 assets/label_manager_scorer.fd531e4a.png               | Bin 0 -> 72977 bytes
 assets/linkis-computation-gov-01.6035615d.png          | Bin 0 -> 89527 bytes
 assets/linkis-computation-gov-02.43fad13f.png          | Bin 0 -> 179368 bytes
 assets/linkis-contextservice-01.3cb67fd1.png           | Bin 0 -> 9188 bytes
 assets/linkis-contextservice-02.321a8427.png           | Bin 0 -> 4953 bytes
 assets/linkis-engineconn-plugin-01.ca85467f.png        | Bin 0 -> 21864 bytes
 assets/linkis-intro-01.71fb2144.png                    | Bin 0 -> 413878 bytes
 assets/linkis-intro-03.65d1a7b1.png                    | Bin 0 -> 738141 bytes
 assets/linkis-manager-01.fb5e443a.png                  | Bin 0 -> 183082 bytes
 assets/linkis-microservice-gov-01.2e1292b0.png         | Bin 0 -> 46380 bytes
 assets/linkis-microservice-gov-03.9ece64b6.png         | Bin 0 -> 30388 bytes
 assets/linkis-publicservice-01.bc9338bf.png            | Bin 0 -> 25269 bytes
 assets/linkis.cdbb993f.js                              |   1 +
 assets/linkis.d0790396.js                              |   1 -
 .../{CliManual.8440dc3f.js => linkis_cli.56d856c4.js}  |   2 +-
 assets/logo.fb11029b.png                               | Bin 9114 -> 0 bytes
 assets/manager.6973d707.js                             |   1 +
 assets/microservice_governance.e72bfd46.js             |   1 +
 assets/mobtech.b333dc91.png                            | Bin 11676 -> 0 bytes
 assets/mobtech.e2567e09.png                            | Bin 0 -> 18229 bytes
 assets/orchestration.e1c8bd97.png                      | Bin 0 -> 4545 bytes
 assets/public-enhencement-architecture.6597436f.png    | Bin 0 -> 24844 bytes
 assets/public_enhancement.626e701e.js                  |   1 +
 assets/public_service.8f4dd101.js                      |   1 +
 assets/pyspakr-run.9c36d9ef.png                        | Bin 0 -> 43552 bytes
 assets/python-run.25fd075c.png                         | Bin 0 -> 61451 bytes
 assets/python.17efbf15.js                              |   1 +
 assets/queue-set.3007a0ca.png                          | Bin 0 -> 41298 bytes
 assets/resource-manager-01.86e09124.png                | Bin 0 -> 71086 bytes
 assets/resource_manager.ce0e10f4.js                    |   1 +
 assets/rm-03.8382829b.png                              | Bin 0 -> 52466 bytes
 assets/rm-04.2385c2db.png                              | Bin 0 -> 36324 bytes
 assets/rm-05.347294cd.png                              | Bin 0 -> 34066 bytes
 assets/rm-06.dde9d64d.png                              | Bin 0 -> 44105 bytes
 assets/scala-run.62f19952.png                          | Bin 0 -> 43959 bytes
 assets/searching_keywords.41a60149.png                 | Bin 0 -> 53652 bytes
 assets/shell-run.6a5566b5.png                          | Bin 0 -> 100312 bytes
 assets/shell.06015d78.js                               |   1 +
 assets/spark-conf.9e59a279.png                         | Bin 0 -> 53397 bytes
 assets/spark.e086b785.js                               |   1 +
 .../{structure.1bc4dbfc.js => structure.2309b7ab.js}   |   2 +-
 assets/{team.13ce5e55.css => team.04f1ab61.css}        |   2 +-
 assets/team.c0178c87.js                                |   1 -
 assets/team.e10d896f.js                                |   1 +
 assets/tuning.45470047.js                              |   1 +
 assets/{UserManual.905b8e9a.js => user.4c9df01e.js}    |   2 +-
 assets/{vendor.12a5b039.js => vendor.1180558b.js}      |  10 +++++-----
 assets/wedatasphere_contact_01.ce92bdb6.png            | Bin 0 -> 217762 bytes
 assets/wedatasphere_stack_Linkis.efef3aa3.png          | Bin 0 -> 203466 bytes
 assets/workflow.72652f4e.js                            |   1 +
 .../\344\270\234\346\226\271\351\200\232.4814e53c.png" | Bin 33873 -> 0 bytes
 .../\344\270\234\346\226\271\351\200\232.b2758d5e.png" | Bin 0 -> 6504 bytes
 ...3\345\275\251\347\247\221\346\212\200.d1ffcc7d.png" | Bin 31958 -> 0 bytes
 ...3\345\275\251\347\247\221\346\212\200.f0458dd2.png" | Bin 0 -> 6279 bytes
 ...5\345\233\275\347\224\265\347\247\221.5bf9bcd0.png" | Bin 0 -> 8258 bytes
 ...5\345\233\275\347\224\265\347\247\221.864feafc.jpg" | Bin 5955 -> 0 bytes
 ...2\344\277\241\346\234\215\345\212\241.6242b949.png" | Bin 13177 -> 0 bytes
 ...2\344\277\241\346\234\215\345\212\241.de1dbff8.png" | Bin 0 -> 25306 bytes
 ...5\351\200\232\344\272\221\344\273\223.a785e23f.png" | Bin 20138 -> 0 bytes
 ...5\351\200\232\344\272\221\344\273\223.c02b68a5.png" | Bin 0 -> 25395 bytes
 ...7\345\256\236\351\252\214\345\256\244.46d52eec.png" | Bin 11054 -> 0 bytes
 ...7\345\256\236\351\252\214\345\256\244.657671b0.png" | Bin 0 -> 29997 bytes
 ...1\345\276\222\347\247\221\346\212\200.d6b063f3.png" | Bin 35242 -> 0 bytes
 ...1\345\276\222\347\247\221\346\212\200.e101f4b2.png" | Bin 0 -> 9693 bytes
 "assets/\344\276\235\345\233\276.c76de0a6.png"         | Bin 0 -> 5467 bytes
 "assets/\344\276\235\345\233\276.e1935876.png"         | Bin 41437 -> 0 bytes
 ...1\347\224\250\347\224\237\346\264\273.bce0bb69.png" | Bin 0 -> 5910 bytes
 ...1\346\212\200\345\244\247\345\255\246.79502b9d.jpg" | Bin 12673 -> 0 bytes
 ...1\346\212\200\345\244\247\345\255\246.fcf29603.png" | Bin 0 -> 47926 bytes
 .../\345\223\227\345\225\246\345\225\246.045c3b9e.jpg" | Bin 5990 -> 0 bytes
 .../\345\223\227\345\225\246\345\225\246.2eef0fe4.png" | Bin 0 -> 26929 bytes
 ...0\345\244\226\345\220\214\345\255\246.2bb21f07.png" | Bin 0 -> 12193 bytes
 ...0\345\244\226\345\220\214\345\255\246.9c81d026.png" | Bin 8081 -> 0 bytes
 .../\345\244\251\347\277\274\344\272\221.719b17b2.png" | Bin 0 -> 9317 bytes
 .../\345\244\251\347\277\274\344\272\221.ee336756.png" | Bin 39592 -> 0 bytes
 "assets/\345\271\263\345\256\211.1f145bbc.png"         | Bin 0 -> 7990 bytes
 "assets/\345\271\263\345\256\211.d0212a59.png"         | Bin 20795 -> 0 bytes
 ...5\345\244\247\346\225\260\346\215\256.3da8e88f.png" | Bin 0 -> 24074 bytes
 ...5\345\244\247\346\225\260\346\215\256.d21c18fc.png" | Bin 7862 -> 0 bytes
 ...1\351\231\220\345\205\254\345\217\270.66cf4318.png" | Bin 29500 -> 0 bytes
 ...1\351\231\220\345\205\254\345\217\270.903c953e.png" | Bin 0 -> 9162 bytes
 ...5\351\255\202\347\275\221\347\273\234.3ec071b8.png" | Bin 5553 -> 0 bytes
 ...4\345\255\220\345\210\206\346\234\237.55aa406b.png" | Bin 6968 -> 0 bytes
 ...4\345\255\220\345\210\206\346\234\237.f980f03b.png" | Bin 0 -> 21206 bytes
 ...7\345\272\267\345\250\201\350\247\206.70f8122b.png" | Bin 22412 -> 0 bytes
 ...7\345\272\267\345\250\201\350\247\206.fb60f896.png" | Bin 0 -> 8505 bytes
 ...6\346\203\263\346\261\275\350\275\246.0123a918.png" | Bin 27672 -> 0 bytes
 ...6\346\203\263\346\261\275\350\275\246.c5e2739b.png" | Bin 0 -> 6895 bytes
 .../\347\231\276\346\234\233\344\272\221.77c04429.png" | Bin 0 -> 6790 bytes
 .../\347\231\276\346\234\233\344\272\221.c2c1293f.png" | Bin 24473 -> 0 bytes
 ...3\345\210\233\345\225\206\345\237\216.294fde8b.png" | Bin 24213 -> 0 bytes
 ...3\345\210\233\345\225\206\345\237\216.7f44a468.png" | Bin 0 -> 49107 bytes
 ...2\350\261\241\344\272\221\350\205\276.7417b5e6.png" | Bin 4596 -> 0 bytes
 ...2\350\261\241\344\272\221\350\205\276.929a5839.png" | Bin 0 -> 4757 bytes
 ...4\345\210\233\346\231\272\350\236\215.188edcec.png" | Bin 11438 -> 0 bytes
 ...4\345\210\233\346\231\272\350\236\215.808a8eaa.png" | Bin 0 -> 10382 bytes
 ...2\345\244\251\344\277\241\346\201\257.23b0d23c.png" | Bin 46944 -> 0 bytes
 ...2\345\244\251\344\277\241\346\201\257.e12022d3.png" | Bin 0 -> 11949 bytes
 ...6\344\275\263\347\224\237\346\264\273.26403b56.png" | Bin 0 -> 18851 bytes
 ...6\344\275\263\347\224\237\346\264\273.b508c1dc.jpg" | Bin 5444 -> 0 bytes
 "assets/\350\215\243\350\200\200.5a89cf66.png"         | Bin 0 -> 4898 bytes
 "assets/\350\215\243\350\200\200.ceda8b1e.png"         | Bin 7780 -> 0 bytes
 ...0\346\221\251\350\200\266\344\272\221.36d45d17.png" | Bin 0 -> 26898 bytes
 ...0\346\221\251\350\200\266\344\272\221.63ed5828.png" | Bin 19705 -> 0 bytes
 ...2\346\235\245\346\261\275\350\275\246.422c536e.png" | Bin 0 -> 7464 bytes
 ...2\346\235\245\346\261\275\350\275\246.be672a01.jpg" | Bin 7034 -> 0 bytes
 ...1\346\212\200\345\244\247\345\255\246.3762b76e.jpg" | Bin 12475 -> 0 bytes
 ...1\346\212\200\345\244\247\345\255\246.b4ea0700.png" | Bin 0 -> 10138 bytes
 ...6\347\202\271\350\275\257\344\273\266.389df8d5.png" | Bin 8796 -> 0 bytes
 ...6\347\202\271\350\275\257\344\273\266.e6044237.png" | Bin 0 -> 11299 bytes
 index.html                                             |   7 ++++---
 203 files changed, 72 insertions(+), 35 deletions(-)

diff --git a/.asf.yaml b/.asf.yaml
index 9301abb..87562ad 100644
--- a/.asf.yaml
+++ b/.asf.yaml
@@ -1,4 +1,3 @@
-#
 # Licensed to the Apache Software Foundation (ASF) under one or more
 # contributor license agreements.  See the NOTICE file distributed with
 # this work for additional information regarding copyright ownership.
@@ -17,7 +16,7 @@
 
 github:
   description: Apache Linkis documents
-  homepage: https://linkis.staged.apache.org/
+  homepage: https://linkis.apache.org/
   labels:
     - linkis
     - website
@@ -25,4 +24,8 @@ github:
 # If this branch is asf-staging, it will be published to https://linkis.staged.apache.org/
 staging:
   profile: ~
-  whoami:  asf-staging
\ No newline at end of file
+  whoami:  asf-staging
+
+# asf-site branch will show up at https://linkis.apache.org
+publish:
+  whoami:  asf-site
\ No newline at end of file
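Applied together, the hunks above leave .asf.yaml in roughly this state (reconstructed from the diff context; the Apache license header is elided):

    github:
      description: Apache Linkis documents
      homepage: https://linkis.apache.org/
      labels:
        - linkis
        - website

    # If this branch is asf-staging, it will be published to https://linkis.staged.apache.org/
    staging:
      profile: ~
      whoami:  asf-staging

    # asf-site branch will show up at https://linkis.apache.org
    publish:
      whoami:  asf-site

In other words, the asf-staging branch keeps feeding the staged site, the new publish block points the production host at the asf-site branch, and the GitHub homepage link switches from the staged URL to the production one.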
diff --git a/assets/360.bc39c47a.png b/assets/360.bc39c47a.png
deleted file mode 100644
index 74b5d13..0000000
Binary files a/assets/360.bc39c47a.png and /dev/null differ
diff --git a/assets/360.cd40bc4b.png b/assets/360.cd40bc4b.png
new file mode 100644
index 0000000..460c54e
Binary files /dev/null and b/assets/360.cd40bc4b.png differ
diff --git a/assets/404.f24f37c0.js b/assets/404.f24f37c0.js
new file mode 100644
index 0000000..db61db0
--- /dev/null
+++ b/assets/404.f24f37c0.js
@@ -0,0 +1 @@
+import{r as l,o as n,c as a,b as u}from"./vendor.1180558b.js";const o={key:0,class:"ctn-block normal-page"},e=[u("h1",null,"Sorry,Page Not Found!!!",-1),u("br",null,null,-1),u("p",null,"You can contact us via email(dev@linkis.incubator.apache.org) or submitting an issue on github",-1),u("br",null,null,-1)],r={key:1,class:"ctn-block normal-page"},s=[u("h1",null,"抱歉,请求的资源未找到!!!",-1),u("br",null,null,-1),u("p",null,"您可以通过邮件(dev@linkis.incubator.apache.org)告知我们或则通过github提交issue.",-1),u("br", [...]
diff --git "a/assets/97\347\211\251\350\201\224.159781fb.png" "b/assets/97\347\211\251\350\201\224.159781fb.png"
new file mode 100644
index 0000000..c7c50f3
Binary files /dev/null and "b/assets/97\347\211\251\350\201\224.159781fb.png" differ
diff --git "a/assets/97\347\211\251\350\201\224.2447251c.png" "b/assets/97\347\211\251\350\201\224.2447251c.png"
deleted file mode 100644
index 5b828b1..0000000
Binary files "a/assets/97\347\211\251\350\201\224.2447251c.png" and /dev/null differ
diff --git a/assets/AddEngineConn.467c2210.js b/assets/AddEngineConn.467c2210.js
deleted file mode 100644
index 9d452f8..0000000
--- a/assets/AddEngineConn.467c2210.js
+++ /dev/null
@@ -1 +0,0 @@
-import{o as n,c as e,b as i,e as t,r as l,l as a,u as o}from"./vendor.12a5b039.js";var r="/assets/add_an_engineConn_flow_chart.d10a8d14.png";const s={class:"markdown-body"},u=[i("h1",null,"How to add an EngineConn",-1),i("p",null,"Adding EngineConn is one of the core processes of the computing task preparation phase of Linkis computing governance. It mainly includes the following steps. First, client side (Entrance or user client) initiates a request for a new EngineConn to LinkisManager [...]
diff --git a/assets/ECM-01.bb056ebe.png b/assets/ECM-01.bb056ebe.png
new file mode 100644
index 0000000..cc83842
Binary files /dev/null and b/assets/ECM-01.bb056ebe.png differ
diff --git a/assets/ECM-02.a90e3890.png b/assets/ECM-02.a90e3890.png
new file mode 100644
index 0000000..303f37a
Binary files /dev/null and b/assets/ECM-02.a90e3890.png differ
diff --git a/assets/Linkis1.0-architecture.be03428f.png b/assets/Linkis1.0-architecture.be03428f.png
new file mode 100644
index 0000000..497e8fe
Binary files /dev/null and b/assets/Linkis1.0-architecture.be03428f.png differ
diff --git a/assets/Linkis_1.0_architecture.ba18dcdc.png b/assets/Linkis_1.0_architecture.ba18dcdc.png
new file mode 100644
index 0000000..9b6cc90
Binary files /dev/null and b/assets/Linkis_1.0_architecture.ba18dcdc.png differ
diff --git "a/assets/T3\345\207\272\350\241\214.1738b528.png" "b/assets/T3\345\207\272\350\241\214.1738b528.png"
deleted file mode 100644
index f245038..0000000
Binary files "a/assets/T3\345\207\272\350\241\214.1738b528.png" and /dev/null differ
diff --git "a/assets/T3\345\207\272\350\241\214.9d8b64de.png" "b/assets/T3\345\207\272\350\241\214.9d8b64de.png"
new file mode 100644
index 0000000..603a140
Binary files /dev/null and "b/assets/T3\345\207\272\350\241\214.9d8b64de.png" differ
diff --git a/assets/add_an_engineConn_flow_chart.5a1c06c5.js b/assets/add_an_engineConn_flow_chart.5a1c06c5.js
new file mode 100644
index 0000000..f1e6859
--- /dev/null
+++ b/assets/add_an_engineConn_flow_chart.5a1c06c5.js
@@ -0,0 +1 @@
+var a="/assets/add_an_engineConn_flow_chart.d10a8d14.png";export{a as _};
diff --git a/assets/add_engine.b12c7e06.js b/assets/add_engine.b12c7e06.js
new file mode 100644
index 0000000..cef8132
--- /dev/null
+++ b/assets/add_engine.b12c7e06.js
@@ -0,0 +1 @@
+import{_ as n}from"./add_an_engineConn_flow_chart.5a1c06c5.js";import{o as e,c as i,b as t,e as l,r as a,l as o,u as r}from"./vendor.1180558b.js";const s={class:"markdown-body"},u=[t("h1",null,"How to add an EngineConn",-1),t("p",null,"Adding EngineConn is one of the core processes of the computing task preparation phase of Linkis computing governance. It mainly includes the following steps. First, client side (Entrance or user client) initiates a request for a new EngineConn to LinkisMa [...]
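The two modules above show the shared-asset pattern introduced by this build: the flow-chart image URL lives in its own tiny module (add_an_engineConn_flow_chart.5a1c06c5.js), and every page chunk that renders the diagram imports it rather than embedding the path. Deminified, the producer/consumer pair looks like this (flowChartUrl is an illustrative name; the bundle binds it to single letters):

    // add_an_engineConn_flow_chart.5a1c06c5.js -- shared asset module
    var a = "/assets/add_an_engineConn_flow_chart.d10a8d14.png";
    export { a as _ };

    // add_engine.b12c7e06.js and engine_start_process.f86c8e8a.js both start with:
    import { _ as flowChartUrl } from "./add_an_engineConn_flow_chart.5a1c06c5.js";
    // ...and use flowChartUrl as the diagram's <img> src, so the URL is defined once.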
diff --git a/assets/after_linkis_bg.31ad71dc.png b/assets/after_linkis_bg.31ad71dc.png
new file mode 100644
index 0000000..8f8669a
Binary files /dev/null and b/assets/after_linkis_bg.31ad71dc.png differ
diff --git a/assets/after_linkis_cn.f311973b.png b/assets/after_linkis_cn.f311973b.png
new file mode 100644
index 0000000..b94beab
Binary files /dev/null and b/assets/after_linkis_cn.f311973b.png differ
diff --git a/assets/after_linkis_en.c3ed71bf.png b/assets/after_linkis_en.c3ed71bf.png
deleted file mode 100644
index 1daacf8..0000000
Binary files a/assets/after_linkis_en.c3ed71bf.png and /dev/null differ
diff --git a/assets/after_linkis_en.eafe79c9.png b/assets/after_linkis_en.eafe79c9.png
new file mode 100644
index 0000000..b7f39ec
Binary files /dev/null and b/assets/after_linkis_en.eafe79c9.png differ
diff --git a/assets/after_linkis_zh.bf948a76.png b/assets/after_linkis_zh.bf948a76.png
new file mode 100644
index 0000000..3d5d7a1
Binary files /dev/null and b/assets/after_linkis_zh.bf948a76.png differ
diff --git a/assets/app-manager-02.2aff8a98.png b/assets/app-manager-02.2aff8a98.png
new file mode 100644
index 0000000..858fbf2
Binary files /dev/null and b/assets/app-manager-02.2aff8a98.png differ
diff --git a/assets/app-manager-03.5aaff6ed.png b/assets/app-manager-03.5aaff6ed.png
new file mode 100644
index 0000000..8f33259
Binary files /dev/null and b/assets/app-manager-03.5aaff6ed.png differ
diff --git a/assets/app_manager.bed25273.js b/assets/app_manager.bed25273.js
new file mode 100644
index 0000000..0c1d272
--- /dev/null
+++ b/assets/app_manager.bed25273.js
@@ -0,0 +1 @@
+import{o as e,c as n,b as i,e as a,r as t,l as r,u as o}from"./vendor.1180558b.js";var s="/assets/app-manager-03.5aaff6ed.png",g="/assets/app-manager-02.2aff8a98.png";const l={class:"markdown-body"},c=[i("h2",null,"1. Background",-1),i("p",null,"        The Entrance module of the old version of Linkis is responsible for too much responsibilities, the management ability of the Engine is weak, and it is not easy to follow-up expansion, the AppManager module is newly extracted to complete t [...]
diff --git a/assets/banner_bg.b3665793.png b/assets/banner_bg.b3665793.png
new file mode 100644
index 0000000..3cda7cd
Binary files /dev/null and b/assets/banner_bg.b3665793.png differ
diff --git a/assets/before_linkis_cn.6c6e76e4.png b/assets/before_linkis_cn.6c6e76e4.png
new file mode 100644
index 0000000..914d38b
Binary files /dev/null and b/assets/before_linkis_cn.6c6e76e4.png differ
diff --git a/assets/before_linkis_en.076cf10c.png b/assets/before_linkis_en.076cf10c.png
deleted file mode 100644
index 7bdaf4a..0000000
Binary files a/assets/before_linkis_en.076cf10c.png and /dev/null differ
diff --git a/assets/before_linkis_en.58065890.png b/assets/before_linkis_en.58065890.png
new file mode 100644
index 0000000..e122650
Binary files /dev/null and b/assets/before_linkis_en.58065890.png differ
diff --git a/assets/before_linkis_zh.2ec86cff.png b/assets/before_linkis_zh.2ec86cff.png
new file mode 100644
index 0000000..1832b5f
Binary files /dev/null and b/assets/before_linkis_zh.2ec86cff.png differ
diff --git a/assets/bml-02.0eb3b26a.png b/assets/bml-02.0eb3b26a.png
new file mode 100644
index 0000000..fed79f7
Binary files /dev/null and b/assets/bml-02.0eb3b26a.png differ
diff --git a/assets/bml.59ba7d32.js b/assets/bml.59ba7d32.js
new file mode 100644
index 0000000..aeb6d95
--- /dev/null
+++ b/assets/bml.59ba7d32.js
@@ -0,0 +1 @@
+import{o as l,c as e,b as t,r as n,l as r,u}from"./vendor.1180558b.js";const o={class:"markdown-body"},s=[t("h2",null,"Background",-1),t("p",null,"BML (Material Library Service) is a material management system of linkis, which is mainly used to store various file data of users, including user scripts, resource files, third-party Jar packages, etc., and can also store class libraries that need to be used when the engine is running.",-1),t("p",null,"It has the following functions:",-1),t(" [...]
diff --git "a/assets/boss\347\233\264\350\201\230.5353720c.png" "b/assets/boss\347\233\264\350\201\230.5353720c.png"
deleted file mode 100644
index 17bb2b2..0000000
Binary files "a/assets/boss\347\233\264\350\201\230.5353720c.png" and /dev/null differ
diff --git a/assets/computation_governance.3a8ad59d.js b/assets/computation_governance.3a8ad59d.js
new file mode 100644
index 0000000..a1542a9
--- /dev/null
+++ b/assets/computation_governance.3a8ad59d.js
@@ -0,0 +1 @@
+import{o as n,c as e,b as i,e as a,r as l,l as r,u as t}from"./vendor.1180558b.js";var o="/assets/linkis-computation-gov-01.6035615d.png",s="/assets/linkis-computation-gov-02.43fad13f.png";const u={class:"markdown-body"},g=[i("h2",null,"Background",-1),i("p",null,[i("strong",null,"The architecture of Linkis0.X mainly has the following problems")],-1),i("ol",null,[i("li",null,"The boundary between the core processing flow and the hierarchical module is blurred:")],-1),i("ul",null,[i("li", [...]
diff --git a/assets/configuration.a2fe2e50.js b/assets/configuration.a2fe2e50.js
new file mode 100644
index 0000000..eb897fa
--- /dev/null
+++ b/assets/configuration.a2fe2e50.js
@@ -0,0 +1 @@
+import{o as t,c as e,m as i,r as n,l as s,u as d}from"./vendor.1180558b.js";const r={class:"markdown-body"},a=[i("<h1>Linkis1.0 Configurations</h1><blockquote><p>The configuration of Linkis1.0 is simplified on the basis of Linkis0.x. A public configuration file linkis.properties is provided in the conf directory to avoid the need for common configuration parameters to be configured in multiple microservices at the same time. This document will list the parameters of Linkis1.0 in modules. [...]
diff --git a/assets/connectivity.7ada0256.png b/assets/connectivity.7ada0256.png
new file mode 100644
index 0000000..15ba7a5
Binary files /dev/null and b/assets/connectivity.7ada0256.png differ
diff --git a/assets/ConsoleUserManual.d2af8060.js b/assets/console.ec03cad4.js
similarity index 99%
rename from assets/ConsoleUserManual.d2af8060.js
rename to assets/console.ec03cad4.js
index 05b335a..ed02d1b 100644
--- a/assets/ConsoleUserManual.d2af8060.js
+++ b/assets/console.ec03cad4.js
@@ -1 +1 @@
-import{o as e,c as l,b as n,e as t,r as a,l as i,u as o}from"./vendor.12a5b039.js";var u="/assets/global_history_interface.68d7d00e.png",s="/assets/global_history_query_button.c9058b17.png",r="/assets/task_execution_log_of_a_single_task.cf40fba8.png",c="/assets/administrator_view.7c4869c3.png",m="/assets/resource_management_interface.1334783f.png",p="/assets/parameter_configuration_interface.6160c166.png",g="/assets/edit_directory.410557fd.png",h="/assets/new_application_type.90ca0c6b.pn [...]
+import{o as e,c as l,b as n,e as t,r as a,l as i,u as o}from"./vendor.1180558b.js";var u="/assets/global_history_interface.68d7d00e.png",s="/assets/global_history_query_button.c9058b17.png",r="/assets/task_execution_log_of_a_single_task.cf40fba8.png",c="/assets/administrator_view.7c4869c3.png",m="/assets/resource_management_interface.1334783f.png",p="/assets/parameter_configuration_interface.6160c166.png",g="/assets/edit_directory.410557fd.png",h="/assets/new_application_type.90ca0c6b.pn [...]
diff --git a/assets/context_service.13b75bb1.js b/assets/context_service.13b75bb1.js
new file mode 100644
index 0000000..68a2d2e
--- /dev/null
+++ b/assets/context_service.13b75bb1.js
@@ -0,0 +1 @@
+import{o as e,c as l,b as t,e as n,r as a,l as i,u as o}from"./vendor.1180558b.js";var r="/assets/linkis-contextservice-01.3cb67fd1.png",s="/assets/linkis-contextservice-02.321a8427.png";const u={class:"markdown-body"},d=[t("h2",null,[t("strong",null,"Background")],-1),t("h3",null,[t("strong",null,"What is Context")],-1),t("p",null,"All necessary information to keep a certain operation going on. For example: reading three books at the same time, the page number of each book has been turn [...]
diff --git a/assets/contributing.e1c72372.js b/assets/contributing.e1c72372.js
new file mode 100644
index 0000000..21ffb34
--- /dev/null
+++ b/assets/contributing.e1c72372.js
@@ -0,0 +1 @@
+import{o as e,c as t,m as o,r as n,b as i,l as r,u as s}from"./vendor.1180558b.js";const a={class:"markdown-body"},c=[o("<h1>Contributing</h1><p>Thank you very much for contributing to the Linkis project! Before participating in the contribution, please read the following guidelines carefully.</p><h2>1. Contribution category</h2><h3>1.1 Bug feedback and fix</h3><p>We suggest that whether it is bug feedback or repair, you should create an issue first to describe the status of the bug in d [...]
diff --git a/assets/controllability.c2cb45d7.png b/assets/controllability.c2cb45d7.png
new file mode 100644
index 0000000..d6f31bc
Binary files /dev/null and b/assets/controllability.c2cb45d7.png differ
diff --git a/assets/datasource.d410aafc.js b/assets/datasource.d410aafc.js
new file mode 100644
index 0000000..7370e85
--- /dev/null
+++ b/assets/datasource.d410aafc.js
@@ -0,0 +1 @@
+import{o as e,c as o,b as s,r as t,l as a,u as r}from"./vendor.1180558b.js";const l={class:"markdown-body"},n=[s("p",null,"todo",-1)],d={setup:(s,{expose:t})=>(t({frontmatter:{}}),(s,t)=>(e(),o("div",l,n)))},p={class:"markdown-body"},u=[s("p",null,"待上传",-1)],c={setup:(s,{expose:t})=>(t({frontmatter:{}}),(s,t)=>(e(),o("div",p,u)))},m={setup(o){const s=t(localStorage.getItem("locale")||"en");return(o,t)=>"en"===s.value?(e(),a(r(d),{key:0})):(e(),a(r(c),{key:1}))}};export{m as default};
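The one-line module above also illustrates the site's locale switch in minified form: the locale is read from localStorage with an "en" fallback, and either the English placeholder ("todo") or the Chinese one ("待上传", i.e. "to be uploaded") is rendered accordingly. A deminified sketch of the same logic (names are illustrative; the bundle uses single-letter bindings):

    import { ref, h } from 'vue'
    import EnPage from './datasource-en'   // renders the "todo" placeholder
    import ZhPage from './datasource-zh'   // renders the Chinese placeholder

    export default {
      setup() {
        // Read once at setup time; falls back to English when unset.
        const locale = ref(localStorage.getItem('locale') || 'en')
        return () => (locale.value === 'en' ? h(EnPage) : h(ZhPage))
      }
    }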
diff --git a/assets/description.95f7a296.png b/assets/description.95f7a296.png
deleted file mode 100644
index f86c34b..0000000
Binary files a/assets/description.95f7a296.png and /dev/null differ
diff --git a/assets/description.bee4d876.png b/assets/description.bee4d876.png
new file mode 100644
index 0000000..5847056
Binary files /dev/null and b/assets/description.bee4d876.png differ
diff --git a/assets/DifferenceBetween1.0&0.x.7e9c261e.js b/assets/difference.546832ac.js
similarity index 99%
rename from assets/DifferenceBetween1.0&0.x.7e9c261e.js
rename to assets/difference.546832ac.js
index 8eece2e..45231f9 100644
--- a/assets/DifferenceBetween1.0&0.x.7e9c261e.js
+++ b/assets/difference.546832ac.js
@@ -1 +1 @@
-import{o as n,c as e,b as i,e as a,r as t,l as s,u as l}from"./vendor.12a5b039.js";var r="/assets/Linkis0.X_services_list.984b5164.png",o="/assets/Linkis1.0_services_list.72702c4a.png",c="/assets/Linkis0.X_newengine_architecture.76e9d9b8.png",g="/assets/Linkis1.0_newengine_architecture.e98645d5.png",u="/assets/Linkis1.0_newengine_initialization.6acbb6c3.png",p="/assets/Linkis1.0_engineconn_architecture.7d420481.png";const d={class:"markdown-body"},m=[i("h2",null,"1. Brief Description",-1 [...]
+import{o as n,c as e,b as i,e as a,r as t,l as s,u as l}from"./vendor.1180558b.js";var r="/assets/Linkis0.X_services_list.984b5164.png",o="/assets/Linkis1.0_services_list.72702c4a.png",c="/assets/Linkis0.X_newengine_architecture.76e9d9b8.png",g="/assets/Linkis1.0_newengine_architecture.e98645d5.png",u="/assets/Linkis1.0_newengine_initialization.6acbb6c3.png",p="/assets/Linkis1.0_engineconn_architecture.7d420481.png";const d={class:"markdown-body"},m=[i("h2",null,"1. Brief Description",-1 [...]
diff --git a/assets/distributed.6a61f64e.js b/assets/distributed.89154171.js
similarity index 99%
rename from assets/distributed.6a61f64e.js
rename to assets/distributed.89154171.js
index a7e7c4f..0e16b87 100644
--- a/assets/distributed.6a61f64e.js
+++ b/assets/distributed.89154171.js
@@ -1 +1 @@
-import{o as e,c as t,m as n,b as l,e as r,r as a,l as o,u as s}from"./vendor.12a5b039.js";const i={class:"markdown-body"},u=[n("<h1>Introduction to Distributed Deployment Scheme</h1><p>Linkis’s stand-alone deployment is simple, but it cannot be used in a production environment, because too many processes on the same server will make the server too stressful. The choice of deployment plan is related to the company’s user scale, user habits, and the number of simultaneous users of the clus [...]
+import{o as e,c as t,m as n,b as l,e as r,r as a,l as o,u as s}from"./vendor.1180558b.js";const i={class:"markdown-body"},u=[n("<h1>Introduction to Distributed Deployment Scheme</h1><p>Linkis’s stand-alone deployment is simple, but it cannot be used in a production environment, because too many processes on the same server will make the server too stressful. The choice of deployment plan is related to the company’s user scale, user habits, and the number of simultaneous users of the clus [...]
diff --git a/assets/download.0330f828.css b/assets/download.0330f828.css
new file mode 100644
index 0000000..50e115e
--- /dev/null
+++ b/assets/download.0330f828.css
@@ -0,0 +1 @@
+.download-page .download-list[data-v-b5a3c158]{padding:40px 0}.download-page .download-list .download-item[data-v-b5a3c158]{position:relative;padding:30px;margin-bottom:20px;border:1px solid rgba(15,18,34,.2);border-radius:8px;font-size:16px}.download-page .download-list .download-item .item-title[data-v-b5a3c158]{display:flex;justify-content:space-between;font-size:24px;line-height:34px}.download-page .download-list .download-item .item-title .release-date[data-v-b5a3c158]{color:#0f1222 [...]
diff --git a/assets/download.4f121175.js b/assets/download.4f121175.js
new file mode 100644
index 0000000..c4ac319
--- /dev/null
+++ b/assets/download.4f121175.js
@@ -0,0 +1 @@
+import{u as e}from"./utils.7ca2fb6d.js";import{s}from"./index.c319b82e.js";import{_ as a}from"./plugin-vue_export-helper.5a098b48.js";import{o as n,c as t,b as l,F as i,k as o,p as r,j as c,t as h,e as m}from"./vendor.1180558b.js";const u={info:{desc:'Use the links below to download the Apache Linkis (Incubating) Releases. See all Linkis releases in <a class="link" target="_blank" href="'+s.github.projectReleaseUrl+'">Github release page</a></p>'},list:[{version:"1.0.2",releaseDate:"2021 [...]
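The chunk above inlines the download page's data. A minimal sketch of the shape visible in the minified source; `site` stands in for the `s` config imported from index.c319b82e.js, and everything after the first entry's releaseDate is cut off in this archive view (the 2021-09-02 date is taken from the deleted copy of the same list further down), so only the visible fields are shown:

    // Sketch of the inlined download-page data in download.4f121175.js
    // (not the verbatim source; trailing fields are truncated above).
    const download = {
      info: {
        desc: 'Use the links below to download the Apache Linkis (Incubating) Releases. ' +
          'See all Linkis releases in <a class="link" target="_blank" href="' +
          site.github.projectReleaseUrl + '">Github release page</a>'
      },
      list: [
        { version: "1.0.2", releaseDate: "2021-09-02" }
      ]
    };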
diff --git a/assets/download.8c6e40f3.css b/assets/download.8c6e40f3.css
deleted file mode 100644
index e3dd1c7..0000000
--- a/assets/download.8c6e40f3.css
+++ /dev/null
@@ -1 +0,0 @@
-.download-page .download-list[data-v-977dadbe]{padding:40px 0}.download-page .download-list .download-item[data-v-977dadbe]{position:relative;padding:30px;margin-bottom:20px;border:1px solid rgba(15,18,34,.2);border-radius:8px;font-size:16px}.download-page .download-list .download-item .item-title[data-v-977dadbe]{display:flex;justify-content:space-between;font-size:24px;line-height:34px}.download-page .download-list .download-item .item-title .release-date[data-v-977dadbe]{color:#0f1222 [...]
diff --git a/assets/download.c3e47cb5.js b/assets/download.c3e47cb5.js
deleted file mode 100644
index 747f4fa..0000000
--- a/assets/download.c3e47cb5.js
+++ /dev/null
@@ -1 +0,0 @@
-import{u as e}from"./utils.7ca2fb6d.js";import{s}from"./index.8d1f9740.js";import{_ as a}from"./plugin-vue_export-helper.5a098b48.js";import{o as n,c as l,b as t,F as o,k as i,p as c,j as r,t as m,e as u}from"./vendor.12a5b039.js";const d={info:{desc:'Use the links below to download the Apache Linkis (Incubating) Releases. See all Linkis releases in <a class="desc-link" href="'+s.github.projectReleaseUrl+'">Github release page</a></p>'},list:[{version:"1.0.2",releaseDate:"2021-09-02",rel [...]
diff --git a/assets/engine_start_process.f86c8e8a.js b/assets/engine_start_process.f86c8e8a.js
new file mode 100644
index 0000000..cef8132
--- /dev/null
+++ b/assets/engine_start_process.f86c8e8a.js
@@ -0,0 +1 @@
+import{_ as n}from"./add_an_engineConn_flow_chart.5a1c06c5.js";import{o as e,c as i,b as t,e as l,r as a,l as o,u as r}from"./vendor.1180558b.js";const s={class:"markdown-body"},u=[t("h1",null,"How to add an EngineConn",-1),t("p",null,"Adding EngineConn is one of the core processes of the computing task preparation phase of Linkis computing governance. It mainly includes the following steps. First, client side (Entrance or user client) initiates a request for a new EngineConn to LinkisMa [...]
diff --git a/assets/engineconn-01.b4d20b76.png b/assets/engineconn-01.b4d20b76.png
new file mode 100644
index 0000000..d95da89
Binary files /dev/null and b/assets/engineconn-01.b4d20b76.png differ
diff --git a/assets/engineconn.efe3f534.js b/assets/engineconn.efe3f534.js
new file mode 100644
index 0000000..b1ed8c2
--- /dev/null
+++ b/assets/engineconn.efe3f534.js
@@ -0,0 +1 @@
+import{o as n,c as l,b as e,r as t,l as u,u as o}from"./vendor.1180558b.js";var i="/assets/engineconn-01.b4d20b76.png";const r={class:"markdown-body"},c=[e("h1",null,"EngineConn architecture design",-1),e("p",null,"EngineConn: Engine connector, a module that provides functions such as unified configuration management, context service, physical library, data source management, micro service management, and historical task query for other micro service modules.",-1),e("p",null,"EngineConn  [...]
diff --git a/assets/engineconn_manager.563abdf4.js b/assets/engineconn_manager.563abdf4.js
new file mode 100644
index 0000000..a149dbe
--- /dev/null
+++ b/assets/engineconn_manager.563abdf4.js
@@ -0,0 +1 @@
+import{o as n,c as l,b as e,r as t,l as r,u as o}from"./vendor.1180558b.js";const i={class:"markdown-body"},u=[e("h2",null,"EngineConnManager architecture design",-1),e("p",null,"EngineConnManager (ECM): EngineConn’s manager, provides engine lifecycle management, and reports load information and its own health status to RM.",-1),e("h3",null,"ECM architecture",-1),e("p",null,[e("img",{src:"/assets/ECM-01.bb056ebe.png",alt:""})],-1),e("h3",null,"Introduction to the second-level module",-1) [...]
diff --git a/assets/engineconn_plugin.0c1c8f49.js b/assets/engineconn_plugin.0c1c8f49.js
new file mode 100644
index 0000000..9f06ee6
--- /dev/null
+++ b/assets/engineconn_plugin.0c1c8f49.js
@@ -0,0 +1 @@
+import{o as n,c as e,b as l,r as t,l as o,u as i}from"./vendor.1180558b.js";var u="/assets/linkis-engineconn-plugin-01.ca85467f.png";const r={class:"markdown-body"},a=[l("h1",null,"EngineConnPlugin (ECP) architecture design",-1),l("p",null,"The engine connector plug-in is an implementation that can dynamically load the engine connector and reduce the occurrence of version conflicts. It has the characteristics of convenient expansion, fast refresh, and selective loading. In order to allow [...]
diff --git a/assets/engins.2a41b1a0.js b/assets/engins.a82546f2.js
similarity index 99%
rename from assets/engins.2a41b1a0.js
rename to assets/engins.a82546f2.js
index dcd9e64..ba8f9da 100644
--- a/assets/engins.2a41b1a0.js
+++ b/assets/engins.a82546f2.js
@@ -1 +1 @@
-import{o as n,c as e,m as i,r as o,l as t,u as r}from"./vendor.12a5b039.js";const s={class:"markdown-body"},a=[i('<h1>EngineConnPlugin installation document</h1><p>This article mainly introduces the use of Linkis EngineConnPlugins, mainly from the aspects of compilation and installation.</p><h2>1. Compilation and packaging of EngineConnPlugins</h2><p>After linkis1.0, the engine is managed by EngineConnManager, and the EngineConnPlugin (ECP) supports real-time effectiveness. In order to f [...]
+import{o as n,c as e,m as i,r as o,l as t,u as r}from"./vendor.1180558b.js";const s={class:"markdown-body"},a=[i('<h1>EngineConnPlugin installation document</h1><p>This article mainly introduces the use of Linkis EngineConnPlugins, mainly from the aspects of compilation and installation.</p><h2>1. Compilation and packaging of EngineConnPlugins</h2><p>After linkis1.0, the engine is managed by EngineConnManager, and the EngineConnPlugin (ECP) supports real-time effectiveness. In order to f [...]
diff --git a/assets/event.29571be3.js b/assets/event.29571be3.js
deleted file mode 100644
index fc9755c..0000000
--- a/assets/event.29571be3.js
+++ /dev/null
@@ -1 +0,0 @@
-import{_ as o}from"./index.8d1f9740.js";import{_ as n}from"./plugin-vue_export-helper.5a098b48.js";import{q as a,a as e,o as t,c as s,b as r,l as c,s as i,d as l,v as m}from"./vendor.12a5b039.js";const d={class:"ctn-block reading-area blog-ctn"},p={class:"main-content"},u={class:"main-content"};var _=n({computed:{optionComponent(){const n="./"+this.$route.query.id+"_"+("en"==localStorage.getItem("locale")?"en":"zh")+".md";return console.log(n),a((()=>o((()=>import(n)),[])))}}},[["render" [...]
diff --git a/assets/event.b677bf34.js b/assets/event.b677bf34.js
new file mode 100644
index 0000000..055b87e
--- /dev/null
+++ b/assets/event.b677bf34.js
@@ -0,0 +1 @@
+import{_ as o}from"./index.c319b82e.js";import{_ as e}from"./plugin-vue_export-helper.5a098b48.js";import{q as n,o as t,c as r,b as a,l as s,s as c}from"./vendor.1180558b.js";const i={class:"ctn-block reading-area blog-ctn"},l={class:"main-content"};var m=e({computed:{optionComponent(){const e="./"+this.$route.query.id+"_"+("en"==localStorage.getItem("locale")?"en":"zh")+".md";return console.log(e),n((()=>o((()=>import(e)),[])))}}},[["render",function(o,e,n,m,p,d){return t(),r("div",i,[a [...]
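The new event chunk builds the markdown path from the route's query id plus the locale stored in localStorage, then lazy-loads it; the `.md` path resolves because the build compiles markdown pages into components, as the chunks throughout this diff show. A de-minified sketch, assuming the minified helper `q` is Vue's defineAsyncComponent (the name is not visible here):

    // De-minified sketch of the computed loader in event.b677bf34.js.
    import { defineAsyncComponent } from "vue";
    export default {
      computed: {
        optionComponent() {
          // "en" unless the stored locale says otherwise, mirroring the bundle.
          const locale = localStorage.getItem("locale") === "en" ? "en" : "zh";
          const page = "./" + this.$route.query.id + "_" + locale + ".md";
          return defineAsyncComponent(() => import(page));
        }
      }
    };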
diff --git a/assets/features_bg.2b28bb9d.png b/assets/features_bg.2b28bb9d.png
new file mode 100644
index 0000000..b4615e0
Binary files /dev/null and b/assets/features_bg.2b28bb9d.png differ
diff --git a/assets/gateway.b29c03a6.js b/assets/gateway.b29c03a6.js
new file mode 100644
index 0000000..e145e6d
--- /dev/null
+++ b/assets/gateway.b29c03a6.js
@@ -0,0 +1 @@
+import{o as e,c as t,b as a,e as n,r,l as i,u as o}from"./vendor.1180558b.js";var l="/assets/gateway_server_global.9fae8e50.png",s="/assets/gateway_server_dispatcher.d2241ca2.png",c="/assets/gatway_websocket.3d3c7dfa.png";const u={class:"markdown-body"},d=[a("h2",null,"Gateway Architecture Design",-1),a("h4",null,"Brief",-1),a("p",null,"The Gateway is the primary entry point for Linkis to accept client and external requests, such as receiving job execution requests, and then forwarding t [...]
diff --git a/assets/gateway_server_dispatcher.d2241ca2.png b/assets/gateway_server_dispatcher.d2241ca2.png
new file mode 100644
index 0000000..8d182f3
Binary files /dev/null and b/assets/gateway_server_dispatcher.d2241ca2.png differ
diff --git a/assets/gateway_server_global.9fae8e50.png b/assets/gateway_server_global.9fae8e50.png
new file mode 100644
index 0000000..f0f468a
Binary files /dev/null and b/assets/gateway_server_global.9fae8e50.png differ
diff --git a/assets/gatway_websocket.3d3c7dfa.png b/assets/gatway_websocket.3d3c7dfa.png
new file mode 100644
index 0000000..0144416
Binary files /dev/null and b/assets/gatway_websocket.3d3c7dfa.png differ
diff --git a/assets/hive-config.b2dec89f.png b/assets/hive-config.b2dec89f.png
new file mode 100644
index 0000000..7f1bcfe
Binary files /dev/null and b/assets/hive-config.b2dec89f.png differ
diff --git a/assets/hive-run.6aa39a3f.png b/assets/hive-run.6aa39a3f.png
new file mode 100644
index 0000000..7aca9b3
Binary files /dev/null and b/assets/hive-run.6aa39a3f.png differ
diff --git a/assets/hive.c59e195d.js b/assets/hive.c59e195d.js
new file mode 100644
index 0000000..9f1366a
--- /dev/null
+++ b/assets/hive.c59e195d.js
@@ -0,0 +1 @@
+import{_ as n,a as e}from"./workflow.72652f4e.js";import{o as i,c as l,b as t,e as u,r as o,l as a,u as h}from"./vendor.1180558b.js";var s="/assets/hive-config.b2dec89f.png";const r={class:"markdown-body"},c=[t("h1",null,"Hive engine usage documentation",-1),t("p",null,"This article mainly introduces the configuration, deployment and use of Hive engine in Linkis1.0.",-1),t("h2",null,"1. Environment configuration before Hive engine use",-1),t("p",null,"If you want to use the hive engine o [...]
diff --git a/assets/HowToUse.212b1469.js b/assets/how_to_use.24a56e5f.js
similarity index 99%
rename from assets/HowToUse.212b1469.js
rename to assets/how_to_use.24a56e5f.js
index fc4505d..eca617f 100644
--- a/assets/HowToUse.212b1469.js
+++ b/assets/how_to_use.24a56e5f.js
@@ -1 +1 @@
-import{o as i,c as e,b as s,e as t,r as n,l as a,u as o}from"./vendor.12a5b039.js";var r="/assets/sparksql_run.115bb5a7.png";const l={class:"markdown-body"},u=[s("h1",null,"How to use Linkis?",-1),s("p",null,"        In order to meet the needs of different usage scenarios, Linkis provides a variety of usage and access methods, which can be summarized into three categories, namely Client-side use, Scriptis-side use, and DataSphere It is used on the Studio side, among which Scriptis and Da [...]
+import{o as i,c as e,b as s,e as t,r as n,l as a,u as o}from"./vendor.1180558b.js";var r="/assets/sparksql_run.115bb5a7.png";const l={class:"markdown-body"},u=[s("h1",null,"How to use Linkis?",-1),s("p",null,"        In order to meet the needs of different usage scenarios, Linkis provides a variety of usage and access methods, which can be summarized into three categories, namely Client-side use, Scriptis-side use, and DataSphere It is used on the Studio side, among which Scriptis and Da [...]
diff --git a/assets/index.11bb1268.js b/assets/index.11bb1268.js
new file mode 100644
index 0000000..a8bf971
--- /dev/null
+++ b/assets/index.11bb1268.js
@@ -0,0 +1 @@
+import{o as e,c as i,b as n,e as l,r as s,l as r,u as a}from"./vendor.1180558b.js";var c="/assets/Linkis1.0-architecture.be03428f.png";const o={class:"markdown-body"},t=[n("h2",null,"1. Document Structure",-1),n("p",null,"Linkis 1.0 divides all microservices into three categories: public enhancement services, computing governance services, and microservice governance services. The following figure shows the architecture of Linkis 1.0.",-1),n("p",null,[n("img",{src:c,alt:"Linkis1.0 Archit [...]
diff --git a/assets/index.187b32e3.js b/assets/index.187b32e3.js
new file mode 100644
index 0000000..0cfd150
--- /dev/null
+++ b/assets/index.187b32e3.js
@@ -0,0 +1 @@
+import{o as t,c as e,b as l,e as n,r as i,l as a,u as s}from"./vendor.1180558b.js";var o="/assets/Linkis_1.0_architecture.ba18dcdc.png",r="/assets/wedatasphere_stack_Linkis.efef3aa3.png",u="/assets/wedatasphere_contact_01.ce92bdb6.png";const c={class:"markdown-body"},g=[l("h1",null,"Introduction",-1),l("p",null,"Linkis builds a layer of computation middleware between upper applications and underlying engines. By using standard interfaces such as REST/WS/JDBC provided by Linkis, the upper [...]
diff --git a/assets/index.2b54ad83.css b/assets/index.2b54ad83.css
new file mode 100644
index 0000000..3b05c13
--- /dev/null
+++ b/assets/index.2b54ad83.css
@@ -0,0 +1 @@
+*{box-sizing:border-box}body,ul,li,ol,h1,h2,h3,h4,h5,h6,p{margin:0;padding:0}body{font-size:14px;color:#4a4a4a;line-height:26px;background:#ffffff}ul,li,ol{list-style:none}a{text-decoration:none}a:link,a:visited{color:#0f1222}.ctn-block{width:1200px;padding:0 20px;margin:0 auto}.text-center{text-align:center}.reading-area{display:flex;padding:60px 0;min-height:600px}.reading-area .main-content{width:900px;padding:30px}.reading-area .side-bar{flex:1;padding:18px 0;border-left:1px solid #e [...]
diff --git a/assets/index.2da1dc18.js b/assets/index.2da1dc18.js
deleted file mode 100644
index e255461..0000000
--- a/assets/index.2da1dc18.js
+++ /dev/null
@@ -1 +0,0 @@
-import{u as e}from"./utils.7ca2fb6d.js";import{_ as i}from"./plugin-vue_export-helper.5a098b48.js";import{a as n,o as l,c as t,b as s,d as a,F as o,k as c,l as r,w as d,e as u,t as k}from"./vendor.12a5b039.js";const m={info:{},list:[{title:"Deployment",link:"/docs/deploy/linkis",children:[{title:"Quick Deploy",link:"/docs/deploy/linkis"},{title:"EngineConnPlugin installation",link:"/docs/deploy/engins"},{title:"Cluster Deployment",link:"/docs/deploy/distributed"},{title:"Installation Hie [...]
diff --git a/assets/index.c93f08c9.js b/assets/index.491f620b.js
similarity index 97%
rename from assets/index.c93f08c9.js
rename to assets/index.491f620b.js
index bce4957..13b75fc 100644
--- a/assets/index.c93f08c9.js
+++ b/assets/index.491f620b.js
@@ -1 +1 @@
-import{_ as e}from"./plugin-vue_export-helper.5a098b48.js";import{a,o as n,c as o,b as t,e as s,t as i,F as r,k as c,d as l,w as d,p as u,j as p}from"./vendor.12a5b039.js";const g=[{id:"AddEngineConn",title:"Born at China’s WeBank, now incubating in the ASF - Introducing Apache Linkis",author:"enjoyyin",createTime:"2021-10-14",summary:"Guangsheng Chen, the founder of Apache EventMesh, has been buzzing since the project was welcomed into the Apache Software Foundation (ASF)’s incubator in [...]
+import{_ as e}from"./plugin-vue_export-helper.5a098b48.js";import{a,o as n,c as o,b as t,e as s,t as i,F as r,k as c,d as l,w as d,p as u,j as p}from"./vendor.1180558b.js";const g=[{id:"AddEngineConn",title:"Born at China’s WeBank, now incubating in the ASF - Introducing Apache Linkis",author:"enjoyyin",createTime:"2021-10-14",summary:"Guangsheng Chen, the founder of Apache EventMesh, has been buzzing since the project was welcomed into the Apache Software Foundation (ASF)’s incubator in [...]
diff --git a/assets/index.5a6d4e60.js b/assets/index.5a6d4e60.js
deleted file mode 100644
index 8051f5e..0000000
--- a/assets/index.5a6d4e60.js
+++ /dev/null
@@ -1 +0,0 @@
-import{o as e,c as a,b as n,e as o,r,l as t,u as i}from"./vendor.12a5b039.js";var l="/assets/linkis-exception-01.a30b0cae.png",s="/assets/linkis-exception-02.c5d295a9.png",c="/assets/hive-config-01.e5d22d71.png",p="/assets/linkis-exception-03.8fc2f10f.png",u="/assets/page-show-01.f6ac5799.png",h="/assets/db-config-01.5aa0a782.png",d="/assets/linkis-exception-04.bb6736c1.png",g="/assets/shell-error-01.2e9d62b8.png",v="/assets/linkis-exception-05.9b7af564.png",k="/assets/page-show-02.9d59c [...]
diff --git a/assets/index.6baed6d3.css b/assets/index.6baed6d3.css
new file mode 100644
index 0000000..3eba442
--- /dev/null
+++ b/assets/index.6baed6d3.css
@@ -0,0 +1 @@
+*[data-v-39cd9a1d]{box-sizing:border-box}body[data-v-39cd9a1d],ul[data-v-39cd9a1d],li[data-v-39cd9a1d],ol[data-v-39cd9a1d],h1[data-v-39cd9a1d],h2[data-v-39cd9a1d],h3[data-v-39cd9a1d],h4[data-v-39cd9a1d],h5[data-v-39cd9a1d],h6[data-v-39cd9a1d],p[data-v-39cd9a1d]{margin:0;padding:0}body[data-v-39cd9a1d]{font-size:14px;color:#4a4a4a;line-height:26px;background:#ffffff}ul[data-v-39cd9a1d],li[data-v-39cd9a1d],ol[data-v-39cd9a1d]{list-style:none}a[data-v-39cd9a1d]{text-decoration:none}a[data-v [...]
diff --git a/assets/index.77f4f836.css b/assets/index.77f4f836.css
deleted file mode 100644
index a29654a..0000000
--- a/assets/index.77f4f836.css
+++ /dev/null
@@ -1 +0,0 @@
-*[data-v-16a56042]{box-sizing:border-box}body[data-v-16a56042],ul[data-v-16a56042],li[data-v-16a56042],ol[data-v-16a56042],h1[data-v-16a56042],h2[data-v-16a56042],h3[data-v-16a56042],h4[data-v-16a56042],h5[data-v-16a56042],h6[data-v-16a56042],p[data-v-16a56042]{margin:0;padding:0}body[data-v-16a56042]{font-size:14px;color:#4a4a4a;line-height:26px;background:#ffffff}ul[data-v-16a56042],li[data-v-16a56042],ol[data-v-16a56042]{list-style:none}a[data-v-16a56042]{text-decoration:none}a[data-v [...]
diff --git a/assets/index.82f016e4.css b/assets/index.82f016e4.css
deleted file mode 100644
index b4650cc..0000000
--- a/assets/index.82f016e4.css
+++ /dev/null
@@ -1 +0,0 @@
-*{box-sizing:border-box}body,ul,li,ol,h1,h2,h3,h4,h5,h6,p{margin:0;padding:0}body{font-size:14px;color:#4a4a4a;line-height:26px;background:#ffffff}ul,li,ol{list-style:none}a{text-decoration:none}a:visited{color:#0f1222}.ctn-block{width:1200px;padding:0 20px;margin:0 auto}.text-center{text-align:center}.reading-area{display:flex;padding:60px 0;min-height:600px}.reading-area .main-content{width:900px;padding:30px}.reading-area .side-bar{flex:1;padding:18px 0;border-left:1px solid #eaecef}. [...]
diff --git a/assets/index.8d1f9740.js b/assets/index.8d1f9740.js
deleted file mode 100644
index 41329ed..0000000
--- a/assets/index.8d1f9740.js
+++ /dev/null
@@ -1 +0,0 @@
-import{r as e,a as t,o as n,c as a,b as o,d as s,w as i,n as r,t as l,u as c,e as u,f as m,g as p,h as d,i as h}from"./vendor.12a5b039.js";!function(){const e=document.createElement("link").relList;if(!(e&&e.supports&&e.supports("modulepreload"))){for(const e of document.querySelectorAll('link[rel="modulepreload"]'))t(e);new MutationObserver((e=>{for(const n of e)if("childList"===n.type)for(const e of n.addedNodes)"LINK"===e.tagName&&"modulepreload"===e.rel&&t(e)})).observe(document,{chi [...]
diff --git a/assets/index.97098d19.js b/assets/index.97098d19.js
new file mode 100644
index 0000000..7b6fcaa
--- /dev/null
+++ b/assets/index.97098d19.js
@@ -0,0 +1 @@
+import{o as e,c as n,m as t,b as i,e as o,r,l as s,u as l}from"./vendor.1180558b.js";const a={class:"markdown-body"},c=[t('<h2>Tuning and troubleshooting</h2><p>In the process of preparing for the release of a version, we will try our best to find deployment and installation problems in advance and then repair them. Because everyone has some differences in the deployment environments, we sometimes have no way to predict all the problems and solutions in advance. However, due to the exist [...]
diff --git a/assets/index.9c41b9ea.js b/assets/index.9c41b9ea.js
new file mode 100644
index 0000000..31cd764
--- /dev/null
+++ b/assets/index.9c41b9ea.js
@@ -0,0 +1 @@
+import{o as t,c as e,m as a,r as n,l as i,u as r}from"./vendor.1180558b.js";const s={class:"markdown-body"},d=[a('<h2>1 Overview</h2><p>        Linkis, as a powerful computing middleware, can easily interface with different computing engines. By shielding the usage details of different computing engines, it provides a The unified use interface greatly reduces the operation and maintenance cost of deploying and applying Linkis’s big data platform. At present, Linkis has docked several mai [...]
diff --git a/assets/index.9fb4d9d9.js b/assets/index.9fb4d9d9.js
new file mode 100644
index 0000000..084f062
--- /dev/null
+++ b/assets/index.9fb4d9d9.js
@@ -0,0 +1 @@
+import{o as e,c as a,b as n,e as o,r,l as t,u as i}from"./vendor.1180558b.js";var l="/assets/linkis-exception-01.a30b0cae.png",s="/assets/linkis-exception-02.c5d295a9.png",c="/assets/hive-config-01.e5d22d71.png",p="/assets/linkis-exception-03.8fc2f10f.png",u="/assets/page-show-01.f6ac5799.png",h="/assets/db-config-01.5aa0a782.png",d="/assets/linkis-exception-04.bb6736c1.png",g="/assets/shell-error-01.2e9d62b8.png",v="/assets/linkis-exception-05.9b7af564.png",m="/assets/page-show-02.9d59c [...]
diff --git a/assets/index.b0fb8393.js b/assets/index.b0fb8393.js
new file mode 100644
index 0000000..5d1b234
--- /dev/null
+++ b/assets/index.b0fb8393.js
@@ -0,0 +1 @@
+import{o as e,c as n,b as s,r as l,l as a,u as i}from"./vendor.1180558b.js";const o={class:"markdown-body"},t=[s("h1",null,"Overview",-1),s("p",null,"        Linkis considered the scalability of the access method at the beginning of the design. For different access scenarios, Linkis provides front-end access and SDK access. HTTP and WebSocket interfaces are also provided on the basis of front-end interfaces. If you are interested in accessing and using Linkis, you can refer to the follow [...]
diff --git a/assets/index.ba4cbe23.js b/assets/index.ba4cbe23.js
new file mode 100644
index 0000000..6adeaa3
--- /dev/null
+++ b/assets/index.ba4cbe23.js
@@ -0,0 +1 @@
+import{s}from"./index.c319b82e.js";import{_ as e}from"./plugin-vue_export-helper.5a098b48.js";import{r as A,o as a,c as t,b as c,t as i,u as l,p as g,j as m,e as n}from"./vendor.1180558b.js";const r=s=>(g("data-v-39cd9a1d"),s=s(),m(),s),o={class:"home-page slogan"},d={class:"ctn-block"},p={class:"banner text-center"},I=r((()=>c("h1",{class:"home-title"},[c("span",{class:"apache"},"Apache"),n(),c("span",{class:"linkis"},"Linkis"),n(),c("span",{class:"badge"},"Incubating")],-1))),b=["inner [...]
diff --git a/assets/index.c319b82e.js b/assets/index.c319b82e.js
new file mode 100644
index 0000000..d9a56d6
--- /dev/null
+++ b/assets/index.c319b82e.js
@@ -0,0 +1 @@
+import{r as e,a as n,o as t,c as a,b as i,d as o,w as r,e as s,t as c,n as l,u as m,f as h,g as p,h as u,i as d}from"./vendor.1180558b.js";!function(){const e=document.createElement("link").relList;if(!(e&&e.supports&&e.supports("modulepreload"))){for(const e of document.querySelectorAll('link[rel="modulepreload"]'))n(e);new MutationObserver((e=>{for(const t of e)if("childList"===t.type)for(const e of t.addedNodes)"LINK"===e.tagName&&"modulepreload"===e.rel&&n(e)})).observe(document,{chi [...]
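The entry chunk opens with Vite's modulepreload polyfill, visible before the truncation: if the browser does not support `<link rel="modulepreload">`, it handles the links already in the document and then watches for new ones. A condensed sketch of the visible logic; `processPreload` names the polyfill's fetch helper (minified above), and the MutationObserver options are cut off in the archive view, so the childList/subtree form is an assumption:

    // Condensed from the visible head of index.c319b82e.js: feature-detect
    // modulepreload, and fall back to fetching preload targets manually.
    const relList = document.createElement("link").relList;
    if (!(relList && relList.supports && relList.supports("modulepreload"))) {
      for (const link of document.querySelectorAll('link[rel="modulepreload"]'))
        processPreload(link); // helper name assumed, minified in the bundle
      new MutationObserver((mutations) => {
        for (const m of mutations)
          if (m.type === "childList")
            for (const node of m.addedNodes)
              if (node.tagName === "LINK" && node.rel === "modulepreload")
                processPreload(node);
      }).observe(document, { childList: true, subtree: true }); // options assumed
    }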
diff --git a/assets/index.c51fb506.js b/assets/index.c51fb506.js
deleted file mode 100644
index 9677e28..0000000
--- a/assets/index.c51fb506.js
+++ /dev/null
@@ -1 +0,0 @@
-import{s}from"./index.8d1f9740.js";import{_ as e}from"./plugin-vue_export-helper.5a098b48.js";import{o as a,c as t,b as c,t as i,u as l,p as m,j as n,e as r}from"./vendor.12a5b039.js";const A=[{url:"招联消费金融有限公司.png"},{url:"平安.png"},{url:"荣耀.png"},{url:"360.png"},{url:"天翼云.png"},{url:"理想汽车.png"},{url:"萨摩耶云.png"},{url:"蔚来汽车.jpg"},{url:"T3出行.png"},{url:"百望云.png"},{url:"海康威视.png"},{url:"立创商城.png"},{url:"红象云腾.png"},{url:"艾佳生活.jpg"},{url:"顶点软件.png"},{url:"97物联.png"},{url:"航天信息.png"},{url:"boss直 [...]
diff --git a/assets/index.c935709d.js b/assets/index.c935709d.js
new file mode 100644
index 0000000..19c29f4
--- /dev/null
+++ b/assets/index.c935709d.js
@@ -0,0 +1 @@
+import{u as e}from"./utils.7ca2fb6d.js";import{_ as i}from"./plugin-vue_export-helper.5a098b48.js";import{a as n,o as t,c as l,b as c,d as o,F as a,k as r,l as s,w as d,e as u,t as k}from"./vendor.1180558b.js";const g={info:{},list:[{title:"Apache Linkis Introduction",link:"/docs"},{title:"Deployment",link:"/docs/deploy/linkis",children:[{title:"Quick Deploy",link:"/docs/deploy/linkis"},{title:"EngineConnPlugin installation",link:"/docs/deploy/engins"},{title:"Cluster Deployment",link:"/ [...]
diff --git a/assets/main.3104c8a7.js b/assets/index.cd1b8a2e.js
similarity index 87%
rename from assets/main.3104c8a7.js
rename to assets/index.cd1b8a2e.js
index 5994b56..705b1aa 100644
--- a/assets/main.3104c8a7.js
+++ b/assets/index.cd1b8a2e.js
@@ -1 +1 @@
-import{o as e,c as s,b as a,r as o,l as t,u as l}from"./vendor.12a5b039.js";const r={class:"markdown-body"},n=[a("h1",null,"部署文档english",-1)],u={setup:(a,{expose:o})=>(o({frontmatter:{}}),(a,o)=>(e(),s("div",r,n)))},d={class:"markdown-body"},c=[a("h1",null,"部署文档",-1)],m={setup:(a,{expose:o})=>(o({frontmatter:{}}),(a,o)=>(e(),s("div",d,c)))},p={setup(s){const a=o(localStorage.getItem("locale")||"en");return(s,o)=>"en"===a.value?(e(),t(l(u),{key:0})):(e(),t(l(m),{key:1}))}};export{p as default};
+import{o as e,c as s,b as a,r as o,l as t,u as l}from"./vendor.1180558b.js";const r={class:"markdown-body"},n=[a("h1",null,"部署文档english",-1)],u={setup:(a,{expose:o})=>(o({frontmatter:{}}),(a,o)=>(e(),s("div",r,n)))},d={class:"markdown-body"},c=[a("h1",null,"部署文档",-1)],m={setup:(a,{expose:o})=>(o({frontmatter:{}}),(a,o)=>(e(),s("div",d,c)))},p={setup(s){const a=o(localStorage.getItem("locale")||"en");return(s,o)=>"en"===a.value?(e(),t(l(u),{key:0})):(e(),t(l(m),{key:1}))}};export{p as default};
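index.cd1b8a2e.js (renamed from main.3104c8a7.js) shows the site's per-page locale switch in full: two inline stub components, titled "部署文档english" and "部署文档" ("Deployment document"), and a wrapper that renders one of them from the locale in localStorage. Un-minified, the wrapper is roughly the following; the render-function form is an equivalent sketch, not the exact compiled output:

    // Un-minified sketch of the locale wrapper in index.cd1b8a2e.js.
    import { h, ref } from "vue";
    const EnDoc = { render: () => h("h1", "部署文档english") }; // English stub
    const ZhDoc = { render: () => h("h1", "部署文档") };        // Chinese stub
    export default {
      setup() {
        const locale = ref(localStorage.getItem("locale") || "en");
        return () => (locale.value === "en" ? h(EnDoc) : h(ZhDoc));
      }
    };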
diff --git a/assets/jdbc-conf.7cf06ba9.js b/assets/jdbc-conf.7cf06ba9.js
new file mode 100644
index 0000000..8af39f1
--- /dev/null
+++ b/assets/jdbc-conf.7cf06ba9.js
@@ -0,0 +1 @@
+var s="/assets/jdbc-conf.9520dcb1.png";export{s as _};
diff --git a/assets/jdbc-conf.9520dcb1.png b/assets/jdbc-conf.9520dcb1.png
new file mode 100644
index 0000000..605a006
Binary files /dev/null and b/assets/jdbc-conf.9520dcb1.png differ
diff --git a/assets/jdbc-run.b39db252.png b/assets/jdbc-run.b39db252.png
new file mode 100644
index 0000000..2e0f47e
Binary files /dev/null and b/assets/jdbc-run.b39db252.png differ
diff --git a/assets/jdbc.4fc1629f.js b/assets/jdbc.4fc1629f.js
new file mode 100644
index 0000000..d51af9e
--- /dev/null
+++ b/assets/jdbc.4fc1629f.js
@@ -0,0 +1 @@
+import{_ as n}from"./jdbc-conf.7cf06ba9.js";import{o as e,c as i,b as l,e as t,r as o,l as s,u as a}from"./vendor.1180558b.js";const u={class:"markdown-body"},r=[l("h1",null,"JDBC engine usage documentation",-1),l("p",null,"This article mainly introduces the configuration, deployment and use of JDBC engine in Linkis1.0.",-1),l("h2",null,"1. Environment configuration before using the JDBC engine",-1),l("p",null,"If you want to use the JDBC engine on your server, you need to prepare the JD [...]
diff --git a/assets/JobSubmission.cf4b12e7.js b/assets/job_submission.5703dc56.js
similarity index 99%
rename from assets/JobSubmission.cf4b12e7.js
rename to assets/job_submission.5703dc56.js
index d8aa54c..7b18059 100644
--- a/assets/JobSubmission.cf4b12e7.js
+++ b/assets/job_submission.5703dc56.js
@@ -1 +1 @@
-import{o as e,c as n,b as t,e as i,r as a,l,u as s}from"./vendor.12a5b039.js";var o="/assets/submission.22e30fbd.png",r="/assets/orchestrate.b395b673.png",c="/assets/physical_tree.6d05f37c.png",u="/assets/result_acquisition.ccd9e593.png";const h={class:"markdown-body"},p=[t("h1",null,"Job submission, preparation and execution process",-1),t("p",null,"The submission and execution of computing tasks (Job) is the core capability provided by Linkis. It almost colludes with all modules in the [...]
+import{o as e,c as n,b as t,e as i,r as a,l,u as s}from"./vendor.1180558b.js";var o="/assets/submission.22e30fbd.png",r="/assets/orchestrate.b395b673.png",c="/assets/physical_tree.6d05f37c.png",u="/assets/result_acquisition.ccd9e593.png";const h={class:"markdown-body"},p=[t("h1",null,"Job submission, preparation and execution process",-1),t("p",null,"The submission and execution of computing tasks (Job) is the core capability provided by Linkis. It almost colludes with all modules in the [...]
diff --git a/assets/label-manager-01.530390e5.png b/assets/label-manager-01.530390e5.png
new file mode 100644
index 0000000..28bf7d9
Binary files /dev/null and b/assets/label-manager-01.530390e5.png differ
diff --git a/assets/label_manager.6b95dcc1.js b/assets/label_manager.6b95dcc1.js
new file mode 100644
index 0000000..3f262ee
--- /dev/null
+++ b/assets/label_manager.6b95dcc1.js
@@ -0,0 +1 @@
+import{o as e,c as l,b as a,e as t,r as n,l as i,u as s}from"./vendor.1180558b.js";var o="/assets/label_manager_global.91aa80e7.png",r="/assets/label_manager_scorer.fd531e4a.png";const c={class:"markdown-body"},u=[a("h2",null,"LabelManager architecture design",-1),a("h4",null,"Brief description",-1),a("p",null,"LabelManager is a functional module in Linkis that provides label services to upper-level applications. It uses label technology to manage cluster resource allocation, service nod [...]
diff --git a/assets/label_manager_builder.caf90f90.png b/assets/label_manager_builder.caf90f90.png
new file mode 100644
index 0000000..4896981
Binary files /dev/null and b/assets/label_manager_builder.caf90f90.png differ
diff --git a/assets/label_manager_global.91aa80e7.png b/assets/label_manager_global.91aa80e7.png
new file mode 100644
index 0000000..ca4151a
Binary files /dev/null and b/assets/label_manager_global.91aa80e7.png differ
diff --git a/assets/label_manager_scorer.fd531e4a.png b/assets/label_manager_scorer.fd531e4a.png
new file mode 100644
index 0000000..7213b0b
Binary files /dev/null and b/assets/label_manager_scorer.fd531e4a.png differ
diff --git a/assets/linkis-computation-gov-01.6035615d.png b/assets/linkis-computation-gov-01.6035615d.png
new file mode 100644
index 0000000..4f57ce3
Binary files /dev/null and b/assets/linkis-computation-gov-01.6035615d.png differ
diff --git a/assets/linkis-computation-gov-02.43fad13f.png b/assets/linkis-computation-gov-02.43fad13f.png
new file mode 100644
index 0000000..6cf7025
Binary files /dev/null and b/assets/linkis-computation-gov-02.43fad13f.png differ
diff --git a/assets/linkis-contextservice-01.3cb67fd1.png b/assets/linkis-contextservice-01.3cb67fd1.png
new file mode 100644
index 0000000..22e0071
Binary files /dev/null and b/assets/linkis-contextservice-01.3cb67fd1.png differ
diff --git a/assets/linkis-contextservice-02.321a8427.png b/assets/linkis-contextservice-02.321a8427.png
new file mode 100644
index 0000000..b47f337
Binary files /dev/null and b/assets/linkis-contextservice-02.321a8427.png differ
diff --git a/assets/linkis-engineconn-plugin-01.ca85467f.png b/assets/linkis-engineconn-plugin-01.ca85467f.png
new file mode 100644
index 0000000..2d2d134
Binary files /dev/null and b/assets/linkis-engineconn-plugin-01.ca85467f.png differ
diff --git a/assets/linkis-intro-01.71fb2144.png b/assets/linkis-intro-01.71fb2144.png
new file mode 100644
index 0000000..60b575d
Binary files /dev/null and b/assets/linkis-intro-01.71fb2144.png differ
diff --git a/assets/linkis-intro-03.65d1a7b1.png b/assets/linkis-intro-03.65d1a7b1.png
new file mode 100644
index 0000000..79fdcd3
Binary files /dev/null and b/assets/linkis-intro-03.65d1a7b1.png differ
diff --git a/assets/linkis-manager-01.fb5e443a.png b/assets/linkis-manager-01.fb5e443a.png
new file mode 100644
index 0000000..ab0744a
Binary files /dev/null and b/assets/linkis-manager-01.fb5e443a.png differ
diff --git a/assets/linkis-microservice-gov-01.2e1292b0.png b/assets/linkis-microservice-gov-01.2e1292b0.png
new file mode 100644
index 0000000..0287117
Binary files /dev/null and b/assets/linkis-microservice-gov-01.2e1292b0.png differ
diff --git a/assets/linkis-microservice-gov-03.9ece64b6.png b/assets/linkis-microservice-gov-03.9ece64b6.png
new file mode 100644
index 0000000..8a2763f
Binary files /dev/null and b/assets/linkis-microservice-gov-03.9ece64b6.png differ
diff --git a/assets/linkis-publicservice-01.bc9338bf.png b/assets/linkis-publicservice-01.bc9338bf.png
new file mode 100644
index 0000000..befd7de
Binary files /dev/null and b/assets/linkis-publicservice-01.bc9338bf.png differ
diff --git a/assets/linkis.cdbb993f.js b/assets/linkis.cdbb993f.js
new file mode 100644
index 0000000..c9d3758
--- /dev/null
+++ b/assets/linkis.cdbb993f.js
@@ -0,0 +1 @@
+import{o as n,c as l,b as e,e as t,r as o,l as a,u as s}from"./vendor.1180558b.js";var i="/assets/Linkis1.0_combined_eureka.dad2589e.png";const u={class:"markdown-body"},r=[e("h1",null,"Linkis1.0 Deployment document",-1),e("h2",null,"Notes",-1),e("p",null,[t("If you are new to Linkis, you can ignore this chapter, however, if you are already a Linkis user, we recommend you reading the following article before installing or upgrading: "),e("a",{href:"/#/docs/architecture/difference"},"Brie [...]
diff --git a/assets/linkis.d0790396.js b/assets/linkis.d0790396.js
deleted file mode 100644
index f21cc5d..0000000
--- a/assets/linkis.d0790396.js
+++ /dev/null
@@ -1 +0,0 @@
-import{o as n,c as l,b as e,e as t,r as o,l as a,u as s}from"./vendor.12a5b039.js";var i="/assets/Linkis1.0_combined_eureka.dad2589e.png";const u={class:"markdown-body"},r=[e("h1",null,"Linkis1.0 Deployment document",-1),e("h2",null,"Notes",-1),e("p",null,[t("If you are new to Linkis, you can ignore this chapter, however, if you are already a Linkis user, we recommend you reading the following article before installing or upgrading: "),e("a",{href:"#/docs/architecture/DifferenceBetween1. [...]
diff --git a/assets/CliManual.8440dc3f.js b/assets/linkis_cli.56d856c4.js
similarity index 99%
rename from assets/CliManual.8440dc3f.js
rename to assets/linkis_cli.56d856c4.js
index 9f950c2..506b60f 100644
--- a/assets/CliManual.8440dc3f.js
+++ b/assets/linkis_cli.56d856c4.js
@@ -1 +1 @@
-import{o as t,c as e,m as d,r as a,l as i,u as o}from"./vendor.12a5b039.js";const r={class:"markdown-body"},n=[d('<h1>Linkis-Cli usage documentation</h1><h2>Introduction</h2><p>Linkis-Cli is a shell command line program used to submit tasks to Linkis.</p><h2>Basic case</h2><p>You can simply submit a task to Linkis by referring to the example below</p><p>The first step is to check whether the default configuration file <code>linkis-cli.properties</code> exists in the conf/ directory, and  [...]
+import{o as t,c as e,m as d,r as a,l as i,u as o}from"./vendor.1180558b.js";const r={class:"markdown-body"},n=[d('<h1>Linkis-Cli usage documentation</h1><h2>Introduction</h2><p>Linkis-Cli is a shell command line program used to submit tasks to Linkis.</p><h2>Basic case</h2><p>You can simply submit a task to Linkis by referring to the example below</p><p>The first step is to check whether the default configuration file <code>linkis-cli.properties</code> exists in the conf/ directory, and  [...]
diff --git a/assets/logo.fb11029b.png b/assets/logo.fb11029b.png
deleted file mode 100644
index 9ece550..0000000
Binary files a/assets/logo.fb11029b.png and /dev/null differ
diff --git a/assets/manager.6973d707.js b/assets/manager.6973d707.js
new file mode 100644
index 0000000..c5a67e3
--- /dev/null
+++ b/assets/manager.6973d707.js
@@ -0,0 +1 @@
+import{o as n,c as l,b as e,e as t,r as a,l as i,u}from"./vendor.1180558b.js";var r="/assets/linkis-manager-01.fb5e443a.png",o="/assets/app-manager-03.5aaff6ed.png",s="/assets/resource-manager-01.86e09124.png";const g={class:"markdown-body"},c=[e("h1",null,"LinkisManager Architecture Design",-1),e("p",null,"        As an independent microservice of Linkis, LinkisManager provides AppManager (application management), ResourceManager (resource management), and LabelManager (label management [...]
diff --git a/assets/microservice_governance.e72bfd46.js b/assets/microservice_governance.e72bfd46.js
new file mode 100644
index 0000000..1c10533
--- /dev/null
+++ b/assets/microservice_governance.e72bfd46.js
@@ -0,0 +1 @@
+import{o as e,c as i,b as r,e as n,r as s,l as t,u as a}from"./vendor.1180558b.js";var o="/assets/linkis-microservice-gov-01.2e1292b0.png",l="/assets/linkis-microservice-gov-03.9ece64b6.png";const c={class:"markdown-body"},u=[r("h2",null,[r("strong",null,"Background")],-1),r("p",null,"Microservice governance includes three main microservices: Gateway, Eureka and Open Feign. It is used to solve Linkis’s service discovery and registration, unified gateway, request forwarding, inter-service [...]
diff --git a/assets/mobtech.b333dc91.png b/assets/mobtech.b333dc91.png
deleted file mode 100644
index 080268d..0000000
Binary files a/assets/mobtech.b333dc91.png and /dev/null differ
diff --git a/assets/mobtech.e2567e09.png b/assets/mobtech.e2567e09.png
new file mode 100644
index 0000000..a913f12
Binary files /dev/null and b/assets/mobtech.e2567e09.png differ
diff --git a/assets/orchestration.e1c8bd97.png b/assets/orchestration.e1c8bd97.png
new file mode 100644
index 0000000..77d03b0
Binary files /dev/null and b/assets/orchestration.e1c8bd97.png differ
diff --git a/assets/public-enhencement-architecture.6597436f.png b/assets/public-enhencement-architecture.6597436f.png
new file mode 100644
index 0000000..35f4a6c
Binary files /dev/null and b/assets/public-enhencement-architecture.6597436f.png differ
diff --git a/assets/public_enhancement.626e701e.js b/assets/public_enhancement.626e701e.js
new file mode 100644
index 0000000..229307c
--- /dev/null
+++ b/assets/public_enhancement.626e701e.js
@@ -0,0 +1 @@
+import{o as l,c as n,b as e,r as t,l as u,u as r}from"./vendor.1180558b.js";var a="/assets/public-enhencement-architecture.6597436f.png";const o={class:"markdown-body"},i=[e("h1",null,"PublicEnhencementService (PS) architecture design",-1),e("p",null,"PublicEnhancementService (PS): Public enhancement service, a module that provides functions such as unified configuration management, context service, physical library, data source management, microservice management, and historical task qu [...]
diff --git a/assets/public_service.8f4dd101.js b/assets/public_service.8f4dd101.js
new file mode 100644
index 0000000..9581ac3
--- /dev/null
+++ b/assets/public_service.8f4dd101.js
@@ -0,0 +1 @@
+import{o as e,c as l,b as n,r as i,l as a,u as s}from"./vendor.1180558b.js";var t="/assets/linkis-publicservice-01.bc9338bf.png";const o={class:"markdown-body"},r=[n("h2",null,[n("strong",null,"Background")],-1),n("p",null,"PublicService is a comprehensive service composed of multiple sub-modules such as “configuration”, “jobhistory”, “udf”, “variable”, etc. Linkis 1.0 added label management based on version 0.9. Linkis doesn’t need to set the parameters every time during the execution o [...]
diff --git a/assets/pyspakr-run.9c36d9ef.png b/assets/pyspakr-run.9c36d9ef.png
new file mode 100644
index 0000000..fd0cf54
Binary files /dev/null and b/assets/pyspakr-run.9c36d9ef.png differ
diff --git a/assets/python-run.25fd075c.png b/assets/python-run.25fd075c.png
new file mode 100644
index 0000000..8b1c97c
Binary files /dev/null and b/assets/python-run.25fd075c.png differ
diff --git a/assets/python.17efbf15.js b/assets/python.17efbf15.js
new file mode 100644
index 0000000..d16cf1b
--- /dev/null
+++ b/assets/python.17efbf15.js
@@ -0,0 +1 @@
+import{_ as n}from"./jdbc-conf.7cf06ba9.js";import{o as t,c as e,b as o,e as l,r as i,l as h,u}from"./vendor.1180558b.js";var a="/assets/python-run.25fd075c.png";const s={class:"markdown-body"},p=[o("h1",null,"Python engine usage documentation",-1),o("p",null,"This article mainly introduces the configuration, deployment and use of the Python engine in Linkis1.0.",-1),o("h2",null,"1. Environment configuration before using Python engine",-1),o("p",null,"If you want to use the python engine [...]
diff --git a/assets/queue-set.3007a0ca.png b/assets/queue-set.3007a0ca.png
new file mode 100644
index 0000000..e818025
Binary files /dev/null and b/assets/queue-set.3007a0ca.png differ
diff --git a/assets/resource-manager-01.86e09124.png b/assets/resource-manager-01.86e09124.png
new file mode 100644
index 0000000..f8efda1
Binary files /dev/null and b/assets/resource-manager-01.86e09124.png differ
diff --git a/assets/resource_manager.ce0e10f4.js b/assets/resource_manager.ce0e10f4.js
new file mode 100644
index 0000000..ad810ad
--- /dev/null
+++ b/assets/resource_manager.ce0e10f4.js
@@ -0,0 +1 @@
+import{o as e,c as l,b as n,e as r,r as t,l as o,u as a}from"./vendor.1180558b.js";var u="/assets/linkis-manager-01.fb5e443a.png",s="/assets/resource-manager-01.86e09124.png";const i={class:"markdown-body"},c=[n("h2",null,"1. Background",-1),n("p",null,"        ResourceManager (RM for short) is the computing resource management module of Linkis. All EngineConn (EC for short), EngineConnManager (ECM for short), and even external resources including Yarn are managed by RM. RM can manage re [...]
diff --git a/assets/rm-03.8382829b.png b/assets/rm-03.8382829b.png
new file mode 100644
index 0000000..a3716d9
Binary files /dev/null and b/assets/rm-03.8382829b.png differ
diff --git a/assets/rm-04.2385c2db.png b/assets/rm-04.2385c2db.png
new file mode 100644
index 0000000..bbc0050
Binary files /dev/null and b/assets/rm-04.2385c2db.png differ
diff --git a/assets/rm-05.347294cd.png b/assets/rm-05.347294cd.png
new file mode 100644
index 0000000..e0c5c2c
Binary files /dev/null and b/assets/rm-05.347294cd.png differ
diff --git a/assets/rm-06.dde9d64d.png b/assets/rm-06.dde9d64d.png
new file mode 100644
index 0000000..9d04c98
Binary files /dev/null and b/assets/rm-06.dde9d64d.png differ
diff --git a/assets/scala-run.62f19952.png b/assets/scala-run.62f19952.png
new file mode 100644
index 0000000..a469a1e
Binary files /dev/null and b/assets/scala-run.62f19952.png differ
diff --git a/assets/searching_keywords.41a60149.png b/assets/searching_keywords.41a60149.png
new file mode 100644
index 0000000..f578266
Binary files /dev/null and b/assets/searching_keywords.41a60149.png differ
diff --git a/assets/shell-run.6a5566b5.png b/assets/shell-run.6a5566b5.png
new file mode 100644
index 0000000..de28817
Binary files /dev/null and b/assets/shell-run.6a5566b5.png differ
diff --git a/assets/shell.06015d78.js b/assets/shell.06015d78.js
new file mode 100644
index 0000000..58588d9
--- /dev/null
+++ b/assets/shell.06015d78.js
@@ -0,0 +1 @@
+import{o as e,c as l,b as n,e as t,r as s,l as i,u as h}from"./vendor.1180558b.js";const o={class:"markdown-body"},u=[n("h1",null,"Shell engine usage document",-1),n("p",null,"This article mainly introduces the configuration, deployment and use of Shell engine in Linkis1.0",-1),n("h2",null,"1. The environment configuration before using the Shell engine",-1),n("p",null,"If you want to use the shell engine on your server, you need to ensure that the user’s PATH has the bash execution direc [...]
diff --git a/assets/spark-conf.9e59a279.png b/assets/spark-conf.9e59a279.png
new file mode 100644
index 0000000..a0a07d0
Binary files /dev/null and b/assets/spark-conf.9e59a279.png differ
diff --git a/assets/spark.e086b785.js b/assets/spark.e086b785.js
new file mode 100644
index 0000000..b638cb7
--- /dev/null
+++ b/assets/spark.e086b785.js
@@ -0,0 +1 @@
+import{_ as n,a as e}from"./workflow.72652f4e.js";import{o as l,c as t,b as a,e as s,r as i,l as u,u as r}from"./vendor.1180558b.js";var o="/assets/sparksql_run.115bb5a7.png",p="/assets/pyspakr-run.9c36d9ef.png",h="/assets/spark-conf.9e59a279.png";const c={class:"markdown-body"},k=[a("h1",null,"Spark engine usage documentation",-1),a("p",null,"This article mainly introduces the configuration, deployment and use of spark engine in Linkis1.0.",-1),a("h2",null,"1. Environment configuration  [...]
diff --git a/assets/structure.1bc4dbfc.js b/assets/structure.2309b7ab.js
similarity index 99%
rename from assets/structure.1bc4dbfc.js
rename to assets/structure.2309b7ab.js
index 129aa52..1ff021e 100644
--- a/assets/structure.1bc4dbfc.js
+++ b/assets/structure.2309b7ab.js
@@ -1 +1 @@
-import{o as i,c as e,m as n,r as s,l as r,u as o}from"./vendor.12a5b039.js";const t={class:"markdown-body"},l=[n("<h1>Installation directory structure</h1><p>The directory structure of Linkis1.0 is very different from the 0.X version. Each microservice in 0.X has a root directory that exists independently. The main advantage of this directory structure is that it is easy to distinguish microservices and facilitate individual Microservices are managed, but there are some obvious problems: [...]
+import{o as i,c as e,m as n,r as s,l as r,u as o}from"./vendor.1180558b.js";const t={class:"markdown-body"},l=[n("<h1>Installation directory structure</h1><p>The directory structure of Linkis1.0 is very different from the 0.X version. Each microservice in 0.X has a root directory that exists independently. The main advantage of this directory structure is that it is easy to distinguish microservices and facilitate individual Microservices are managed, but there are some obvious problems: [...]
diff --git a/assets/team.13ce5e55.css b/assets/team.04f1ab61.css
similarity index 51%
rename from assets/team.13ce5e55.css
rename to assets/team.04f1ab61.css
index cc81f61..72d05cd 100644
--- a/assets/team.13ce5e55.css
+++ b/assets/team.04f1ab61.css
@@ -1 +1 @@
-.team-page .contributor-list[data-v-e3df63b0]{padding:20px 0 40px}.team-page .contributor-list .contributor-item[data-v-e3df63b0]{display:inline-block;margin-right:20px;margin-bottom:20px;padding:16px 16px 16px 48px;background-size:24px;background-position:16px center;background-repeat:no-repeat;color:#0f1222;border:1px solid rgba(15,18,34,.2);border-radius:4px}.team-page .contributor-list .contributor-item[data-v-e3df63b0]:last-child{margin-right:0}.team-page .character-list[data-v-e3df [...]
+.team-page .contributor-list[data-v-0399096f]{padding:20px 0 40px}.team-page .contributor-list .contributor-item[data-v-0399096f]{display:inline-block;margin-right:20px;margin-bottom:20px;padding:16px 16px 16px 48px;background-size:24px;background-position:16px center;background-repeat:no-repeat;color:#0f1222;border:1px solid rgba(15,18,34,.2);border-radius:4px}.team-page .contributor-list .contributor-item[data-v-0399096f]:last-child{margin-right:0}.team-page .character-list[data-v-0399 [...]
diff --git a/assets/team.c0178c87.js b/assets/team.c0178c87.js
deleted file mode 100644
index 6acf816..0000000
--- a/assets/team.c0178c87.js
+++ /dev/null
@@ -1 +0,0 @@
-import{u as a}from"./utils.7ca2fb6d.js";import{_ as t}from"./plugin-vue_export-helper.5a098b48.js";import{o as e,c as i,b as r,F as h,k as s,p as u,j as n,t as c}from"./vendor.12a5b039.js";const o={info:{desc:'The Linkis team is comprised of Members and Contributors. Members have direct access to the source of Linkis project and actively evolve the code-base. Contributors improve the project through submission of patches and suggestions to the Members. The number of Contributors to the p [...]
diff --git a/assets/team.e10d896f.js b/assets/team.e10d896f.js
new file mode 100644
index 0000000..6d38ed6
--- /dev/null
+++ b/assets/team.e10d896f.js
@@ -0,0 +1 @@
+import{u as a}from"./utils.7ca2fb6d.js";import{_ as t}from"./plugin-vue_export-helper.5a098b48.js";import{o as i,c as e,b as u,t as r,F as h,k as s,p as n,j as c}from"./vendor.1180558b.js";const g={info:{tip:"(In no particular order)",desc:'You can participate in the contribution of Apache Linkis by reporting bugs/submitting new features or improvement suggestions/submitting patches/ writing or refining documents/attending community Q&A/organizing community activities, etc. For detailed  [...]
diff --git a/assets/tuning.45470047.js b/assets/tuning.45470047.js
new file mode 100644
index 0000000..9407a8b
--- /dev/null
+++ b/assets/tuning.45470047.js
@@ -0,0 +1 @@
+import{o as e,c as n,m as t,r as a,l as r,u as o}from"./vendor.1180558b.js";const s={class:"markdown-body"},i=[t('<blockquote><p>Linkis0.x version runs stably on the production environment of WeBank, and supports various businesses. Linkis1.0 is an optimized version of 0.x, and the related tuning logic has not changed, so this document will introduce several Linkis deployment and tuning suggestions. Due to limited space, this article cannot cover all optimization scenarios. Related tunin [...]
diff --git a/assets/UserManual.905b8e9a.js b/assets/user.4c9df01e.js
similarity index 99%
rename from assets/UserManual.905b8e9a.js
rename to assets/user.4c9df01e.js
index 37caa9b..553d32d 100644
--- a/assets/UserManual.905b8e9a.js
+++ b/assets/user.4c9df01e.js
@@ -1 +1 @@
-import{o as e,c as t,m as n,r as i,l as o,u as s}from"./vendor.12a5b039.js";const l={class:"markdown-body"},a=[n('<h1>Linkis User Manual</h1><blockquote><p>Linkis provides a convenient interface for calling JAVA and SCALA. It can be used only by introducing the linkis-computation-client module. After 1.0, the method of submitting with Label is added. The following will introduce both ways that compatible with 0.X and newly added in 1.0.</p></blockquote><h2>1. Introduce dependent modules< [...]
+import{o as e,c as t,m as n,r as i,l as o,u as s}from"./vendor.1180558b.js";const l={class:"markdown-body"},a=[n('<h1>Linkis User Manual</h1><blockquote><p>Linkis provides a convenient interface for calling JAVA and SCALA. It can be used only by introducing the linkis-computation-client module. After 1.0, the method of submitting with Label is added. The following will introduce both ways that compatible with 0.X and newly added in 1.0.</p></blockquote><h2>1. Introduce dependent modules< [...]
diff --git a/assets/vendor.12a5b039.js b/assets/vendor.1180558b.js
similarity index 58%
rename from assets/vendor.12a5b039.js
rename to assets/vendor.1180558b.js
index ca31aa9..df70269 100644
--- a/assets/vendor.12a5b039.js
+++ b/assets/vendor.1180558b.js
@@ -1,21 +1,21 @@
-function e(e,t){const n=Object.create(null),r=e.split(",");for(let o=0;o<r.length;o++)n[r[o]]=!0;return t?e=>!!n[e.toLowerCase()]:e=>!!n[e]}const t=e("itemscope,allowfullscreen,formnovalidate,ismap,nomodule,novalidate,readonly");function n(e){return!!e||""===e}function r(e){if(k(e)){const t={};for(let n=0;n<e.length;n++){const o=e[n],s=x(o)?l(o):r(o);if(s)for(const e in s)t[e]=s[e]}return t}return x(e)||T(e)?e:void 0}const o=/;(?![^(]*\))/g,s=/:(.+)/;function l(e){const t={};return e.spl [...]
+function e(e,t){const n=Object.create(null),r=e.split(",");for(let o=0;o<r.length;o++)n[r[o]]=!0;return t?e=>!!n[e.toLowerCase()]:e=>!!n[e]}const t=e("itemscope,allowfullscreen,formnovalidate,ismap,nomodule,novalidate,readonly");function n(e){return!!e||""===e}function r(e){if(k(e)){const t={};for(let n=0;n<e.length;n++){const o=e[n],s=x(o)?l(o):r(o);if(s)for(const e in s)t[e]=s[e]}return t}return x(e)||T(e)?e:void 0}const o=/;(?![^(]*\))/g,s=/:(.+)/;function l(e){const t={};return e.spl [...]
 /*!
   * vue-router v4.0.11
   * (c) 2021 Eduardo San Martin Morote
   * @license MIT
-  */(e);if(!r)return;const o=t._component;L(o)||o.render||o.template||(o.template=r.innerHTML),r.innerHTML="";const s=n(r,!1,r instanceof SVGElement);return r instanceof Element&&(r.removeAttribute("v-cloak"),r.setAttribute("data-v-app","")),s},t};const Ao="function"==typeof Symbol&&"symbol"==typeof Symbol.toStringTag,No=e=>Ao?Symbol(e):"_vr_"+e,Mo=No("rvlm"),$o=No("rvd"),Do=No("r"),Wo=No("rl"),Uo=No("rvl"),jo="undefined"!=typeof window;const Vo=Object.assign;function Ho(e,t){const n={}; [...]
+  */(e);if(!r)return;const o=t._component;L(o)||o.render||o.template||(o.template=r.innerHTML),r.innerHTML="";const s=n(r,!1,r instanceof SVGElement);return r instanceof Element&&(r.removeAttribute("v-cloak"),r.setAttribute("data-v-app","")),s},t};const Po="function"==typeof Symbol&&"symbol"==typeof Symbol.toStringTag,Ao=e=>Po?Symbol(e):"_vr_"+e,No=Ao("rvlm"),Mo=Ao("rvd"),$o=Ao("r"),Do=Ao("rl"),Wo=Ao("rvl"),Uo="undefined"!=typeof window;const jo=Object.assign;function Vo(e,t){const n={}; [...]
 /*!
   * shared v9.2.0-beta.11
   * (c) 2021 kazuya kawaguchi
   * Released under the MIT License.
-  */(e,t);n=nl(r.reverse(),"beforeRouteLeave",e,t);for(const s of r)s.leaveGuards.forEach((r=>{n.push(tl(r,e,t))}));const c=b.bind(null,e,t);return n.push(c),ul(n).then((()=>{n=[];for(const r of s.list())n.push(tl(r,e,t));return n.push(c),ul(n)})).then((()=>{n=nl(o,"beforeRouteUpdate",e,t);for(const r of o)r.updateGuards.forEach((r=>{n.push(tl(r,e,t))}));return n.push(c),ul(n)})).then((()=>{n=[];for(const r of e.matched)if(r.beforeEnter&&!t.matched.includes(r))if(Array.isArray(r.beforeEn [...]
+  */(e,t);n=tl(r.reverse(),"beforeRouteLeave",e,t);for(const s of r)s.leaveGuards.forEach((r=>{n.push(el(r,e,t))}));const c=b.bind(null,e,t);return n.push(c),il(n).then((()=>{n=[];for(const r of s.list())n.push(el(r,e,t));return n.push(c),il(n)})).then((()=>{n=tl(o,"beforeRouteUpdate",e,t);for(const r of o)r.updateGuards.forEach((r=>{n.push(el(r,e,t))}));return n.push(c),il(n)})).then((()=>{n=[];for(const r of e.matched)if(r.beforeEnter&&!t.matched.includes(r))if(Array.isArray(r.beforeEn [...]
 /*!
   * devtools-if v9.2.0-beta.11
   * (c) 2021 kazuya kawaguchi
   * Released under the MIT License.
-  */const ua="i18n:init",fa="function:translate",pa=[];
+  */const ia="i18n:init",ua="function:translate",fa=[];
 /*!
   * core-base v9.2.0-beta.11
   * (c) 2021 kazuya kawaguchi
   * Released under the MIT License.
-  */pa[0]={w:[0],i:[3,0],"[":[4],o:[7]},pa[1]={w:[1],".":[2],"[":[4],o:[7]},pa[2]={w:[2],i:[3,0],0:[3,0]},pa[3]={i:[3,0],0:[3,0],w:[1,1],".":[2,1],"[":[4,1],o:[7,1]},pa[4]={"'":[5,0],'"':[6,0],"[":[4,2],"]":[1,3],o:8,l:[4,0]},pa[5]={"'":[4,0],o:8,l:[5,0]},pa[6]={'"':[4,0],o:8,l:[6,0]};const da=/^\s?(?:true|false|-?[\d.]+|'[^']*'|"[^"]*")\s?$/;function ma(e){if(null==e)return"o";switch(e.charCodeAt(0)){case 91:case 93:case 46:case 34:case 39:return e;case 95:case 36:case 45:return"i";case [...]
+  */fa[0]={w:[0],i:[3,0],"[":[4],o:[7]},fa[1]={w:[1],".":[2],"[":[4],o:[7]},fa[2]={w:[2],i:[3,0],0:[3,0]},fa[3]={i:[3,0],0:[3,0],w:[1,1],".":[2,1],"[":[4,1],o:[7,1]},fa[4]={"'":[5,0],'"':[6,0],"[":[4,2],"]":[1,3],o:8,l:[4,0]},fa[5]={"'":[4,0],o:8,l:[5,0]},fa[6]={'"':[4,0],o:8,l:[6,0]};const pa=/^\s?(?:true|false|-?[\d.]+|'[^']*'|"[^"]*")\s?$/;function da(e){if(null==e)return"o";switch(e.charCodeAt(0)){case 91:case 93:case 46:case 34:case 39:return e;case 95:case 36:case 45:return"i";case [...]
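This vendor chunk is the root of most of the churn in this commit: the minifier reassigned internal identifiers (Ao to Po and the like above), its content hash changed from 12a5b039 to 1180558b, and because every other chunk imports it by that hashed filename, each of them picks up the one-line rewrite seen in the `@@ -1 +1 @@` hunks, which in turn changes each importer's own hash. A sketch of the rewrite from an importer's point of view:

    // Before this commit: import { o as e, c as t } from "./vendor.12a5b039.js";
    // After the vendor rebuild, every importing chunk carries the new name:
    import { o as e, c as t } from "./vendor.1180558b.js";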
diff --git a/assets/wedatasphere_contact_01.ce92bdb6.png b/assets/wedatasphere_contact_01.ce92bdb6.png
new file mode 100644
index 0000000..5a3d80e
Binary files /dev/null and b/assets/wedatasphere_contact_01.ce92bdb6.png differ
diff --git a/assets/wedatasphere_stack_Linkis.efef3aa3.png b/assets/wedatasphere_stack_Linkis.efef3aa3.png
new file mode 100644
index 0000000..36060b9
Binary files /dev/null and b/assets/wedatasphere_stack_Linkis.efef3aa3.png differ
diff --git a/assets/workflow.72652f4e.js b/assets/workflow.72652f4e.js
new file mode 100644
index 0000000..5a52703
--- /dev/null
+++ b/assets/workflow.72652f4e.js
@@ -0,0 +1 @@
+var s="/assets/queue-set.3007a0ca.png",a="/assets/workflow.4526f490.png";export{s as _,a};
diff --git "a/assets/\344\270\234\346\226\271\351\200\232.4814e53c.png" "b/assets/\344\270\234\346\226\271\351\200\232.4814e53c.png"
deleted file mode 100644
index 72fde94..0000000
Binary files "a/assets/\344\270\234\346\226\271\351\200\232.4814e53c.png" and /dev/null differ
diff --git "a/assets/\344\270\234\346\226\271\351\200\232.b2758d5e.png" "b/assets/\344\270\234\346\226\271\351\200\232.b2758d5e.png"
new file mode 100644
index 0000000..16d1cf8
Binary files /dev/null and "b/assets/\344\270\234\346\226\271\351\200\232.b2758d5e.png" differ
diff --git "a/assets/\344\270\255\344\275\223\345\275\251\347\247\221\346\212\200.d1ffcc7d.png" "b/assets/\344\270\255\344\275\223\345\275\251\347\247\221\346\212\200.d1ffcc7d.png"
deleted file mode 100644
index c343ba5..0000000
Binary files "a/assets/\344\270\255\344\275\223\345\275\251\347\247\221\346\212\200.d1ffcc7d.png" and /dev/null differ
diff --git "a/assets/\344\270\255\344\275\223\345\275\251\347\247\221\346\212\200.f0458dd2.png" "b/assets/\344\270\255\344\275\223\345\275\251\347\247\221\346\212\200.f0458dd2.png"
new file mode 100644
index 0000000..53790f0
Binary files /dev/null and "b/assets/\344\270\255\344\275\223\345\275\251\347\247\221\346\212\200.f0458dd2.png" differ
diff --git "a/assets/\344\270\255\345\233\275\347\224\265\347\247\221.5bf9bcd0.png" "b/assets/\344\270\255\345\233\275\347\224\265\347\247\221.5bf9bcd0.png"
new file mode 100644
index 0000000..d03b4f4
Binary files /dev/null and "b/assets/\344\270\255\345\233\275\347\224\265\347\247\221.5bf9bcd0.png" differ
diff --git "a/assets/\344\270\255\345\233\275\347\224\265\347\247\221.864feafc.jpg" "b/assets/\344\270\255\345\233\275\347\224\265\347\247\221.864feafc.jpg"
deleted file mode 100644
index 589617f..0000000
Binary files "a/assets/\344\270\255\345\233\275\347\224\265\347\247\221.864feafc.jpg" and /dev/null differ
diff --git "a/assets/\344\270\255\345\233\275\351\200\232\344\277\241\346\234\215\345\212\241.6242b949.png" "b/assets/\344\270\255\345\233\275\351\200\232\344\277\241\346\234\215\345\212\241.6242b949.png"
deleted file mode 100644
index 0ce6990..0000000
Binary files "a/assets/\344\270\255\345\233\275\351\200\232\344\277\241\346\234\215\345\212\241.6242b949.png" and /dev/null differ
diff --git "a/assets/\344\270\255\345\233\275\351\200\232\344\277\241\346\234\215\345\212\241.de1dbff8.png" "b/assets/\344\270\255\345\233\275\351\200\232\344\277\241\346\234\215\345\212\241.de1dbff8.png"
new file mode 100644
index 0000000..153ce6f
Binary files /dev/null and "b/assets/\344\270\255\345\233\275\351\200\232\344\277\241\346\234\215\345\212\241.de1dbff8.png" differ
diff --git "a/assets/\344\270\255\351\200\232\344\272\221\344\273\223.a785e23f.png" "b/assets/\344\270\255\351\200\232\344\272\221\344\273\223.a785e23f.png"
deleted file mode 100644
index 7a27229..0000000
Binary files "a/assets/\344\270\255\351\200\232\344\272\221\344\273\223.a785e23f.png" and /dev/null differ
diff --git "a/assets/\344\270\255\351\200\232\344\272\221\344\273\223.c02b68a5.png" "b/assets/\344\270\255\351\200\232\344\272\221\344\273\223.c02b68a5.png"
new file mode 100644
index 0000000..7c27113
Binary files /dev/null and "b/assets/\344\270\255\351\200\232\344\272\221\344\273\223.c02b68a5.png" differ
diff --git "a/assets/\344\271\213\346\261\237\345\256\236\351\252\214\345\256\244.46d52eec.png" "b/assets/\344\271\213\346\261\237\345\256\236\351\252\214\345\256\244.46d52eec.png"
deleted file mode 100644
index 5f5e7c3..0000000
Binary files "a/assets/\344\271\213\346\261\237\345\256\236\351\252\214\345\256\244.46d52eec.png" and /dev/null differ
diff --git "a/assets/\344\271\213\346\261\237\345\256\236\351\252\214\345\256\244.657671b0.png" "b/assets/\344\271\213\346\261\237\345\256\236\351\252\214\345\256\244.657671b0.png"
new file mode 100644
index 0000000..3149023
Binary files /dev/null and "b/assets/\344\271\213\346\261\237\345\256\236\351\252\214\345\256\244.657671b0.png" differ
diff --git "a/assets/\344\272\221\345\276\222\347\247\221\346\212\200.d6b063f3.png" "b/assets/\344\272\221\345\276\222\347\247\221\346\212\200.d6b063f3.png"
deleted file mode 100644
index 249aaaa..0000000
Binary files "a/assets/\344\272\221\345\276\222\347\247\221\346\212\200.d6b063f3.png" and /dev/null differ
diff --git "a/assets/\344\272\221\345\276\222\347\247\221\346\212\200.e101f4b2.png" "b/assets/\344\272\221\345\276\222\347\247\221\346\212\200.e101f4b2.png"
new file mode 100644
index 0000000..c3760bf
Binary files /dev/null and "b/assets/\344\272\221\345\276\222\347\247\221\346\212\200.e101f4b2.png" differ
diff --git "a/assets/\344\276\235\345\233\276.c76de0a6.png" "b/assets/\344\276\235\345\233\276.c76de0a6.png"
new file mode 100644
index 0000000..76ac6a5
Binary files /dev/null and "b/assets/\344\276\235\345\233\276.c76de0a6.png" differ
diff --git "a/assets/\344\276\235\345\233\276.e1935876.png" "b/assets/\344\276\235\345\233\276.e1935876.png"
deleted file mode 100644
index 58aaa3f..0000000
Binary files "a/assets/\344\276\235\345\233\276.e1935876.png" and /dev/null differ
diff --git "a/assets/\344\277\241\347\224\250\347\224\237\346\264\273.bce0bb69.png" "b/assets/\344\277\241\347\224\250\347\224\237\346\264\273.bce0bb69.png"
new file mode 100644
index 0000000..5f29ff7
Binary files /dev/null and "b/assets/\344\277\241\347\224\250\347\224\237\346\264\273.bce0bb69.png" differ
diff --git "a/assets/\345\215\216\344\270\255\347\247\221\346\212\200\345\244\247\345\255\246.79502b9d.jpg" "b/assets/\345\215\216\344\270\255\347\247\221\346\212\200\345\244\247\345\255\246.79502b9d.jpg"
deleted file mode 100644
index 70e557f..0000000
Binary files "a/assets/\345\215\216\344\270\255\347\247\221\346\212\200\345\244\247\345\255\246.79502b9d.jpg" and /dev/null differ
diff --git "a/assets/\345\215\216\344\270\255\347\247\221\346\212\200\345\244\247\345\255\246.fcf29603.png" "b/assets/\345\215\216\344\270\255\347\247\221\346\212\200\345\244\247\345\255\246.fcf29603.png"
new file mode 100644
index 0000000..d7cbd31
Binary files /dev/null and "b/assets/\345\215\216\344\270\255\347\247\221\346\212\200\345\244\247\345\255\246.fcf29603.png" differ
diff --git "a/assets/\345\223\227\345\225\246\345\225\246.045c3b9e.jpg" "b/assets/\345\223\227\345\225\246\345\225\246.045c3b9e.jpg"
deleted file mode 100644
index 3d94cd0..0000000
Binary files "a/assets/\345\223\227\345\225\246\345\225\246.045c3b9e.jpg" and /dev/null differ
diff --git "a/assets/\345\223\227\345\225\246\345\225\246.2eef0fe4.png" "b/assets/\345\223\227\345\225\246\345\225\246.2eef0fe4.png"
new file mode 100644
index 0000000..cd8da43
Binary files /dev/null and "b/assets/\345\223\227\345\225\246\345\225\246.2eef0fe4.png" differ
diff --git "a/assets/\345\234\210\345\244\226\345\220\214\345\255\246.2bb21f07.png" "b/assets/\345\234\210\345\244\226\345\220\214\345\255\246.2bb21f07.png"
new file mode 100644
index 0000000..8d2d8c2
Binary files /dev/null and "b/assets/\345\234\210\345\244\226\345\220\214\345\255\246.2bb21f07.png" differ
diff --git "a/assets/\345\234\210\345\244\226\345\220\214\345\255\246.9c81d026.png" "b/assets/\345\234\210\345\244\226\345\220\214\345\255\246.9c81d026.png"
deleted file mode 100644
index fc623d4..0000000
Binary files "a/assets/\345\234\210\345\244\226\345\220\214\345\255\246.9c81d026.png" and /dev/null differ
diff --git "a/assets/\345\244\251\347\277\274\344\272\221.719b17b2.png" "b/assets/\345\244\251\347\277\274\344\272\221.719b17b2.png"
new file mode 100644
index 0000000..da490bb
Binary files /dev/null and "b/assets/\345\244\251\347\277\274\344\272\221.719b17b2.png" differ
diff --git "a/assets/\345\244\251\347\277\274\344\272\221.ee336756.png" "b/assets/\345\244\251\347\277\274\344\272\221.ee336756.png"
deleted file mode 100644
index 8973744..0000000
Binary files "a/assets/\345\244\251\347\277\274\344\272\221.ee336756.png" and /dev/null differ
diff --git "a/assets/\345\271\263\345\256\211.1f145bbc.png" "b/assets/\345\271\263\345\256\211.1f145bbc.png"
new file mode 100644
index 0000000..cf2d114
Binary files /dev/null and "b/assets/\345\271\263\345\256\211.1f145bbc.png" differ
diff --git "a/assets/\345\271\263\345\256\211.d0212a59.png" "b/assets/\345\271\263\345\256\211.d0212a59.png"
deleted file mode 100644
index 4895178..0000000
Binary files "a/assets/\345\271\263\345\256\211.d0212a59.png" and /dev/null differ
diff --git "a/assets/\346\210\220\351\203\275\345\244\247\346\225\260\346\215\256.3da8e88f.png" "b/assets/\346\210\220\351\203\275\345\244\247\346\225\260\346\215\256.3da8e88f.png"
new file mode 100644
index 0000000..ddf1f7b
Binary files /dev/null and "b/assets/\346\210\220\351\203\275\345\244\247\346\225\260\346\215\256.3da8e88f.png" differ
diff --git "a/assets/\346\210\220\351\203\275\345\244\247\346\225\260\346\215\256.d21c18fc.png" "b/assets/\346\210\220\351\203\275\345\244\247\346\225\260\346\215\256.d21c18fc.png"
deleted file mode 100644
index 3ce430f..0000000
Binary files "a/assets/\346\210\220\351\203\275\345\244\247\346\225\260\346\215\256.d21c18fc.png" and /dev/null differ
diff --git "a/assets/\346\213\233\350\201\224\346\266\210\350\264\271\351\207\221\350\236\215\346\234\211\351\231\220\345\205\254\345\217\270.66cf4318.png" "b/assets/\346\213\233\350\201\224\346\266\210\350\264\271\351\207\221\350\236\215\346\234\211\351\231\220\345\205\254\345\217\270.66cf4318.png"
deleted file mode 100644
index 7a39d07..0000000
Binary files "a/assets/\346\213\233\350\201\224\346\266\210\350\264\271\351\207\221\350\236\215\346\234\211\351\231\220\345\205\254\345\217\270.66cf4318.png" and /dev/null differ
diff --git "a/assets/\346\213\233\350\201\224\346\266\210\350\264\271\351\207\221\350\236\215\346\234\211\351\231\220\345\205\254\345\217\270.903c953e.png" "b/assets/\346\213\233\350\201\224\346\266\210\350\264\271\351\207\221\350\236\215\346\234\211\351\231\220\345\205\254\345\217\270.903c953e.png"
new file mode 100644
index 0000000..d13df16
Binary files /dev/null and "b/assets/\346\213\233\350\201\224\346\266\210\350\264\271\351\207\221\350\236\215\346\234\211\351\231\220\345\205\254\345\217\270.903c953e.png" differ
diff --git "a/assets/\346\235\255\345\267\236\347\224\265\351\255\202\347\275\221\347\273\234.3ec071b8.png" "b/assets/\346\235\255\345\267\236\347\224\265\351\255\202\347\275\221\347\273\234.3ec071b8.png"
deleted file mode 100644
index bc61646..0000000
Binary files "a/assets/\346\235\255\345\267\236\347\224\265\351\255\202\347\275\221\347\273\234.3ec071b8.png" and /dev/null differ
diff --git "a/assets/\346\241\224\345\255\220\345\210\206\346\234\237.55aa406b.png" "b/assets/\346\241\224\345\255\220\345\210\206\346\234\237.55aa406b.png"
deleted file mode 100644
index 3ff45b8..0000000
Binary files "a/assets/\346\241\224\345\255\220\345\210\206\346\234\237.55aa406b.png" and /dev/null differ
diff --git "a/assets/\346\241\224\345\255\220\345\210\206\346\234\237.f980f03b.png" "b/assets/\346\241\224\345\255\220\345\210\206\346\234\237.f980f03b.png"
new file mode 100644
index 0000000..51c5b8d
Binary files /dev/null and "b/assets/\346\241\224\345\255\220\345\210\206\346\234\237.f980f03b.png" differ
diff --git "a/assets/\346\265\267\345\272\267\345\250\201\350\247\206.70f8122b.png" "b/assets/\346\265\267\345\272\267\345\250\201\350\247\206.70f8122b.png"
deleted file mode 100644
index a961cc4..0000000
Binary files "a/assets/\346\265\267\345\272\267\345\250\201\350\247\206.70f8122b.png" and /dev/null differ
diff --git "a/assets/\346\265\267\345\272\267\345\250\201\350\247\206.fb60f896.png" "b/assets/\346\265\267\345\272\267\345\250\201\350\247\206.fb60f896.png"
new file mode 100644
index 0000000..8c0f80f
Binary files /dev/null and "b/assets/\346\265\267\345\272\267\345\250\201\350\247\206.fb60f896.png" differ
diff --git "a/assets/\347\220\206\346\203\263\346\261\275\350\275\246.0123a918.png" "b/assets/\347\220\206\346\203\263\346\261\275\350\275\246.0123a918.png"
deleted file mode 100644
index 3c0c20f..0000000
Binary files "a/assets/\347\220\206\346\203\263\346\261\275\350\275\246.0123a918.png" and /dev/null differ
diff --git "a/assets/\347\220\206\346\203\263\346\261\275\350\275\246.c5e2739b.png" "b/assets/\347\220\206\346\203\263\346\261\275\350\275\246.c5e2739b.png"
new file mode 100644
index 0000000..e4146dc
Binary files /dev/null and "b/assets/\347\220\206\346\203\263\346\261\275\350\275\246.c5e2739b.png" differ
diff --git "a/assets/\347\231\276\346\234\233\344\272\221.77c04429.png" "b/assets/\347\231\276\346\234\233\344\272\221.77c04429.png"
new file mode 100644
index 0000000..2711a7d
Binary files /dev/null and "b/assets/\347\231\276\346\234\233\344\272\221.77c04429.png" differ
diff --git "a/assets/\347\231\276\346\234\233\344\272\221.c2c1293f.png" "b/assets/\347\231\276\346\234\233\344\272\221.c2c1293f.png"
deleted file mode 100644
index 90395c6..0000000
Binary files "a/assets/\347\231\276\346\234\233\344\272\221.c2c1293f.png" and /dev/null differ
diff --git "a/assets/\347\253\213\345\210\233\345\225\206\345\237\216.294fde8b.png" "b/assets/\347\253\213\345\210\233\345\225\206\345\237\216.294fde8b.png"
deleted file mode 100644
index ca71850..0000000
Binary files "a/assets/\347\253\213\345\210\233\345\225\206\345\237\216.294fde8b.png" and /dev/null differ
diff --git "a/assets/\347\253\213\345\210\233\345\225\206\345\237\216.7f44a468.png" "b/assets/\347\253\213\345\210\233\345\225\206\345\237\216.7f44a468.png"
new file mode 100644
index 0000000..3063941
Binary files /dev/null and "b/assets/\347\253\213\345\210\233\345\225\206\345\237\216.7f44a468.png" differ
diff --git "a/assets/\347\272\242\350\261\241\344\272\221\350\205\276.7417b5e6.png" "b/assets/\347\272\242\350\261\241\344\272\221\350\205\276.7417b5e6.png"
deleted file mode 100644
index bd54887..0000000
Binary files "a/assets/\347\272\242\350\261\241\344\272\221\350\205\276.7417b5e6.png" and /dev/null differ
diff --git "a/assets/\347\272\242\350\261\241\344\272\221\350\205\276.929a5839.png" "b/assets/\347\272\242\350\261\241\344\272\221\350\205\276.929a5839.png"
new file mode 100644
index 0000000..68b758f
Binary files /dev/null and "b/assets/\347\272\242\350\261\241\344\272\221\350\205\276.929a5839.png" differ
diff --git "a/assets/\350\201\224\345\210\233\346\231\272\350\236\215.188edcec.png" "b/assets/\350\201\224\345\210\233\346\231\272\350\236\215.188edcec.png"
deleted file mode 100644
index 1320cbe..0000000
Binary files "a/assets/\350\201\224\345\210\233\346\231\272\350\236\215.188edcec.png" and /dev/null differ
diff --git "a/assets/\350\201\224\345\210\233\346\231\272\350\236\215.808a8eaa.png" "b/assets/\350\201\224\345\210\233\346\231\272\350\236\215.808a8eaa.png"
new file mode 100644
index 0000000..721337f
Binary files /dev/null and "b/assets/\350\201\224\345\210\233\346\231\272\350\236\215.808a8eaa.png" differ
diff --git "a/assets/\350\210\252\345\244\251\344\277\241\346\201\257.23b0d23c.png" "b/assets/\350\210\252\345\244\251\344\277\241\346\201\257.23b0d23c.png"
deleted file mode 100644
index 73b7589..0000000
Binary files "a/assets/\350\210\252\345\244\251\344\277\241\346\201\257.23b0d23c.png" and /dev/null differ
diff --git "a/assets/\350\210\252\345\244\251\344\277\241\346\201\257.e12022d3.png" "b/assets/\350\210\252\345\244\251\344\277\241\346\201\257.e12022d3.png"
new file mode 100644
index 0000000..7b874cb
Binary files /dev/null and "b/assets/\350\210\252\345\244\251\344\277\241\346\201\257.e12022d3.png" differ
diff --git "a/assets/\350\211\276\344\275\263\347\224\237\346\264\273.26403b56.png" "b/assets/\350\211\276\344\275\263\347\224\237\346\264\273.26403b56.png"
new file mode 100644
index 0000000..6a6105c
Binary files /dev/null and "b/assets/\350\211\276\344\275\263\347\224\237\346\264\273.26403b56.png" differ
diff --git "a/assets/\350\211\276\344\275\263\347\224\237\346\264\273.b508c1dc.jpg" "b/assets/\350\211\276\344\275\263\347\224\237\346\264\273.b508c1dc.jpg"
deleted file mode 100644
index ab32413..0000000
Binary files "a/assets/\350\211\276\344\275\263\347\224\237\346\264\273.b508c1dc.jpg" and /dev/null differ
diff --git "a/assets/\350\215\243\350\200\200.5a89cf66.png" "b/assets/\350\215\243\350\200\200.5a89cf66.png"
new file mode 100644
index 0000000..0dd0ad1
Binary files /dev/null and "b/assets/\350\215\243\350\200\200.5a89cf66.png" differ
diff --git "a/assets/\350\215\243\350\200\200.ceda8b1e.png" "b/assets/\350\215\243\350\200\200.ceda8b1e.png"
deleted file mode 100644
index 31fb7cd..0000000
Binary files "a/assets/\350\215\243\350\200\200.ceda8b1e.png" and /dev/null differ
diff --git "a/assets/\350\220\250\346\221\251\350\200\266\344\272\221.36d45d17.png" "b/assets/\350\220\250\346\221\251\350\200\266\344\272\221.36d45d17.png"
new file mode 100644
index 0000000..bffbaa7
Binary files /dev/null and "b/assets/\350\220\250\346\221\251\350\200\266\344\272\221.36d45d17.png" differ
diff --git "a/assets/\350\220\250\346\221\251\350\200\266\344\272\221.63ed5828.png" "b/assets/\350\220\250\346\221\251\350\200\266\344\272\221.63ed5828.png"
deleted file mode 100644
index 5d02160..0000000
Binary files "a/assets/\350\220\250\346\221\251\350\200\266\344\272\221.63ed5828.png" and /dev/null differ
diff --git "a/assets/\350\224\232\346\235\245\346\261\275\350\275\246.422c536e.png" "b/assets/\350\224\232\346\235\245\346\261\275\350\275\246.422c536e.png"
new file mode 100644
index 0000000..1669815
Binary files /dev/null and "b/assets/\350\224\232\346\235\245\346\261\275\350\275\246.422c536e.png" differ
diff --git "a/assets/\350\224\232\346\235\245\346\261\275\350\275\246.be672a01.jpg" "b/assets/\350\224\232\346\235\245\346\261\275\350\275\246.be672a01.jpg"
deleted file mode 100644
index c1df2ac..0000000
Binary files "a/assets/\350\224\232\346\235\245\346\261\275\350\275\246.be672a01.jpg" and /dev/null differ
diff --git "a/assets/\350\245\277\345\256\211\347\224\265\345\255\220\347\247\221\346\212\200\345\244\247\345\255\246.3762b76e.jpg" "b/assets/\350\245\277\345\256\211\347\224\265\345\255\220\347\247\221\346\212\200\345\244\247\345\255\246.3762b76e.jpg"
deleted file mode 100644
index dc37326..0000000
Binary files "a/assets/\350\245\277\345\256\211\347\224\265\345\255\220\347\247\221\346\212\200\345\244\247\345\255\246.3762b76e.jpg" and /dev/null differ
diff --git "a/assets/\350\245\277\345\256\211\347\224\265\345\255\220\347\247\221\346\212\200\345\244\247\345\255\246.b4ea0700.png" "b/assets/\350\245\277\345\256\211\347\224\265\345\255\220\347\247\221\346\212\200\345\244\247\345\255\246.b4ea0700.png"
new file mode 100644
index 0000000..860b3a4
Binary files /dev/null and "b/assets/\350\245\277\345\256\211\347\224\265\345\255\220\347\247\221\346\212\200\345\244\247\345\255\246.b4ea0700.png" differ
diff --git "a/assets/\351\241\266\347\202\271\350\275\257\344\273\266.389df8d5.png" "b/assets/\351\241\266\347\202\271\350\275\257\344\273\266.389df8d5.png"
deleted file mode 100644
index 8e80dd0..0000000
Binary files "a/assets/\351\241\266\347\202\271\350\275\257\344\273\266.389df8d5.png" and /dev/null differ
diff --git "a/assets/\351\241\266\347\202\271\350\275\257\344\273\266.e6044237.png" "b/assets/\351\241\266\347\202\271\350\275\257\344\273\266.e6044237.png"
new file mode 100644
index 0000000..517bac1
Binary files /dev/null and "b/assets/\351\241\266\347\202\271\350\275\257\344\273\266.e6044237.png" differ
diff --git a/index.html b/index.html
index d64f040..d59fbc2 100644
--- a/index.html
+++ b/index.html
@@ -1,13 +1,14 @@
 <!DOCTYPE html>
+<!-- index page -->
 <html lang="en">
   <head>
     <meta charset="UTF-8" />
     <link rel="icon" href="/favicon.ico" />
     <meta name="viewport" content="width=device-width, initial-scale=1.0" />
     <title>Apache Linkis</title>
-    <script type="module" crossorigin src="/assets/index.8d1f9740.js"></script>
-    <link rel="modulepreload" href="/assets/vendor.12a5b039.js">
-    <link rel="stylesheet" href="/assets/index.82f016e4.css">
+    <script type="module" crossorigin src="/assets/index.c319b82e.js"></script>
+    <link rel="modulepreload" href="/assets/vendor.1180558b.js">
+    <link rel="stylesheet" href="/assets/index.2b54ad83.css">
   </head>
   <body>
     <div id="app"></div>
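
The src/href churn above is content hashing at work: each production build names every emitted asset after a hash of its bytes (index.8d1f9740.js becomes index.c319b82e.js once its content changes), so browsers can cache assets indefinitely while index.html remains the one mutable entry point. A standalone sketch of the naming scheme only — the algorithm and hash length here are assumptions, not Vite's internals:

    // Illustrates content-hash naming; Vite does the equivalent internally
    // during a production build.
    const crypto = require('crypto')
    const fs = require('fs')

    const file = 'dist/assets/index.js'  // hypothetical build output
    const hash = crypto.createHash('sha256')
      .update(fs.readFileSync(file))
      .digest('hex')
      .slice(0, 8)                       // short hash, e.g. c319b82e
    fs.renameSync(file, `dist/assets/index.${hash}.js`)
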



[incubator-linkis-website] 36/50: UPDATE: 压缩图片 (compress images)

Posted by pe...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

peacewong pushed a commit to branch asf-staging
in repository https://gitbox.apache.org/repos/asf/incubator-linkis-website.git

commit c6345d9ee5a540fd4fb2392687cce8899450ca6c
Author: lucaszhu <lu...@webank.com>
AuthorDate: Mon Oct 18 15:05:37 2021 +0800

    UPDATE: 压缩图片 (compress images)
---
 src/assets/docs/EngineUsage/hive-config.png        | Bin 86864 -> 44717 bytes
 src/assets/docs/EngineUsage/hive-run.png           | Bin 94294 -> 31403 bytes
 src/assets/docs/EngineUsage/jdbc-conf.png          | Bin 91609 -> 46113 bytes
 src/assets/docs/EngineUsage/jdbc-run.png           | Bin 56438 -> 21937 bytes
 src/assets/docs/EngineUsage/pyspakr-run.png        | Bin 124979 -> 43552 bytes
 src/assets/docs/EngineUsage/python-config.png      | Bin 92997 -> 47021 bytes
 src/assets/docs/EngineUsage/python-run.png         | Bin 89641 -> 61451 bytes
 src/assets/docs/EngineUsage/queue-set.png          | Bin 93935 -> 41298 bytes
 src/assets/docs/EngineUsage/scala-run.png          | Bin 125060 -> 43959 bytes
 src/assets/docs/EngineUsage/shell-run.png          | Bin 209553 -> 100312 bytes
 src/assets/docs/EngineUsage/spark-conf.png         | Bin 99930 -> 53397 bytes
 src/assets/docs/EngineUsage/sparksql-run.png       | Bin 121699 -> 46611 bytes
 src/assets/docs/EngineUsage/workflow.png           | Bin 151481 -> 51259 bytes
 src/assets/docs/Tuning_and_Troubleshooting/Q&A.png | Bin 161638 -> 72259 bytes
 .../Tuning_and_Troubleshooting/code-fix-01.png     | Bin 199523 -> 61855 bytes
 .../Tuning_and_Troubleshooting/db-config-01.png    | Bin 391789 -> 157843 bytes
 .../Tuning_and_Troubleshooting/db-config-02.png    | Bin 60334 -> 22153 bytes
 .../docs/Tuning_and_Troubleshooting/debug-01.png   | Bin 6168 -> 3258 bytes
 .../docs/Tuning_and_Troubleshooting/debug-02.png   | Bin 62496 -> 25521 bytes
 .../docs/Tuning_and_Troubleshooting/debug-03.png   | Bin 32875 -> 14953 bytes
 .../docs/Tuning_and_Troubleshooting/debug-04.png   | Bin 111758 -> 34622 bytes
 .../docs/Tuning_and_Troubleshooting/debug-05.png   | Bin 52040 -> 20848 bytes
 .../docs/Tuning_and_Troubleshooting/debug-06.png   | Bin 63668 -> 25477 bytes
 .../docs/Tuning_and_Troubleshooting/debug-07.png   | Bin 316176 -> 113342 bytes
 .../docs/Tuning_and_Troubleshooting/debug-08.png   | Bin 27722 -> 12338 bytes
 .../Tuning_and_Troubleshooting/hive-config-01.png  | Bin 76327 -> 27332 bytes
 .../linkis-exception-01.png                        | Bin 1199628 -> 457236 bytes
 .../linkis-exception-02.png                        | Bin 1366293 -> 524390 bytes
 .../linkis-exception-03.png                        | Bin 646836 -> 264782 bytes
 .../linkis-exception-04.png                        | Bin 2965676 -> 1014902 bytes
 .../linkis-exception-05.png                        | Bin 454949 -> 207746 bytes
 .../linkis-exception-06.png                        | Bin 869492 -> 348016 bytes
 .../linkis-exception-07.png                        | Bin 2249882 -> 842448 bytes
 .../linkis-exception-08.png                        | Bin 1191728 -> 499442 bytes
 .../linkis-exception-09.png                        | Bin 1008341 -> 442648 bytes
 .../linkis-exception-10.png                        | Bin 322110 -> 149801 bytes
 .../Tuning_and_Troubleshooting/page-show-01.png    | Bin 115010 -> 39986 bytes
 .../Tuning_and_Troubleshooting/page-show-02.png    | Bin 576911 -> 220102 bytes
 .../Tuning_and_Troubleshooting/page-show-03.png    | Bin 654609 -> 230234 bytes
 .../searching_keywords.png                         | Bin 102094 -> 53652 bytes
 .../Tuning_and_Troubleshooting/shell-error-01.png  | Bin 74682 -> 30629 bytes
 .../Tuning_and_Troubleshooting/shell-error-02.png  | Bin 330735 -> 117077 bytes
 .../Tuning_and_Troubleshooting/shell-error-03.png  | Bin 1624375 -> 516777 bytes
 .../Tuning_and_Troubleshooting/shell-error-04.png  | Bin 803920 -> 318990 bytes
 .../Tuning_and_Troubleshooting/shell-error-05.png  | Bin 179543 -> 60031 bytes
 .../docs/Tunning_And_Troubleshooting/debug-01.png  | Bin 6168 -> 3258 bytes
 .../docs/Tunning_And_Troubleshooting/debug-02.png  | Bin 62496 -> 25521 bytes
 .../docs/Tunning_And_Troubleshooting/debug-03.png  | Bin 32875 -> 14953 bytes
 .../docs/Tunning_And_Troubleshooting/debug-04.png  | Bin 111758 -> 34622 bytes
 .../docs/Tunning_And_Troubleshooting/debug-05.png  | Bin 52040 -> 20848 bytes
 .../docs/Tunning_And_Troubleshooting/debug-06.png  | Bin 63668 -> 25477 bytes
 .../docs/Tunning_And_Troubleshooting/debug-07.png  | Bin 316176 -> 113342 bytes
 .../docs/Tunning_And_Troubleshooting/debug-08.png  | Bin 27722 -> 12338 bytes
 .../Gateway/gateway_server_dispatcher.png          | Bin 83743 -> 47910 bytes
 .../architecture/Gateway/gateway_server_global.png | Bin 85272 -> 36652 bytes
 .../docs/architecture/Gateway/gatway_websocket.png | Bin 37769 -> 16292 bytes
 .../docs/architecture/JobSubmission/execution.png  | Bin 31078 -> 15175 bytes
 .../architecture/JobSubmission/orchestrate.png     | Bin 31095 -> 15181 bytes
 .../docs/architecture/JobSubmission/overall.png    | Bin 231192 -> 95786 bytes
 .../architecture/JobSubmission/physical_tree.png   | Bin 79471 -> 30715 bytes
 .../JobSubmission/result_acquisition.png           | Bin 41007 -> 21797 bytes
 .../docs/architecture/JobSubmission/submission.png | Bin 12946 -> 7309 bytes
 .../Linkis0.X_newengine_architecture.png           | Bin 244826 -> 88465 bytes
 .../docs/architecture/Linkis0.X_services_list.png  | Bin 66821 -> 33522 bytes
 .../docs/architecture/Linkis1.0_architecture.png   | Bin 212362 -> 72168 bytes
 .../Linkis1.0_engineconn_architecture.png          | Bin 157753 -> 55737 bytes
 .../Linkis1.0_newengine_architecture.png           | Bin 26523 -> 13058 bytes
 .../Linkis1.0_newengine_initialization.png         | Bin 48313 -> 24619 bytes
 .../docs/architecture/Linkis1.0_services_list.png  | Bin 85890 -> 35596 bytes
 .../ContextService/linkis-contextservice-01.png    | Bin 22692 -> 9188 bytes
 .../ContextService/linkis-contextservice-02.png    | Bin 10655 -> 4953 bytes
 .../linkis-contextservice-cache-01.png             | Bin 11881 -> 5500 bytes
 .../linkis-contextservice-cache-02.png             | Bin 23902 -> 11546 bytes
 .../linkis-contextservice-cache-03.png             | Bin 109334 -> 53416 bytes
 .../linkis-contextservice-cache-04.png             | Bin 36161 -> 15785 bytes
 .../linkis-contextservice-cache-05.png             | Bin 2265 -> 1488 bytes
 .../linkis-contextservice-client-01.png            | Bin 54438 -> 18839 bytes
 .../linkis-contextservice-client-02.png            | Bin 93036 -> 30023 bytes
 .../linkis-contextservice-client-03.png            | Bin 34839 -> 11690 bytes
 .../ContextService/linkis-contextservice-ha-01.png | Bin 38439 -> 17605 bytes
 .../ContextService/linkis-contextservice-ha-02.png | Bin 21982 -> 10781 bytes
 .../ContextService/linkis-contextservice-ha-03.png | Bin 91788 -> 41714 bytes
 .../ContextService/linkis-contextservice-ha-04.png | Bin 40733 -> 17550 bytes
 .../linkis-contextservice-listener-01.png          | Bin 24414 -> 14209 bytes
 .../linkis-contextservice-listener-02.png          | Bin 46152 -> 21055 bytes
 .../linkis-contextservice-listener-03.png          | Bin 32597 -> 17902 bytes
 .../linkis-contextservice-persistence-01.png       | Bin 198797 -> 107735 bytes
 .../linkis-contextservice-search-01.png            | Bin 33731 -> 11874 bytes
 .../linkis-contextservice-search-02.png            | Bin 26768 -> 8266 bytes
 .../linkis-contextservice-search-03.png            | Bin 33312 -> 11321 bytes
 .../linkis-contextservice-search-04.png            | Bin 25192 -> 9101 bytes
 .../linkis-contextservice-search-05.png            | Bin 24757 -> 9133 bytes
 .../linkis-contextservice-search-06.png            | Bin 29923 -> 11334 bytes
 .../linkis-contextservice-search-07.png            | Bin 30013 -> 11391 bytes
 .../linkis-contextservice-service-01.png           | Bin 56235 -> 27470 bytes
 .../linkis-contextservice-service-02.png           | Bin 73463 -> 37730 bytes
 .../linkis-contextservice-service-03.png           | Bin 23477 -> 12269 bytes
 .../linkis-contextservice-service-04.png           | Bin 27387 -> 13462 bytes
 .../architecture/linkis_engineconnplugin_01.png    | Bin 21864 -> 8146 bytes
 src/assets/docs/architecture/linkis_intro_01.png   | Bin 413878 -> 142195 bytes
 src/assets/docs/architecture/linkis_intro_02.png   | Bin 355186 -> 102080 bytes
 .../architecture/linkis_microservice_gov_01.png    | Bin 109909 -> 46380 bytes
 .../architecture/linkis_microservice_gov_03.png    | Bin 83457 -> 30388 bytes
 .../docs/architecture/linkis_publicservice_01.png  | Bin 62443 -> 25269 bytes
 .../publicenhencement_architecture.png             | Bin 47158 -> 24844 bytes
 .../docs/deploy/Linkis1.0_combined_eureka.png      | Bin 134418 -> 55811 bytes
 src/assets/docs/deploy/distributed_deployment.png  | Bin 130148 -> 49045 bytes
 .../docs/deployment/Linkis1.0_combined_eureka.png  | Bin 134418 -> 55811 bytes
 .../docs/manual/ECM_all_engine_information.png     | Bin 89529 -> 35445 bytes
 src/assets/docs/manual/ECM_editing_interface.png   | Bin 64470 -> 22467 bytes
 .../docs/manual/ECM_management_interface.png       | Bin 43765 -> 17721 bytes
 src/assets/docs/manual/administrator_view.png      | Bin 80087 -> 34121 bytes
 ...he_instance_name_to_view_engine_information.png | Bin 41814 -> 15085 bytes
 src/assets/docs/manual/edit_directory.png          | Bin 89919 -> 37047 bytes
 .../docs/manual/eureka_registration_center.png     | Bin 327966 -> 85144 bytes
 .../docs/manual/global_history_interface.png       | Bin 82340 -> 33772 bytes
 .../docs/manual/global_history_query_button.png    | Bin 81788 -> 33347 bytes
 .../docs/manual/global_variable_interface.png      | Bin 40073 -> 12185 bytes
 .../manual/microservice_management_interface.png   | Bin 39198 -> 14527 bytes
 src/assets/docs/manual/new_application_type.png    | Bin 108864 -> 42139 bytes
 .../manual/parameter_configuration_interface.png   | Bin 79698 -> 32415 bytes
 src/assets/docs/manual/queue_set.png               | Bin 93935 -> 42827 bytes
 .../docs/manual/resource_management_interface.png  | Bin 49277 -> 18044 bytes
 src/assets/docs/manual/sparksql_run.png            | Bin 121699 -> 46611 bytes
 .../manual/task_execution_log_of_a_single_task.png | Bin 114314 -> 55931 bytes
 src/assets/docs/manual/workflow.png                | Bin 151481 -> 51259 bytes
 src/assets/fqa/Q&A.png                             | Bin 161638 -> 72259 bytes
 src/assets/fqa/code-fix-01.png                     | Bin 199523 -> 61855 bytes
 src/assets/fqa/db-config-01.png                    | Bin 391789 -> 157843 bytes
 src/assets/fqa/db-config-02.png                    | Bin 60334 -> 22153 bytes
 src/assets/fqa/debug-01.png                        | Bin 6168 -> 3258 bytes
 src/assets/fqa/debug-02.png                        | Bin 62496 -> 25521 bytes
 src/assets/fqa/debug-03.png                        | Bin 32875 -> 14953 bytes
 src/assets/fqa/debug-04.png                        | Bin 111758 -> 34622 bytes
 src/assets/fqa/debug-05.png                        | Bin 52040 -> 20848 bytes
 src/assets/fqa/debug-06.png                        | Bin 63668 -> 25477 bytes
 src/assets/fqa/debug-07.png                        | Bin 316176 -> 113342 bytes
 src/assets/fqa/debug-08.png                        | Bin 27722 -> 12338 bytes
 src/assets/fqa/hive-config-01.png                  | Bin 76327 -> 27332 bytes
 src/assets/fqa/linkis-exception-01.png             | Bin 1199628 -> 457236 bytes
 src/assets/fqa/linkis-exception-02.png             | Bin 1366293 -> 524390 bytes
 src/assets/fqa/linkis-exception-03.png             | Bin 646836 -> 264782 bytes
 src/assets/fqa/linkis-exception-04.png             | Bin 2965676 -> 1014902 bytes
 src/assets/fqa/linkis-exception-05.png             | Bin 454949 -> 207746 bytes
 src/assets/fqa/linkis-exception-06.png             | Bin 869492 -> 348016 bytes
 src/assets/fqa/linkis-exception-07.png             | Bin 2249882 -> 842448 bytes
 src/assets/fqa/linkis-exception-08.png             | Bin 1191728 -> 499442 bytes
 src/assets/fqa/linkis-exception-09.png             | Bin 1008341 -> 442648 bytes
 src/assets/fqa/linkis-exception-10.png             | Bin 322110 -> 149801 bytes
 src/assets/fqa/page-show-01.png                    | Bin 115010 -> 39986 bytes
 src/assets/fqa/page-show-02.png                    | Bin 576911 -> 220102 bytes
 src/assets/fqa/page-show-03.png                    | Bin 654609 -> 230234 bytes
 src/assets/fqa/searching_keywords.png              | Bin 102094 -> 53652 bytes
 src/assets/fqa/shell-error-01.png                  | Bin 74682 -> 30629 bytes
 src/assets/fqa/shell-error-02.png                  | Bin 330735 -> 117077 bytes
 src/assets/fqa/shell-error-03.png                  | Bin 1624375 -> 516777 bytes
 src/assets/fqa/shell-error-04.png                  | Bin 803920 -> 318990 bytes
 src/assets/fqa/shell-error-05.png                  | Bin 179543 -> 60031 bytes
 src/assets/home/after_linkis_en.png                | Bin 481170 -> 111924 bytes
 src/assets/home/after_linkis_zh.png                | Bin 645519 -> 188079 bytes
 src/assets/home/before_linkis_en.png               | Bin 508718 -> 142195 bytes
 src/assets/home/before_linkis_zh.png               | Bin 332201 -> 101665 bytes
 src/assets/home/description.png                    | Bin 73910 -> 28065 bytes
 163 files changed, 0 insertions(+), 0 deletions(-)
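
The 163 binary-only changes above roughly halve most image payloads. The commit does not record which tool produced the reductions; as a hedged illustration only, a recursive pass with the sharp library (an assumed dependency, not present in this diff) could achieve the same kind of in-place compression:

    // Hypothetical batch compressor (illustrative; the actual tool used for
    // this commit is unknown). Requires: npm i sharp
    const fs = require('fs')
    const path = require('path')
    const sharp = require('sharp')

    async function compressDir(dir) {
      for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
        const p = path.join(dir, entry.name)
        if (entry.isDirectory()) { await compressDir(p); continue }
        if (!/\.png$/i.test(entry.name)) continue
        const out = await sharp(p).png({ quality: 70, compressionLevel: 9 }).toBuffer()
        // Overwrite only when the re-encoded file is actually smaller.
        if (out.length < fs.statSync(p).size) fs.writeFileSync(p, out)
      }
    }

    compressDir('src/assets').catch(console.error)
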

diff --git a/src/assets/docs/EngineUsage/hive-config.png b/src/assets/docs/EngineUsage/hive-config.png
index 9b3df01..7f1bcfe 100644
Binary files a/src/assets/docs/EngineUsage/hive-config.png and b/src/assets/docs/EngineUsage/hive-config.png differ
diff --git a/src/assets/docs/EngineUsage/hive-run.png b/src/assets/docs/EngineUsage/hive-run.png
index 287b1ab..7aca9b3 100644
Binary files a/src/assets/docs/EngineUsage/hive-run.png and b/src/assets/docs/EngineUsage/hive-run.png differ
diff --git a/src/assets/docs/EngineUsage/jdbc-conf.png b/src/assets/docs/EngineUsage/jdbc-conf.png
index 39397d3..605a006 100644
Binary files a/src/assets/docs/EngineUsage/jdbc-conf.png and b/src/assets/docs/EngineUsage/jdbc-conf.png differ
diff --git a/src/assets/docs/EngineUsage/jdbc-run.png b/src/assets/docs/EngineUsage/jdbc-run.png
index fe51598..2e0f47e 100644
Binary files a/src/assets/docs/EngineUsage/jdbc-run.png and b/src/assets/docs/EngineUsage/jdbc-run.png differ
diff --git a/src/assets/docs/EngineUsage/pyspakr-run.png b/src/assets/docs/EngineUsage/pyspakr-run.png
index c80c85b..fd0cf54 100644
Binary files a/src/assets/docs/EngineUsage/pyspakr-run.png and b/src/assets/docs/EngineUsage/pyspakr-run.png differ
diff --git a/src/assets/docs/EngineUsage/python-config.png b/src/assets/docs/EngineUsage/python-config.png
index 2bf1791..bb05e24 100644
Binary files a/src/assets/docs/EngineUsage/python-config.png and b/src/assets/docs/EngineUsage/python-config.png differ
diff --git a/src/assets/docs/EngineUsage/python-run.png b/src/assets/docs/EngineUsage/python-run.png
index 65467af..8b1c97c 100644
Binary files a/src/assets/docs/EngineUsage/python-run.png and b/src/assets/docs/EngineUsage/python-run.png differ
diff --git a/src/assets/docs/EngineUsage/queue-set.png b/src/assets/docs/EngineUsage/queue-set.png
index 735a670..e818025 100644
Binary files a/src/assets/docs/EngineUsage/queue-set.png and b/src/assets/docs/EngineUsage/queue-set.png differ
diff --git a/src/assets/docs/EngineUsage/scala-run.png b/src/assets/docs/EngineUsage/scala-run.png
index 7c01aad..a469a1e 100644
Binary files a/src/assets/docs/EngineUsage/scala-run.png and b/src/assets/docs/EngineUsage/scala-run.png differ
diff --git a/src/assets/docs/EngineUsage/shell-run.png b/src/assets/docs/EngineUsage/shell-run.png
index 734bdb2..de28817 100644
Binary files a/src/assets/docs/EngineUsage/shell-run.png and b/src/assets/docs/EngineUsage/shell-run.png differ
diff --git a/src/assets/docs/EngineUsage/spark-conf.png b/src/assets/docs/EngineUsage/spark-conf.png
index 353dbd6..a0a07d0 100644
Binary files a/src/assets/docs/EngineUsage/spark-conf.png and b/src/assets/docs/EngineUsage/spark-conf.png differ
diff --git a/src/assets/docs/EngineUsage/sparksql-run.png b/src/assets/docs/EngineUsage/sparksql-run.png
index f0b1d1b..41f8ff3 100644
Binary files a/src/assets/docs/EngineUsage/sparksql-run.png and b/src/assets/docs/EngineUsage/sparksql-run.png differ
diff --git a/src/assets/docs/EngineUsage/workflow.png b/src/assets/docs/EngineUsage/workflow.png
index 3a5919f..cdfce19 100644
Binary files a/src/assets/docs/EngineUsage/workflow.png and b/src/assets/docs/EngineUsage/workflow.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/Q&A.png b/src/assets/docs/Tuning_and_Troubleshooting/Q&A.png
index 121d7f3..7ea8fb8 100644
Binary files a/src/assets/docs/Tuning_and_Troubleshooting/Q&A.png and b/src/assets/docs/Tuning_and_Troubleshooting/Q&A.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/code-fix-01.png b/src/assets/docs/Tuning_and_Troubleshooting/code-fix-01.png
index 27bdddb..394ef17 100644
Binary files a/src/assets/docs/Tuning_and_Troubleshooting/code-fix-01.png and b/src/assets/docs/Tuning_and_Troubleshooting/code-fix-01.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/db-config-01.png b/src/assets/docs/Tuning_and_Troubleshooting/db-config-01.png
index fa1f1c8..37fbc8c 100644
Binary files a/src/assets/docs/Tuning_and_Troubleshooting/db-config-01.png and b/src/assets/docs/Tuning_and_Troubleshooting/db-config-01.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/db-config-02.png b/src/assets/docs/Tuning_and_Troubleshooting/db-config-02.png
index c2f8443..93dd5c4 100644
Binary files a/src/assets/docs/Tuning_and_Troubleshooting/db-config-02.png and b/src/assets/docs/Tuning_and_Troubleshooting/db-config-02.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/debug-01.png b/src/assets/docs/Tuning_and_Troubleshooting/debug-01.png
index 9834b3d..beba6ae 100644
Binary files a/src/assets/docs/Tuning_and_Troubleshooting/debug-01.png and b/src/assets/docs/Tuning_and_Troubleshooting/debug-01.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/debug-02.png b/src/assets/docs/Tuning_and_Troubleshooting/debug-02.png
index c7621b5..48feb77 100644
Binary files a/src/assets/docs/Tuning_and_Troubleshooting/debug-02.png and b/src/assets/docs/Tuning_and_Troubleshooting/debug-02.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/debug-03.png b/src/assets/docs/Tuning_and_Troubleshooting/debug-03.png
index 16788c3..fb98ad3 100644
Binary files a/src/assets/docs/Tuning_and_Troubleshooting/debug-03.png and b/src/assets/docs/Tuning_and_Troubleshooting/debug-03.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/debug-04.png b/src/assets/docs/Tuning_and_Troubleshooting/debug-04.png
index cb944ee..247e871 100644
Binary files a/src/assets/docs/Tuning_and_Troubleshooting/debug-04.png and b/src/assets/docs/Tuning_and_Troubleshooting/debug-04.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/debug-05.png b/src/assets/docs/Tuning_and_Troubleshooting/debug-05.png
index 2c5972c..890d522 100644
Binary files a/src/assets/docs/Tuning_and_Troubleshooting/debug-05.png and b/src/assets/docs/Tuning_and_Troubleshooting/debug-05.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/debug-06.png b/src/assets/docs/Tuning_and_Troubleshooting/debug-06.png
index a64cec6..8480246 100644
Binary files a/src/assets/docs/Tuning_and_Troubleshooting/debug-06.png and b/src/assets/docs/Tuning_and_Troubleshooting/debug-06.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/debug-07.png b/src/assets/docs/Tuning_and_Troubleshooting/debug-07.png
index 935d5bc..2d020a8 100644
Binary files a/src/assets/docs/Tuning_and_Troubleshooting/debug-07.png and b/src/assets/docs/Tuning_and_Troubleshooting/debug-07.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/debug-08.png b/src/assets/docs/Tuning_and_Troubleshooting/debug-08.png
index d2a3328..a7bfd75 100644
Binary files a/src/assets/docs/Tuning_and_Troubleshooting/debug-08.png and b/src/assets/docs/Tuning_and_Troubleshooting/debug-08.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/hive-config-01.png b/src/assets/docs/Tuning_and_Troubleshooting/hive-config-01.png
index 6bd0edb..56d833f 100644
Binary files a/src/assets/docs/Tuning_and_Troubleshooting/hive-config-01.png and b/src/assets/docs/Tuning_and_Troubleshooting/hive-config-01.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-01.png b/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-01.png
index 01090d1..86e22ed 100644
Binary files a/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-01.png and b/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-01.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-02.png b/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-02.png
index 0f68f12..d0ec7de 100644
Binary files a/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-02.png and b/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-02.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-03.png b/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-03.png
index 8fb4464..f85c17c 100644
Binary files a/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-03.png and b/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-03.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-04.png b/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-04.png
index 5635a20..975936c 100644
Binary files a/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-04.png and b/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-04.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-05.png b/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-05.png
index c341a9d..bb19a10 100644
Binary files a/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-05.png and b/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-05.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-06.png b/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-06.png
index b0624ef..1b3bc0c 100644
Binary files a/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-06.png and b/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-06.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-07.png b/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-07.png
index 402f0c9..7070118 100644
Binary files a/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-07.png and b/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-07.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-08.png b/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-08.png
index 27c1824..2f8d5ec 100644
Binary files a/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-08.png and b/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-08.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-09.png b/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-09.png
index 5b27b4b..11f1e24 100644
Binary files a/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-09.png and b/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-09.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-10.png b/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-10.png
index 7c361e7..fe8e4ec 100644
Binary files a/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-10.png and b/src/assets/docs/Tuning_and_Troubleshooting/linkis-exception-10.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/page-show-01.png b/src/assets/docs/Tuning_and_Troubleshooting/page-show-01.png
index d953cb6..22c3a3e 100644
Binary files a/src/assets/docs/Tuning_and_Troubleshooting/page-show-01.png and b/src/assets/docs/Tuning_and_Troubleshooting/page-show-01.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/page-show-02.png b/src/assets/docs/Tuning_and_Troubleshooting/page-show-02.png
index af273bb..5330dfe 100644
Binary files a/src/assets/docs/Tuning_and_Troubleshooting/page-show-02.png and b/src/assets/docs/Tuning_and_Troubleshooting/page-show-02.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/page-show-03.png b/src/assets/docs/Tuning_and_Troubleshooting/page-show-03.png
index c36bb30..7d6e11d 100644
Binary files a/src/assets/docs/Tuning_and_Troubleshooting/page-show-03.png and b/src/assets/docs/Tuning_and_Troubleshooting/page-show-03.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/searching_keywords.png b/src/assets/docs/Tuning_and_Troubleshooting/searching_keywords.png
index cada716..f578266 100644
Binary files a/src/assets/docs/Tuning_and_Troubleshooting/searching_keywords.png and b/src/assets/docs/Tuning_and_Troubleshooting/searching_keywords.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/shell-error-01.png b/src/assets/docs/Tuning_and_Troubleshooting/shell-error-01.png
index 910150e..bbc402f 100644
Binary files a/src/assets/docs/Tuning_and_Troubleshooting/shell-error-01.png and b/src/assets/docs/Tuning_and_Troubleshooting/shell-error-01.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/shell-error-02.png b/src/assets/docs/Tuning_and_Troubleshooting/shell-error-02.png
index 71d5e7e..379f016 100644
Binary files a/src/assets/docs/Tuning_and_Troubleshooting/shell-error-02.png and b/src/assets/docs/Tuning_and_Troubleshooting/shell-error-02.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/shell-error-03.png b/src/assets/docs/Tuning_and_Troubleshooting/shell-error-03.png
index 4bb9cfe..82fd1d6 100644
Binary files a/src/assets/docs/Tuning_and_Troubleshooting/shell-error-03.png and b/src/assets/docs/Tuning_and_Troubleshooting/shell-error-03.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/shell-error-04.png b/src/assets/docs/Tuning_and_Troubleshooting/shell-error-04.png
index c2df857..d5b21b7 100644
Binary files a/src/assets/docs/Tuning_and_Troubleshooting/shell-error-04.png and b/src/assets/docs/Tuning_and_Troubleshooting/shell-error-04.png differ
diff --git a/src/assets/docs/Tuning_and_Troubleshooting/shell-error-05.png b/src/assets/docs/Tuning_and_Troubleshooting/shell-error-05.png
index 3635584..59b82b6 100644
Binary files a/src/assets/docs/Tuning_and_Troubleshooting/shell-error-05.png and b/src/assets/docs/Tuning_and_Troubleshooting/shell-error-05.png differ
diff --git a/src/assets/docs/Tunning_And_Troubleshooting/debug-01.png b/src/assets/docs/Tunning_And_Troubleshooting/debug-01.png
index 9834b3d..beba6ae 100644
Binary files a/src/assets/docs/Tunning_And_Troubleshooting/debug-01.png and b/src/assets/docs/Tunning_And_Troubleshooting/debug-01.png differ
diff --git a/src/assets/docs/Tunning_And_Troubleshooting/debug-02.png b/src/assets/docs/Tunning_And_Troubleshooting/debug-02.png
index c7621b5..48feb77 100644
Binary files a/src/assets/docs/Tunning_And_Troubleshooting/debug-02.png and b/src/assets/docs/Tunning_And_Troubleshooting/debug-02.png differ
diff --git a/src/assets/docs/Tunning_And_Troubleshooting/debug-03.png b/src/assets/docs/Tunning_And_Troubleshooting/debug-03.png
index 16788c3..fb98ad3 100644
Binary files a/src/assets/docs/Tunning_And_Troubleshooting/debug-03.png and b/src/assets/docs/Tunning_And_Troubleshooting/debug-03.png differ
diff --git a/src/assets/docs/Tunning_And_Troubleshooting/debug-04.png b/src/assets/docs/Tunning_And_Troubleshooting/debug-04.png
index cb944ee..247e871 100644
Binary files a/src/assets/docs/Tunning_And_Troubleshooting/debug-04.png and b/src/assets/docs/Tunning_And_Troubleshooting/debug-04.png differ
diff --git a/src/assets/docs/Tunning_And_Troubleshooting/debug-05.png b/src/assets/docs/Tunning_And_Troubleshooting/debug-05.png
index 2c5972c..890d522 100644
Binary files a/src/assets/docs/Tunning_And_Troubleshooting/debug-05.png and b/src/assets/docs/Tunning_And_Troubleshooting/debug-05.png differ
diff --git a/src/assets/docs/Tunning_And_Troubleshooting/debug-06.png b/src/assets/docs/Tunning_And_Troubleshooting/debug-06.png
index a64cec6..8480246 100644
Binary files a/src/assets/docs/Tunning_And_Troubleshooting/debug-06.png and b/src/assets/docs/Tunning_And_Troubleshooting/debug-06.png differ
diff --git a/src/assets/docs/Tunning_And_Troubleshooting/debug-07.png b/src/assets/docs/Tunning_And_Troubleshooting/debug-07.png
index 935d5bc..2d020a8 100644
Binary files a/src/assets/docs/Tunning_And_Troubleshooting/debug-07.png and b/src/assets/docs/Tunning_And_Troubleshooting/debug-07.png differ
diff --git a/src/assets/docs/Tunning_And_Troubleshooting/debug-08.png b/src/assets/docs/Tunning_And_Troubleshooting/debug-08.png
index d2a3328..a7bfd75 100644
Binary files a/src/assets/docs/Tunning_And_Troubleshooting/debug-08.png and b/src/assets/docs/Tunning_And_Troubleshooting/debug-08.png differ
diff --git a/src/assets/docs/architecture/Gateway/gateway_server_dispatcher.png b/src/assets/docs/architecture/Gateway/gateway_server_dispatcher.png
index 9cdc918..8d182f3 100644
Binary files a/src/assets/docs/architecture/Gateway/gateway_server_dispatcher.png and b/src/assets/docs/architecture/Gateway/gateway_server_dispatcher.png differ
diff --git a/src/assets/docs/architecture/Gateway/gateway_server_global.png b/src/assets/docs/architecture/Gateway/gateway_server_global.png
index 584574e..f0f468a 100644
Binary files a/src/assets/docs/architecture/Gateway/gateway_server_global.png and b/src/assets/docs/architecture/Gateway/gateway_server_global.png differ
diff --git a/src/assets/docs/architecture/Gateway/gatway_websocket.png b/src/assets/docs/architecture/Gateway/gatway_websocket.png
index fcac318..0144416 100644
Binary files a/src/assets/docs/architecture/Gateway/gatway_websocket.png and b/src/assets/docs/architecture/Gateway/gatway_websocket.png differ
diff --git a/src/assets/docs/architecture/JobSubmission/execution.png b/src/assets/docs/architecture/JobSubmission/execution.png
index 1abc43b..e0c873c 100644
Binary files a/src/assets/docs/architecture/JobSubmission/execution.png and b/src/assets/docs/architecture/JobSubmission/execution.png differ
diff --git a/src/assets/docs/architecture/JobSubmission/orchestrate.png b/src/assets/docs/architecture/JobSubmission/orchestrate.png
index 9de0a5d..3d0cc64 100644
Binary files a/src/assets/docs/architecture/JobSubmission/orchestrate.png and b/src/assets/docs/architecture/JobSubmission/orchestrate.png differ
diff --git a/src/assets/docs/architecture/JobSubmission/overall.png b/src/assets/docs/architecture/JobSubmission/overall.png
index 68b5e19..537c757 100644
Binary files a/src/assets/docs/architecture/JobSubmission/overall.png and b/src/assets/docs/architecture/JobSubmission/overall.png differ
diff --git a/src/assets/docs/architecture/JobSubmission/physical_tree.png b/src/assets/docs/architecture/JobSubmission/physical_tree.png
index 7998704..0201242 100644
Binary files a/src/assets/docs/architecture/JobSubmission/physical_tree.png and b/src/assets/docs/architecture/JobSubmission/physical_tree.png differ
diff --git a/src/assets/docs/architecture/JobSubmission/result_acquisition.png b/src/assets/docs/architecture/JobSubmission/result_acquisition.png
index c2dd9f3..5addf16 100644
Binary files a/src/assets/docs/architecture/JobSubmission/result_acquisition.png and b/src/assets/docs/architecture/JobSubmission/result_acquisition.png differ
diff --git a/src/assets/docs/architecture/JobSubmission/submission.png b/src/assets/docs/architecture/JobSubmission/submission.png
index f6bd9a9..87260c8 100644
Binary files a/src/assets/docs/architecture/JobSubmission/submission.png and b/src/assets/docs/architecture/JobSubmission/submission.png differ
diff --git a/src/assets/docs/architecture/Linkis0.X_newengine_architecture.png b/src/assets/docs/architecture/Linkis0.X_newengine_architecture.png
index 57c83b3..9605613 100644
Binary files a/src/assets/docs/architecture/Linkis0.X_newengine_architecture.png and b/src/assets/docs/architecture/Linkis0.X_newengine_architecture.png differ
diff --git a/src/assets/docs/architecture/Linkis0.X_services_list.png b/src/assets/docs/architecture/Linkis0.X_services_list.png
index c669abf..fc013e3 100644
Binary files a/src/assets/docs/architecture/Linkis0.X_services_list.png and b/src/assets/docs/architecture/Linkis0.X_services_list.png differ
diff --git a/src/assets/docs/architecture/Linkis1.0_architecture.png b/src/assets/docs/architecture/Linkis1.0_architecture.png
index 825672b..497e8fe 100644
Binary files a/src/assets/docs/architecture/Linkis1.0_architecture.png and b/src/assets/docs/architecture/Linkis1.0_architecture.png differ
diff --git a/src/assets/docs/architecture/Linkis1.0_engineconn_architecture.png b/src/assets/docs/architecture/Linkis1.0_engineconn_architecture.png
index d95da89..1e394fd 100644
Binary files a/src/assets/docs/architecture/Linkis1.0_engineconn_architecture.png and b/src/assets/docs/architecture/Linkis1.0_engineconn_architecture.png differ
diff --git a/src/assets/docs/architecture/Linkis1.0_newengine_architecture.png b/src/assets/docs/architecture/Linkis1.0_newengine_architecture.png
index b1d60bf..3e06513 100644
Binary files a/src/assets/docs/architecture/Linkis1.0_newengine_architecture.png and b/src/assets/docs/architecture/Linkis1.0_newengine_architecture.png differ
diff --git a/src/assets/docs/architecture/Linkis1.0_newengine_initialization.png b/src/assets/docs/architecture/Linkis1.0_newengine_initialization.png
index 003b38e..e2abe9b 100644
Binary files a/src/assets/docs/architecture/Linkis1.0_newengine_initialization.png and b/src/assets/docs/architecture/Linkis1.0_newengine_initialization.png differ
diff --git a/src/assets/docs/architecture/Linkis1.0_services_list.png b/src/assets/docs/architecture/Linkis1.0_services_list.png
index f768545..347b904 100644
Binary files a/src/assets/docs/architecture/Linkis1.0_services_list.png and b/src/assets/docs/architecture/Linkis1.0_services_list.png differ
diff --git a/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-01.png b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-01.png
index f61c49a..22e0071 100644
Binary files a/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-01.png and b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-01.png differ
diff --git a/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-02.png b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-02.png
index a2e1022..b47f337 100644
Binary files a/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-02.png and b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-02.png differ
diff --git a/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-01.png b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-01.png
index 5f4272f..1331f47 100644
Binary files a/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-01.png and b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-01.png differ
diff --git a/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-02.png b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-02.png
index 9bb177a..482b185 100644
Binary files a/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-02.png and b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-02.png differ
diff --git a/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-03.png b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-03.png
index 00d1f4a..4780029 100644
Binary files a/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-03.png and b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-03.png differ
diff --git a/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-04.png b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-04.png
index 439c8e2..37de4c7 100644
Binary files a/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-04.png and b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-04.png differ
diff --git a/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-05.png b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-05.png
index 081d514..ede68d3 100644
Binary files a/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-05.png and b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-05.png differ
diff --git a/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-01.png b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-01.png
index e343579..2f52383 100644
Binary files a/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-01.png and b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-01.png differ
diff --git a/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-02.png b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-02.png
index 012eb65..414ac14 100644
Binary files a/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-02.png and b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-02.png differ
diff --git a/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-03.png b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-03.png
index c3a43b9..248b999 100644
Binary files a/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-03.png and b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-03.png differ
diff --git a/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-01.png b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-01.png
index 719599a..919ab79 100644
Binary files a/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-01.png and b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-01.png differ
diff --git a/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-02.png b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-02.png
index 2277a70..f127472 100644
Binary files a/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-02.png and b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-02.png differ
diff --git a/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-03.png b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-03.png
index df58d96..8fe4604 100644
Binary files a/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-03.png and b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-03.png differ
diff --git a/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-04.png b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-04.png
index 1e13445..c9d061f 100644
Binary files a/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-04.png and b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-04.png differ
diff --git a/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-01.png b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-01.png
index 7e410fb..7eb2a02 100644
Binary files a/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-01.png and b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-01.png differ
diff --git a/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-02.png b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-02.png
index 097b7f1..c0e74f7 100644
Binary files a/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-02.png and b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-02.png differ
diff --git a/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-03.png b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-03.png
index 7a4d462..673ba1e 100644
Binary files a/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-03.png and b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-03.png differ
diff --git a/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-persistence-01.png b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-persistence-01.png
index fdd6623..98a8912 100644
Binary files a/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-persistence-01.png and b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-persistence-01.png differ
diff --git a/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-01.png b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-01.png
index b366462..c1cb3c0 100644
Binary files a/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-01.png and b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-01.png differ
diff --git a/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-02.png b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-02.png
index 2a1e403..4abc393 100644
Binary files a/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-02.png and b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-02.png differ
diff --git a/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-03.png b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-03.png
index 32336eb..69ede57 100644
Binary files a/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-03.png and b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-03.png differ
diff --git a/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-04.png b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-04.png
index fdb60fc..4c27442 100644
Binary files a/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-04.png and b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-04.png differ
diff --git a/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-05.png b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-05.png
index 45dcc43..b9a150a 100644
Binary files a/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-05.png and b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-05.png differ
diff --git a/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-06.png b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-06.png
index 2175704..c53f347 100644
Binary files a/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-06.png and b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-06.png differ
diff --git a/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-07.png b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-07.png
index 9d357af..4340ebb 100644
Binary files a/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-07.png and b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-07.png differ
diff --git a/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-01.png b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-01.png
index b08efd3..925b20d 100644
Binary files a/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-01.png and b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-01.png differ
diff --git a/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-02.png b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-02.png
index 13ca37e..0caafcd 100644
Binary files a/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-02.png and b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-02.png differ
diff --git a/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-03.png b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-03.png
index 36a4d96..c5c7389 100644
Binary files a/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-03.png and b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-03.png differ
diff --git a/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-04.png b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-04.png
index 0a5ae1d..309a266 100644
Binary files a/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-04.png and b/src/assets/docs/architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-04.png differ
diff --git a/src/assets/docs/architecture/linkis_engineconnplugin_01.png b/src/assets/docs/architecture/linkis_engineconnplugin_01.png
index 2d2d134..91ce86e 100644
Binary files a/src/assets/docs/architecture/linkis_engineconnplugin_01.png and b/src/assets/docs/architecture/linkis_engineconnplugin_01.png differ
diff --git a/src/assets/docs/architecture/linkis_intro_01.png b/src/assets/docs/architecture/linkis_intro_01.png
index 60b575d..7bdaf4a 100644
Binary files a/src/assets/docs/architecture/linkis_intro_01.png and b/src/assets/docs/architecture/linkis_intro_01.png differ
diff --git a/src/assets/docs/architecture/linkis_intro_02.png b/src/assets/docs/architecture/linkis_intro_02.png
index a31e681..97fe84b 100644
Binary files a/src/assets/docs/architecture/linkis_intro_02.png and b/src/assets/docs/architecture/linkis_intro_02.png differ
diff --git a/src/assets/docs/architecture/linkis_microservice_gov_01.png b/src/assets/docs/architecture/linkis_microservice_gov_01.png
index ac46424..0287117 100644
Binary files a/src/assets/docs/architecture/linkis_microservice_gov_01.png and b/src/assets/docs/architecture/linkis_microservice_gov_01.png differ
diff --git a/src/assets/docs/architecture/linkis_microservice_gov_03.png b/src/assets/docs/architecture/linkis_microservice_gov_03.png
index b53c8e1..8a2763f 100644
Binary files a/src/assets/docs/architecture/linkis_microservice_gov_03.png and b/src/assets/docs/architecture/linkis_microservice_gov_03.png differ
diff --git a/src/assets/docs/architecture/linkis_publicservice_01.png b/src/assets/docs/architecture/linkis_publicservice_01.png
index d503573..befd7de 100644
Binary files a/src/assets/docs/architecture/linkis_publicservice_01.png and b/src/assets/docs/architecture/linkis_publicservice_01.png differ
diff --git a/src/assets/docs/architecture/publicenhencement_architecture.png b/src/assets/docs/architecture/publicenhencement_architecture.png
index bcf72a5..35f4a6c 100644
Binary files a/src/assets/docs/architecture/publicenhencement_architecture.png and b/src/assets/docs/architecture/publicenhencement_architecture.png differ
diff --git a/src/assets/docs/deploy/Linkis1.0_combined_eureka.png b/src/assets/docs/deploy/Linkis1.0_combined_eureka.png
index 809dbee..d8fc7bc 100644
Binary files a/src/assets/docs/deploy/Linkis1.0_combined_eureka.png and b/src/assets/docs/deploy/Linkis1.0_combined_eureka.png differ
diff --git a/src/assets/docs/deploy/distributed_deployment.png b/src/assets/docs/deploy/distributed_deployment.png
index 8cd86c5..f9eecc0 100644
Binary files a/src/assets/docs/deploy/distributed_deployment.png and b/src/assets/docs/deploy/distributed_deployment.png differ
diff --git a/src/assets/docs/deployment/Linkis1.0_combined_eureka.png b/src/assets/docs/deployment/Linkis1.0_combined_eureka.png
index 809dbee..d8fc7bc 100644
Binary files a/src/assets/docs/deployment/Linkis1.0_combined_eureka.png and b/src/assets/docs/deployment/Linkis1.0_combined_eureka.png differ
diff --git a/src/assets/docs/manual/ECM_all_engine_information.png b/src/assets/docs/manual/ECM_all_engine_information.png
index a182e84..3516627 100644
Binary files a/src/assets/docs/manual/ECM_all_engine_information.png and b/src/assets/docs/manual/ECM_all_engine_information.png differ
diff --git a/src/assets/docs/manual/ECM_editing_interface.png b/src/assets/docs/manual/ECM_editing_interface.png
index e611e3e..dd10e29 100644
Binary files a/src/assets/docs/manual/ECM_editing_interface.png and b/src/assets/docs/manual/ECM_editing_interface.png differ
diff --git a/src/assets/docs/manual/ECM_management_interface.png b/src/assets/docs/manual/ECM_management_interface.png
index 4764732..9bb1b52 100644
Binary files a/src/assets/docs/manual/ECM_management_interface.png and b/src/assets/docs/manual/ECM_management_interface.png differ
diff --git a/src/assets/docs/manual/administrator_view.png b/src/assets/docs/manual/administrator_view.png
index f5b7041..777f001 100644
Binary files a/src/assets/docs/manual/administrator_view.png and b/src/assets/docs/manual/administrator_view.png differ
diff --git a/src/assets/docs/manual/click_the_instance_name_to_view_engine_information.png b/src/assets/docs/manual/click_the_instance_name_to_view_engine_information.png
index 2ecd27c..d9fbff2 100644
Binary files a/src/assets/docs/manual/click_the_instance_name_to_view_engine_information.png and b/src/assets/docs/manual/click_the_instance_name_to_view_engine_information.png differ
diff --git a/src/assets/docs/manual/edit_directory.png b/src/assets/docs/manual/edit_directory.png
index 7a30e3e..b52f1d9 100644
Binary files a/src/assets/docs/manual/edit_directory.png and b/src/assets/docs/manual/edit_directory.png differ
diff --git a/src/assets/docs/manual/eureka_registration_center.png b/src/assets/docs/manual/eureka_registration_center.png
index 9585c20..4636052 100644
Binary files a/src/assets/docs/manual/eureka_registration_center.png and b/src/assets/docs/manual/eureka_registration_center.png differ
diff --git a/src/assets/docs/manual/global_history_interface.png b/src/assets/docs/manual/global_history_interface.png
index 59eee9b..593eb93 100644
Binary files a/src/assets/docs/manual/global_history_interface.png and b/src/assets/docs/manual/global_history_interface.png differ
diff --git a/src/assets/docs/manual/global_history_query_button.png b/src/assets/docs/manual/global_history_query_button.png
index eec31de..da82bce 100644
Binary files a/src/assets/docs/manual/global_history_query_button.png and b/src/assets/docs/manual/global_history_query_button.png differ
diff --git a/src/assets/docs/manual/global_variable_interface.png b/src/assets/docs/manual/global_variable_interface.png
index 89b1cf2..e651dcf 100644
Binary files a/src/assets/docs/manual/global_variable_interface.png and b/src/assets/docs/manual/global_variable_interface.png differ
diff --git a/src/assets/docs/manual/microservice_management_interface.png b/src/assets/docs/manual/microservice_management_interface.png
index 593edb4..9007f6b 100644
Binary files a/src/assets/docs/manual/microservice_management_interface.png and b/src/assets/docs/manual/microservice_management_interface.png differ
diff --git a/src/assets/docs/manual/new_application_type.png b/src/assets/docs/manual/new_application_type.png
index f260c3d..d0a9c39 100644
Binary files a/src/assets/docs/manual/new_application_type.png and b/src/assets/docs/manual/new_application_type.png differ
diff --git a/src/assets/docs/manual/parameter_configuration_interface.png b/src/assets/docs/manual/parameter_configuration_interface.png
index deadf64..26881ff 100644
Binary files a/src/assets/docs/manual/parameter_configuration_interface.png and b/src/assets/docs/manual/parameter_configuration_interface.png differ
diff --git a/src/assets/docs/manual/queue_set.png b/src/assets/docs/manual/queue_set.png
index 735a670..b7aa53f 100644
Binary files a/src/assets/docs/manual/queue_set.png and b/src/assets/docs/manual/queue_set.png differ
diff --git a/src/assets/docs/manual/resource_management_interface.png b/src/assets/docs/manual/resource_management_interface.png
index 918bd08..d2c95b9 100644
Binary files a/src/assets/docs/manual/resource_management_interface.png and b/src/assets/docs/manual/resource_management_interface.png differ
diff --git a/src/assets/docs/manual/sparksql_run.png b/src/assets/docs/manual/sparksql_run.png
index f0b1d1b..41f8ff3 100644
Binary files a/src/assets/docs/manual/sparksql_run.png and b/src/assets/docs/manual/sparksql_run.png differ
diff --git a/src/assets/docs/manual/task_execution_log_of_a_single_task.png b/src/assets/docs/manual/task_execution_log_of_a_single_task.png
index ff0ed86..ce19bf0 100644
Binary files a/src/assets/docs/manual/task_execution_log_of_a_single_task.png and b/src/assets/docs/manual/task_execution_log_of_a_single_task.png differ
diff --git a/src/assets/docs/manual/workflow.png b/src/assets/docs/manual/workflow.png
index 3a5919f..cdfce19 100644
Binary files a/src/assets/docs/manual/workflow.png and b/src/assets/docs/manual/workflow.png differ
diff --git a/src/assets/fqa/Q&A.png b/src/assets/fqa/Q&A.png
index 121d7f3..7ea8fb8 100644
Binary files a/src/assets/fqa/Q&A.png and b/src/assets/fqa/Q&A.png differ
diff --git a/src/assets/fqa/code-fix-01.png b/src/assets/fqa/code-fix-01.png
index 27bdddb..394ef17 100644
Binary files a/src/assets/fqa/code-fix-01.png and b/src/assets/fqa/code-fix-01.png differ
diff --git a/src/assets/fqa/db-config-01.png b/src/assets/fqa/db-config-01.png
index fa1f1c8..37fbc8c 100644
Binary files a/src/assets/fqa/db-config-01.png and b/src/assets/fqa/db-config-01.png differ
diff --git a/src/assets/fqa/db-config-02.png b/src/assets/fqa/db-config-02.png
index c2f8443..93dd5c4 100644
Binary files a/src/assets/fqa/db-config-02.png and b/src/assets/fqa/db-config-02.png differ
diff --git a/src/assets/fqa/debug-01.png b/src/assets/fqa/debug-01.png
index 9834b3d..beba6ae 100644
Binary files a/src/assets/fqa/debug-01.png and b/src/assets/fqa/debug-01.png differ
diff --git a/src/assets/fqa/debug-02.png b/src/assets/fqa/debug-02.png
index c7621b5..48feb77 100644
Binary files a/src/assets/fqa/debug-02.png and b/src/assets/fqa/debug-02.png differ
diff --git a/src/assets/fqa/debug-03.png b/src/assets/fqa/debug-03.png
index 16788c3..fb98ad3 100644
Binary files a/src/assets/fqa/debug-03.png and b/src/assets/fqa/debug-03.png differ
diff --git a/src/assets/fqa/debug-04.png b/src/assets/fqa/debug-04.png
index cb944ee..247e871 100644
Binary files a/src/assets/fqa/debug-04.png and b/src/assets/fqa/debug-04.png differ
diff --git a/src/assets/fqa/debug-05.png b/src/assets/fqa/debug-05.png
index 2c5972c..890d522 100644
Binary files a/src/assets/fqa/debug-05.png and b/src/assets/fqa/debug-05.png differ
diff --git a/src/assets/fqa/debug-06.png b/src/assets/fqa/debug-06.png
index a64cec6..8480246 100644
Binary files a/src/assets/fqa/debug-06.png and b/src/assets/fqa/debug-06.png differ
diff --git a/src/assets/fqa/debug-07.png b/src/assets/fqa/debug-07.png
index 935d5bc..2d020a8 100644
Binary files a/src/assets/fqa/debug-07.png and b/src/assets/fqa/debug-07.png differ
diff --git a/src/assets/fqa/debug-08.png b/src/assets/fqa/debug-08.png
index d2a3328..a7bfd75 100644
Binary files a/src/assets/fqa/debug-08.png and b/src/assets/fqa/debug-08.png differ
diff --git a/src/assets/fqa/hive-config-01.png b/src/assets/fqa/hive-config-01.png
index 6bd0edb..56d833f 100644
Binary files a/src/assets/fqa/hive-config-01.png and b/src/assets/fqa/hive-config-01.png differ
diff --git a/src/assets/fqa/linkis-exception-01.png b/src/assets/fqa/linkis-exception-01.png
index 01090d1..86e22ed 100644
Binary files a/src/assets/fqa/linkis-exception-01.png and b/src/assets/fqa/linkis-exception-01.png differ
diff --git a/src/assets/fqa/linkis-exception-02.png b/src/assets/fqa/linkis-exception-02.png
index 0f68f12..d0ec7de 100644
Binary files a/src/assets/fqa/linkis-exception-02.png and b/src/assets/fqa/linkis-exception-02.png differ
diff --git a/src/assets/fqa/linkis-exception-03.png b/src/assets/fqa/linkis-exception-03.png
index 8fb4464..f85c17c 100644
Binary files a/src/assets/fqa/linkis-exception-03.png and b/src/assets/fqa/linkis-exception-03.png differ
diff --git a/src/assets/fqa/linkis-exception-04.png b/src/assets/fqa/linkis-exception-04.png
index 5635a20..975936c 100644
Binary files a/src/assets/fqa/linkis-exception-04.png and b/src/assets/fqa/linkis-exception-04.png differ
diff --git a/src/assets/fqa/linkis-exception-05.png b/src/assets/fqa/linkis-exception-05.png
index c341a9d..bb19a10 100644
Binary files a/src/assets/fqa/linkis-exception-05.png and b/src/assets/fqa/linkis-exception-05.png differ
diff --git a/src/assets/fqa/linkis-exception-06.png b/src/assets/fqa/linkis-exception-06.png
index b0624ef..1b3bc0c 100644
Binary files a/src/assets/fqa/linkis-exception-06.png and b/src/assets/fqa/linkis-exception-06.png differ
diff --git a/src/assets/fqa/linkis-exception-07.png b/src/assets/fqa/linkis-exception-07.png
index 402f0c9..7070118 100644
Binary files a/src/assets/fqa/linkis-exception-07.png and b/src/assets/fqa/linkis-exception-07.png differ
diff --git a/src/assets/fqa/linkis-exception-08.png b/src/assets/fqa/linkis-exception-08.png
index 27c1824..2f8d5ec 100644
Binary files a/src/assets/fqa/linkis-exception-08.png and b/src/assets/fqa/linkis-exception-08.png differ
diff --git a/src/assets/fqa/linkis-exception-09.png b/src/assets/fqa/linkis-exception-09.png
index 5b27b4b..11f1e24 100644
Binary files a/src/assets/fqa/linkis-exception-09.png and b/src/assets/fqa/linkis-exception-09.png differ
diff --git a/src/assets/fqa/linkis-exception-10.png b/src/assets/fqa/linkis-exception-10.png
index 7c361e7..fe8e4ec 100644
Binary files a/src/assets/fqa/linkis-exception-10.png and b/src/assets/fqa/linkis-exception-10.png differ
diff --git a/src/assets/fqa/page-show-01.png b/src/assets/fqa/page-show-01.png
index d953cb6..22c3a3e 100644
Binary files a/src/assets/fqa/page-show-01.png and b/src/assets/fqa/page-show-01.png differ
diff --git a/src/assets/fqa/page-show-02.png b/src/assets/fqa/page-show-02.png
index af273bb..5330dfe 100644
Binary files a/src/assets/fqa/page-show-02.png and b/src/assets/fqa/page-show-02.png differ
diff --git a/src/assets/fqa/page-show-03.png b/src/assets/fqa/page-show-03.png
index c36bb30..7d6e11d 100644
Binary files a/src/assets/fqa/page-show-03.png and b/src/assets/fqa/page-show-03.png differ
diff --git a/src/assets/fqa/searching_keywords.png b/src/assets/fqa/searching_keywords.png
index cada716..f578266 100644
Binary files a/src/assets/fqa/searching_keywords.png and b/src/assets/fqa/searching_keywords.png differ
diff --git a/src/assets/fqa/shell-error-01.png b/src/assets/fqa/shell-error-01.png
index 910150e..bbc402f 100644
Binary files a/src/assets/fqa/shell-error-01.png and b/src/assets/fqa/shell-error-01.png differ
diff --git a/src/assets/fqa/shell-error-02.png b/src/assets/fqa/shell-error-02.png
index 71d5e7e..379f016 100644
Binary files a/src/assets/fqa/shell-error-02.png and b/src/assets/fqa/shell-error-02.png differ
diff --git a/src/assets/fqa/shell-error-03.png b/src/assets/fqa/shell-error-03.png
index 4bb9cfe..82fd1d6 100644
Binary files a/src/assets/fqa/shell-error-03.png and b/src/assets/fqa/shell-error-03.png differ
diff --git a/src/assets/fqa/shell-error-04.png b/src/assets/fqa/shell-error-04.png
index c2df857..d5b21b7 100644
Binary files a/src/assets/fqa/shell-error-04.png and b/src/assets/fqa/shell-error-04.png differ
diff --git a/src/assets/fqa/shell-error-05.png b/src/assets/fqa/shell-error-05.png
index 3635584..59b82b6 100644
Binary files a/src/assets/fqa/shell-error-05.png and b/src/assets/fqa/shell-error-05.png differ
diff --git a/src/assets/home/after_linkis_en.png b/src/assets/home/after_linkis_en.png
index 6ff635d..1daacf8 100644
Binary files a/src/assets/home/after_linkis_en.png and b/src/assets/home/after_linkis_en.png differ
diff --git a/src/assets/home/after_linkis_zh.png b/src/assets/home/after_linkis_zh.png
index b94beab..b0dee26 100644
Binary files a/src/assets/home/after_linkis_zh.png and b/src/assets/home/after_linkis_zh.png differ
diff --git a/src/assets/home/before_linkis_en.png b/src/assets/home/before_linkis_en.png
index a2c40e1..7bdaf4a 100644
Binary files a/src/assets/home/before_linkis_en.png and b/src/assets/home/before_linkis_en.png differ
diff --git a/src/assets/home/before_linkis_zh.png b/src/assets/home/before_linkis_zh.png
index 914d38b..57400a8 100644
Binary files a/src/assets/home/before_linkis_zh.png and b/src/assets/home/before_linkis_zh.png differ
diff --git a/src/assets/home/description.png b/src/assets/home/description.png
index 8ea2b4c..f86c34b 100644
Binary files a/src/assets/home/description.png and b/src/assets/home/description.png differ

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@linkis.apache.org
For additional commands, e-mail: commits-help@linkis.apache.org


[incubator-linkis-website] 09/50: ADD: add the home-description module to the homepage

Posted by pe...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

peacewong pushed a commit to branch asf-staging
in repository https://gitbox.apache.org/repos/asf/incubator-linkis-website.git

commit e1cfcdaa3eaf1b3abda8a162d9b524e732aece66
Author: lucaszhu <lu...@webank.com>
AuthorDate: Thu Sep 30 15:06:07 2021 +0800

    ADD: add the home-description module to the homepage
---
 src/pages/home.vue | 37 ++++++++++++++++++++++++++++---------
 1 file changed, 28 insertions(+), 9 deletions(-)

diff --git a/src/pages/home.vue b/src/pages/home.vue
index 8fb9032..7fb2bdf 100644
--- a/src/pages/home.vue
+++ b/src/pages/home.vue
@@ -12,15 +12,23 @@
     <div class="concept home-block">
       <div class="concept-item">
         <h3 class="concept-title">Before</h3>
-        <p class="concept-desc">Each upper application directly connects to and accesses various underlying engines in a tightly coupled way, which makes big data platform a complex network architecture.</p>
+        <p class="home-paragraph">Each upper application directly connects to and accesses various underlying engines in a tightly coupled way, which makes big data platform a complex network architecture.</p>
         <!-- <img src="" alt="before" class="concept-image"> -->
       </div>
       <div class="concept-item">
         <h3 class="concept-title">After</h3>
-        <p class="concept-desc">Build a common layer of "computation middleware" between the numerous upper-layer applications and the countless underlying engines to resolve these complex connection problems in a standardized reusable way</p>
+        <p class="home-paragraph">Build a common layer of "computation middleware" between the numerous upper-layer applications and the countless underlying engines to resolve these complex connection problems in a standardized reusable way</p>
         <!-- <img src="" alt="after" class="concept-image"> -->
       </div>
     </div>
+    <div class="description home-block">
+      <div class="description-content">
+        <h1 class="home-block-title">Description</h1>
+        <p class="home-paragraph">Linkis provides standardized interfaces (REST, JDBC, WebSocket etc.) to easily connect to various underlying engines (Spark, Presto, Flink, etc.), and acts as a proxy between the upper applications layer and underlying engines layer. </p>
+        <p class="home-paragraph">Linkis is able to facilitate the connectivity, governance and orchestration capabilities of different kind of engines like OLAP, OLTP (developing), Streaming, and handle all these "computation governance" affairs in a standardized reusable way.</p>
+      </div>
+      <!-- <img src="" alt="description" class="description-image"> -->
+    </div>
     <h1 class="home-block-title text-center">Core Features</h1>
     <div class="features home-block">
       <div class="feature-item">
@@ -78,7 +86,25 @@
     }
     .home-block{
       padding: 20px 0 88px;
+      .home-paragraph{
+        font-size: 18px;
+        color: #4A4A4A;
+        line-height: 26px;
+        font-weight: 400;
+        margin-bottom: 16px;
+      }
+    }
+    .description{
+      display: flex;
+      align-items: center;
+      .description-content{
+        flex: 1;
+      }
+      .description-image{
+        margin-left: 40px;
+      }
     }
+
     .concept{
       display: grid;
       grid-template-columns: repeat(2, 1fr);
@@ -93,13 +119,6 @@
           margin-bottom: 16px;
           color: @enhance-color;
         }
-        .concept-desc{
-          font-size: 18px;
-          color: #4A4A4A;
-          line-height: 26px;
-          font-weight: 400;
-          margin-bottom: 16px;
-        }
         .concept-image{
           width: 100%;
         }

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@linkis.apache.org
For additional commands, e-mail: commits-help@linkis.apache.org


[incubator-linkis-website] 30/50: FIX DETAIL

Posted by pe...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

peacewong pushed a commit to branch asf-staging
in repository https://gitbox.apache.org/repos/asf/incubator-linkis-website.git

commit 78bd36ebb3092208f67e1d1c8f02e5b5beea6404
Author: lucaszhu <lu...@webank.com>
AuthorDate: Mon Oct 18 10:54:47 2021 +0800

    FIX DETAIL
---
 src/App.vue              |  7 +++++++
 src/pages/home/index.vue |  4 +++-
 src/pages/team/team.vue  | 12 ++++--------
 3 files changed, 14 insertions(+), 9 deletions(-)

diff --git a/src/App.vue b/src/App.vue
index f777ea6..36766e8 100644
--- a/src/App.vue
+++ b/src/App.vue
@@ -74,6 +74,7 @@
                         <a href="http://www.apache.org/foundation/thanks.html" class="links-item">{{$t('menu.links.thanks')}}</a>
                     </div>
                 </div>
+                <img src="./assets/image/incubator-logo.png" alt="incubator-logo" class="incubator-logo">
                 <p class="footer-desc">Apache Linkis (Incubating) is an effort undergoing incubation at The Apache
                     Software Foundation, sponsored by the Apache Incubator. Incubation is required of all newly accepted
                     projects until a further review indicates that the infrastructure, communications, and decision
@@ -202,6 +203,12 @@
         padding-top: 40px;
         background: #F9FAFB;
 
+        .incubator-logo {
+            height: 44px;
+            margin-left: 20px;
+            margin-bottom: 20px;
+        }
+
         .footer-desc {
             padding: 0 20px 30px;
             color: #999999;
diff --git a/src/pages/home/index.vue b/src/pages/home/index.vue
index 07205ba..a619193 100644
--- a/src/pages/home/index.vue
+++ b/src/pages/home/index.vue
@@ -145,11 +145,13 @@
       grid-row-gap: 20px;
       grid-column-gap: 20px;
       .case-item{
+        display: flex;
+        min-width: 0;
         height: 88px;
-        width:167px;
         background: #FFFFFF;
         box-shadow: 0 1px 20px 0 rgba(15,18,34,0.10);
         border-radius: 8px;
+        align-content: center
       }
     }
     .features{
diff --git a/src/pages/team/team.vue b/src/pages/team/team.vue
index 1dd51a1..eb53843 100644
--- a/src/pages/team/team.vue
+++ b/src/pages/team/team.vue
@@ -1,20 +1,16 @@
 <template>
   <div class="ctn-block team-page">
-    <p>
-      {{jsonData.info.desc}}
-    </p>
-
     <h3 class="team-title">PMC</h3>
-<!--    <p class="team-desc">Use the links below to download the Apache Linkis (Incubating) Releases. See all Linkis releases in Github release page.</p>-->
+    <p class="team-desc">{{jsonData.info.desc}}</p>
     <ul  class="character-list">
-      <li v-for="item in jsonData.list" class="character-item text-center">
+      <li v-for="(item,index) in jsonData.list" :key="index" class="character-item text-center">
         <img class="character-avatar" :src="item.avatarUrl" :alt="item.name"/>
         <div class="character-desc">
-          <h3 class="character-name"><a href="{{utils.concatStr('https://github.com/','',item.githubId)}}" class="character-name">{{item.name}}</a></h3>
+          <h3 class="character-name"><a :href="'https://github.com/'+ item.githubId" class="character-name" target="_blank">{{item.name}}</a></h3>
         </div>
       </li>
     </ul>
-    <p v-html="jsonData.info.tip"></p>
+    <p class="team-desc" v-html="jsonData.info.tip"></p>
     <!--   <h3 class="team-title">Contributors</h3>
      <p class="team-desc">Use the links below to download the Apache Linkis (Incubating) Releases. See all Linkis releases in Github release page.</p>
     ]<ul class="contributor-list">
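
A note on the team.vue change above: Vue does not expand mustache
interpolation inside HTML attributes, so the old href="{{...}}" shipped
the braces to the DOM as literal text. The fix binds the attribute with
":href" so the expression is evaluated, and adds a ":key" so Vue can
track each v-for row by a stable identity. A minimal standalone sketch
of the corrected pattern (component and data are hypothetical, not part
of this commit):

    <template>
      <ul>
        <!-- :key gives each row a stable identity; :href evaluates the
             expression instead of printing the braces literally -->
        <li v-for="(item, index) in list" :key="index">
          <a :href="'https://github.com/' + item.githubId" target="_blank">
            {{ item.name }}
          </a>
        </li>
      </ul>
    </template>
    <script setup>
    // hypothetical sample data, for illustration only
    const list = [{ name: 'Example User', githubId: 'example-user' }];
    </script>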

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@linkis.apache.org
For additional commands, e-mail: commits-help@linkis.apache.org


[incubator-linkis-website] 44/50: Update .asf.yaml

Posted by pe...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

peacewong pushed a commit to branch asf-staging
in repository https://gitbox.apache.org/repos/asf/incubator-linkis-website.git

commit c59e3c170eca2b6d2ea4f7ed7391ea1635c2e6da
Author: johnnywang <wp...@gmail.com>
AuthorDate: Tue Oct 26 19:13:36 2021 +0800

    Update .asf.yaml
    
    update for asf-site
---
 .asf.yaml | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/.asf.yaml b/.asf.yaml
index 9301abb..1656f9a 100644
--- a/.asf.yaml
+++ b/.asf.yaml
@@ -17,7 +17,7 @@
 
 github:
   description: Apache Linkis documents
-  homepage: https://linkis.staged.apache.org/
+  homepage: https://linkis.apache.org/
   labels:
     - linkis
     - website
@@ -25,4 +25,8 @@ github:
 # If this branch is asf-staging, it will be published to https://linkis.staged.apache.org/
 staging:
   profile: ~
-  whoami:  asf-staging
\ No newline at end of file
+  whoami:  asf-staging
+  
+# asf-site branch will show up at https://linkis.apache.org
+publish:
+  whoami:  asf-site 

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@linkis.apache.org
For additional commands, e-mail: commits-help@linkis.apache.org


[incubator-linkis-website] 21/50: add i18n for home

Posted by pe...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

peacewong pushed a commit to branch asf-staging
in repository https://gitbox.apache.org/repos/asf/incubator-linkis-website.git

commit 85f10f80cdf701037029484ebad1dd87c8cb11e9
Author: casionone <ca...@gmail.com>
AuthorDate: Tue Oct 12 20:24:52 2021 +0800

    add i18n for home
---
 src/App.vue                          |  34 ++++++-------
 src/assets/home/after_linkis_en.png  | Bin 0 -> 481170 bytes
 src/assets/home/after_linkis_zh.png  | Bin 0 -> 645519 bytes
 src/assets/home/before_linkis_en.png | Bin 0 -> 508718 bytes
 src/assets/home/before_linkis_zh.png | Bin 0 -> 332201 bytes
 src/assets/home/description.png      | Bin 0 -> 73910 bytes
 src/i18n/en.json                     |  26 +++++++++-
 src/i18n/zh.json                     |  25 +++++++++-
 src/pages/home.vue                   |  94 ++++++++++++++++++++++++++++++++---
 9 files changed, 154 insertions(+), 25 deletions(-)

diff --git a/src/App.vue b/src/App.vue
index 5a47fdd..6418c15 100644
--- a/src/App.vue
+++ b/src/App.vue
@@ -22,12 +22,12 @@ const switchLang = (lang) => {
       </div>
       <span class="nav-logo-badge">Incubating</span>
       <div class="menu-list">
-        <router-link class="menu-item" to="/"><span class="label">Home</span></router-link>
-        <router-link class="menu-item" to="/docs/deploy/linkis"><span class="label">Docs</span></router-link>
-        <router-link class="menu-item" to="/faq/index"><span class="label">FAQ</span></router-link>
-        <router-link class="menu-item" to="/download"><span class="label">Download</span></router-link>
-        <router-link class="menu-item" to="/blog"><span class="label">Blog</span></router-link>
-        <router-link class="menu-item" to="/team"><span class="label">Team</span></router-link>
+        <router-link class="menu-item" to="/"><span class="label">{{$t('menu.item.home')}}</span></router-link>
+        <router-link class="menu-item" to="/docs/deploy/linkis"><span class="label">{{$t('menu.item.docs')}}</span></router-link>
+        <router-link class="menu-item" to="/faq/index"><span class="label">{{$t('menu.item.faq')}}</span></router-link>
+        <router-link class="menu-item" to="/download"><span class="label">{{$t('menu.item.download')}}</span></router-link>
+        <router-link class="menu-item" to="/blog"><span class="label">{{$t('menu.item.blog')}}</span></router-link>
+        <router-link class="menu-item" to="/team"><span class="label">{{$t('menu.item.team')}}</span></router-link>
         <div class="menu-item language">
           Language
           <div class="dropdown-menu">
@@ -46,22 +46,22 @@ const switchLang = (lang) => {
       <div class="footer-links-row">
         <div class="footer-links">
           <h3 class="links-title">Linkis</h3>
-          <a href="" class="links-item">Documentation</a>
-          <a href="" class="links-item">Events</a>
-          <a href="" class="links-item">Releases</a>
+          <a href="" class="links-item">{{$t('menu.links.documentation')}}</a>
+          <a href="" class="links-item">{{$t('menu.links.events')}}</a>
+          <a href="" class="links-item">{{$t('menu.links.releases')}}</a>
         </div>
         <div class="footer-links">
-          <h3 class="links-title">Community</h3>
+          <h3 class="links-title">{{$t('menu.links.community')}}</h3>
           <a href="" class="links-item">GitHub</a>
-          <a href="" class="links-item">Issue Tracker</a>
-          <a href="" class="links-item">Pull Requests</a>
+          <a href="" class="links-item">{{$t('menu.links.issue_tracker')}}</a>
+          <a href="" class="links-item">{{$t('menu.links.pull_requests')}}</a>
         </div>
         <div class="footer-links">
-          <h3 class="links-title">Apache Software Foundation</h3>
-          <a href="" class="links-item">Foundation</a>
-          <a href="" class="links-item">License</a>
-          <a href="" class="links-item">Sponsorship</a>
-          <a href="" class="links-item">Thanks</a>
+          <h3 class="links-title">{{$t('menu.links.asf')}}</h3>
+          <a href="" class="links-item">{{$t('menu.links.foundation')}}</a>
+          <a href="" class="links-item">{{$t('menu.links.license')}}</a>
+          <a href="" class="links-item">{{$t('menu.links.sponsorship')}}</a>
+          <a href="" class="links-item">{{$t('menu.links.thanks')}}</a>
         </div>
       </div>
       <p class="footer-desc">Apache Linkis (Incubating) is an effort undergoing incubation at The Apache Software Foundation, sponsored by the Apache Incubator. Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful ASF projects. While incubation status is not necessarily a reflection of the completeness or stability of the code [...]
diff --git a/src/assets/home/after_linkis_en.png b/src/assets/home/after_linkis_en.png
new file mode 100644
index 0000000..6ff635d
Binary files /dev/null and b/src/assets/home/after_linkis_en.png differ
diff --git a/src/assets/home/after_linkis_zh.png b/src/assets/home/after_linkis_zh.png
new file mode 100644
index 0000000..b94beab
Binary files /dev/null and b/src/assets/home/after_linkis_zh.png differ
diff --git a/src/assets/home/before_linkis_en.png b/src/assets/home/before_linkis_en.png
new file mode 100644
index 0000000..a2c40e1
Binary files /dev/null and b/src/assets/home/before_linkis_en.png differ
diff --git a/src/assets/home/before_linkis_zh.png b/src/assets/home/before_linkis_zh.png
new file mode 100644
index 0000000..914d38b
Binary files /dev/null and b/src/assets/home/before_linkis_zh.png differ
diff --git a/src/assets/home/description.png b/src/assets/home/description.png
new file mode 100644
index 0000000..8ea2b4c
Binary files /dev/null and b/src/assets/home/description.png differ
diff --git a/src/i18n/en.json b/src/i18n/en.json
index 83add9c..479851b 100644
--- a/src/i18n/en.json
+++ b/src/i18n/en.json
@@ -6,5 +6,29 @@
         "slogan": "Decouple the upper applications and the underlying data engines by building a computation middleware layer."
       }
     }
+  },
+  "menu": {
+    "item":{
+      "home": "Home",
+      "docs":"Docs",
+      "faq": "FAQ",
+      "download":"Download",
+      "blog":"Blog",
+      "team":"Team"
+    },
+    "links":{
+      "documentation":"Documentation",
+      "events":"Events",
+      "releases":"Releases",
+      "community":"Community",
+      "issue_tracker":"Issue Tracker",
+      "pull_requests":"Pull Requests",
+      "asf":"Apache Software Foundation",
+      "foundation":"Foundation",
+      "license":"License",
+      "sponsorship":"Sponsorship",
+      "thanks":"Thanks"
+    }
   }
-}
\ No newline at end of file
+}
+
diff --git a/src/i18n/zh.json b/src/i18n/zh.json
index 844b8b7..178f663 100644
--- a/src/i18n/zh.json
+++ b/src/i18n/zh.json
@@ -3,8 +3,31 @@
     "common": {},
     "home": {
       "banner": {
-        "slogan": "中文的Decouple the upper applications and the underlying data engines by building a computation middleware layer."
+        "slogan": "通过构建计算中间件层来解耦上层应用程序和底层数据引擎"
       }
     }
+  },
+  "menu": {
+    "item":{
+      "home": "首页",
+      "docs":"文档",
+      "faq": "FAQ",
+      "download":"下载",
+      "blog":"博客",
+      "team":"团队"
+    },
+    "links":{
+      "documentation":"文档",
+      "events":"动态",
+      "releases":"版本",
+      "community":"社区",
+      "issue_tracker":"Issue追踪",
+      "pull_requests":"Pull Request",
+      "asf":"ASF",
+      "foundation":"基金会",
+      "license":"证书",
+      "sponsorship":"赞助",
+      "thanks":"致谢"
+    }
   }
 }
\ No newline at end of file
diff --git a/src/pages/home.vue b/src/pages/home.vue
index 311f685..91f2f46 100644
--- a/src/pages/home.vue
+++ b/src/pages/home.vue
@@ -1,5 +1,5 @@
 <template>
-  <div class="ctn-block home-page">
+  <div v-if="lang === 'en'" class="ctn-block home-page">
     <div class="banner text-center">
       <h1 class="home-title"><span class="apache">Apache</span> <span class="linkis">Linkis</span> <span class="badge">Incubating</span></h1>
       <p class="home-desc">{{$t('message.home.banner.slogan')}}</p>
@@ -13,12 +13,12 @@
       <div class="concept-item">
         <h3 class="concept-title">Before</h3>
         <p class="home-paragraph">Each upper application directly connects to and accesses various underlying engines in a tightly coupled way, which makes big data platform a complex network architecture.</p>
-        <!-- <img src="" alt="before" class="concept-image"> -->
+        <img src="../assets/home/before_linkis_en.png" alt="before" class="concept-image">
       </div>
       <div class="concept-item">
         <h3 class="concept-title">After</h3>
         <p class="home-paragraph">Build a common layer of "computation middleware" between the numerous upper-layer applications and the countless underlying engines to resolve these complex connection problems in a standardized reusable way</p>
-        <!-- <img src="" alt="after" class="concept-image"> -->
+        <img src="../assets/home/after_linkis_en.png" alt="before" class="concept-image">
       </div>
     </div>
     <div class="description home-block">
@@ -30,7 +30,7 @@
           <a href="/" class="corner-botton blue">Learn More</a>
         </div>
       </div>
-      <!-- <img src="" alt="description" class="description-image"> -->
+      <img src="../assets/home/description.png" alt="description" class="description-image">
     </div>
     <h1 class="home-block-title text-center">Core Features</h1>
     <div class="features home-block">
@@ -65,7 +65,84 @@
         </div>
       </div>
     </div>
-    <h1 class="home-block-title text-center">Showcase</h1>
+    <h1 class="home-block-title text-center">Our Users</h1>
+    <div class="show-case home-block">
+      <div class="case-item"></div>
+      <div class="case-item"></div>
+      <div class="case-item"></div>
+      <div class="case-item"></div>
+      <div class="case-item"></div>
+      <div class="case-item"></div>
+      <div class="case-item"></div>
+    </div>
+  </div>
+  <div v-else class="ctn-block home-page">
+    <div class="banner text-center">
+      <h1 class="home-title"><span class="apache">Apache</span> <span class="linkis">Linkis</span> <span class="badge">Incubating</span></h1>
+      <p class="home-desc">{{$t('message.home.banner.slogan')}}</p>
+      <div class="botton-row center">
+        <a href="/" class="corner-botton black">开始</a>
+        <a href="/" class="corner-botton white">GitHub</a>
+      </div>
+    </div>
+    <h1 class="home-block-title text-center">计算治理理念</h1>
+    <div class="concept home-block">
+      <div class="concept-item">
+        <h3 class="concept-title">没有Linkis之前</h3>
+        <p class="home-paragraph">每个上层应用以紧耦合的方式直接连接和访问各种底层引擎,这使得大数据平台成为一个复杂的网络架构</p>
+        <img src="../assets/home/before_linkis_zh.png" alt="before" class="concept-image">
+      </div>
+      <div class="concept-item">
+        <h3 class="concept-title">有Linkis之后</h3>
+        <p class="home-paragraph">在丰富的上层应用和丰富的底层引擎之间构建一个公共的“计算中间件”层,以标准化的可复用方式解决这些复杂的连接问题</p>
+       <img src="../assets/home/after_linkis_zh.png" alt="before" class="concept-image">
+      </div>
+    </div>
+    <div class="description home-block">
+      <div class="description-content">
+        <h1 class="home-block-title">描述</h1>
+        <p class="home-paragraph">Linkis 提供标准化接口(REST、JDBC、WebSocket 等),方便连接各种底层引擎(Spark、Presto、Flink 等),充当上层应用层和底层引擎层之间的代理</p>
+        <p class="home-paragraph">Linkis 能够促进 OLAP、OLTP(开发)、Streaming 等不同类型引擎的连接、治理和编排能力,并以标准化的可重用方式处理所有这些“计算治理”事务.</p>
+        <div class="botton-row">
+          <a href="/" class="corner-botton blue">了解更多</a>
+        </div>
+      </div>
+     <img src="../assets/home/description.png" alt="description" class="description-image">
+    </div>
+    <h1 class="home-block-title text-center">核心功能</h1>
+    <div class="features home-block">
+      <div class="feature-item">
+        <div class="item-content">
+          <h3 class="item-title">连通性</h3>
+          <p class="item-desc">简化操作环境;上层和下层解耦,使上层在底层变化时不敏感</p>
+        </div>
+      </div>
+      <div class="feature-item">
+        <div class="item-content">
+          <h3 class="item-title">可扩展性</h3>
+          <p class="item-desc">分布式微服务架构,具有很好的可伸缩性和扩展性;快速与新的底层引擎集成</p>
+        </div>
+      </div>
+      <div class="feature-item">
+        <div class="item-content">
+          <h3 class="item-title">可控性</h3>
+          <p class="item-desc">融合引擎入口,统一身份验证,高风险防控,审计记录;基于标签的多级精细化资源控制和恢复能力</p>
+        </div>
+      </div>
+      <div class="feature-item">
+        <div class="item-content">
+          <h3 class="item-title">编排</h3>
+          <p class="item-desc">基于双活、混合计算、事务Orchestrator服务的计算策略设计</p>
+        </div>
+      </div>
+      <div class="feature-item">
+        <div class="item-content">
+          <h3 class="item-title">可复用性</h3>
+          <p class="item-desc">大大减少了上层应用开发的后端开发工作量;可基于Linkis快速高效搭建数据平台工具套件</p>
+        </div>
+      </div>
+    </div>
+    <h1 class="home-block-title text-center">我们的用户</h1>
     <div class="show-case home-block">
       <div class="case-item"></div>
       <div class="case-item"></div>
@@ -232,4 +309,9 @@
       }
     }
   }
-</style>
\ No newline at end of file
+</style>
+<script setup>
+  import { ref } from "vue"
+  // initialize the language
+  const lang = ref(localStorage.getItem('locale') || 'en');
+</script>
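
For context on the $t('...') calls introduced above: they resolve
against the en.json / zh.json catalogs, but the vue-i18n wiring itself
is not part of this diff. A minimal sketch of how such a setup is
typically created with vue-i18n v9 (the file names and the 'locale'
storage key follow this commit; the rest is an assumption, not the
project's actual entry file):

    // main.js -- hypothetical wiring, assuming vue-i18n v9
    import { createApp } from 'vue';
    import { createI18n } from 'vue-i18n';
    import App from './App.vue';
    import en from './i18n/en.json';
    import zh from './i18n/zh.json';

    const i18n = createI18n({
      // read the locale persisted by switchLang(); default to English
      locale: localStorage.getItem('locale') || 'en',
      fallbackLocale: 'en',
      messages: { 'en': en, 'zh-CN': zh },
    });

    // vue-i18n v9 runs in legacy mode by default, which is what makes
    // $t available in every component template
    createApp(App).use(i18n).mount('#app');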

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@linkis.apache.org
For additional commands, e-mail: commits-help@linkis.apache.org


[incubator-linkis-website] 28/50: add team info and blogs

Posted by pe...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

peacewong pushed a commit to branch asf-staging
in repository https://gitbox.apache.org/repos/asf/incubator-linkis-website.git

commit 001a9546258d68c1dbb3b63ba970825811aeb2a0
Author: casionone <ca...@gmail.com>
AuthorDate: Thu Oct 14 20:08:16 2021 +0800

    add team info and blogs
---
 src/App.vue                        | 396 ++++++++++++++++++++-----------------
 src/assets/image/github_user.png   | Bin 0 -> 4677 bytes
 src/i18n/en.json                   |  66 +++++--
 src/i18n/zh.json                   |  34 +++-
 src/js/config.js                   |   9 +
 src/js/utils.js                    |  10 +
 src/pages/blog/AddEngineConn_en.md | 105 ++++++++++
 src/pages/blog/AddEngineConn_zh.md | 111 +++++++++++
 src/pages/blog/blogdata_en.js      |  13 ++
 src/pages/blog/blogdata_zh.js      |  13 ++
 src/pages/blog/event.vue           |  57 +++---
 src/pages/blog/index.vue           |  88 ++++++---
 src/pages/faq/index.vue            |  42 ++--
 src/pages/home.vue                 | 137 ++++---------
 src/pages/team.vue                 | 195 ------------------
 src/pages/team/team.vue            | 140 +++++++++++++
 src/pages/team/teamdata_en.js      | 131 ++++++++++++
 src/pages/team/teamdata_zh.js      | 131 ++++++++++++
 src/router.js                      |   2 +-
 19 files changed, 1105 insertions(+), 575 deletions(-)

diff --git a/src/App.vue b/src/App.vue
index 6418c15..f777ea6 100644
--- a/src/App.vue
+++ b/src/App.vue
@@ -1,195 +1,235 @@
 <script setup>
-// This starter template is using Vue 3 <script setup> SFCs
-// Check out https://v3.vuejs.org/api/sfc-script-setup.html#sfc-script-setup
-import { ref } from "vue";
-
-// initialize the language
-const lang = ref(localStorage.getItem('locale'));
-
-// switch the language
-const switchLang = (lang) => {
-  localStorage.setItem('locale', lang);
-  location.reload();
-}
+    // This starter template is using Vue 3 <script setup> SFCs
+    // Check out https://v3.vuejs.org/api/sfc-script-setup.html#sfc-script-setup
+    import {ref} from "vue";
+    import systemConfiguration from "./js/config";
+    // initialize the language
+    const lang = ref(localStorage.getItem('locale'));
+
+    // switch the language
+    const switchLang = (lang) => {
+        localStorage.setItem('locale', lang);
+        location.reload();
+    }
 </script>
 
 <template>
-<div>
-  <nav class="nav">
-    <div class="ctn-block">
-      <div class="nav-logo">
-        Apache Linkis
-      </div>
-      <span class="nav-logo-badge">Incubating</span>
-      <div class="menu-list">
-        <router-link class="menu-item" to="/"><span class="label">{{$t('menu.item.home')}}</span></router-link>
-        <router-link class="menu-item" to="/docs/deploy/linkis"><span class="label">{{$t('menu.item.docs')}}</span></router-link>
-        <router-link class="menu-item" to="/faq/index"><span class="label">{{$t('menu.item.faq')}}</span></router-link>
-        <router-link class="menu-item" to="/download"><span class="label">{{$t('menu.item.download')}}</span></router-link>
-        <router-link class="menu-item" to="/blog"><span class="label">{{$t('menu.item.blog')}}</span></router-link>
-        <router-link class="menu-item" to="/team"><span class="label">{{$t('menu.item.team')}}</span></router-link>
-        <div class="menu-item language">
-          Language
-          <div class="dropdown-menu">
-            <ul class="dropdown-menu-ctn">
-              <li class="dropdown-menu-item" :class="{active: lang === 'zh-CN'}" @click="switchLang('zh-CN')">简体中文</li>
-              <li class="dropdown-menu-item" :class="{active: lang === 'en'}" @click="switchLang('en')">English</li>
-            </ul>
-          </div>
-        </div>
-      </div>
+    <div>
+        <nav class="nav">
+            <div class="ctn-block">
+                <div class="nav-logo">
+                    Apache Linkis
+                </div>
+                <span class="nav-logo-badge">Incubating</span>
+                <div class="menu-list">
+                    <router-link class="menu-item" to="/"><span class="label">{{$t('menu.item.home')}}</span>
+                    </router-link>
+                    <router-link class="menu-item" to="/docs/deploy/linkis"><span
+                            class="label">{{$t('menu.item.docs')}}</span></router-link>
+                    <router-link class="menu-item" to="/faq/index"><span class="label">{{$t('menu.item.faq')}}</span>
+                    </router-link>
+                    <router-link class="menu-item" to="/download"><span
+                            class="label">{{$t('menu.item.download')}}</span></router-link>
+                    <router-link class="menu-item" to="/blog"><span class="label">{{$t('menu.item.blog')}}</span>
+                    </router-link>
+                    <router-link class="menu-item" to="/team"><span class="label">{{$t('menu.item.team')}}</span>
+                    </router-link>
+                    <div class="menu-item language">
+                        Language
+                        <div class="dropdown-menu">
+                            <ul class="dropdown-menu-ctn">
+                                <li class="dropdown-menu-item" :class="{active: lang === 'zh-CN'}"
+                                    @click="switchLang('zh-CN')">简体中文
+                                </li>
+                                <li class="dropdown-menu-item" :class="{active: lang === 'en'}"
+                                    @click="switchLang('en')">English
+                                </li>
+                            </ul>
+                        </div>
+                    </div>
+                </div>
+            </div>
+        </nav>
+        <router-view></router-view>
+        <footer class="footer">
+            <div class="ctn-block">
+                <div class="footer-links-row">
+                    <div class="footer-links">
+                        <h3 class="links-title">Linkis</h3>
+                        <a href="/#/docs/deploy/linkis" class="links-item">{{$t('menu.links.documentation')}}</a>
+                        <a href="/#/blog" class="links-item">{{$t('menu.links.events')}}</a>
+                        <a :href="systemConfiguration.github.projectReleaseUrl" class="links-item">{{$t('menu.links.releases')}}</a>
+                    </div>
+                    <div class="footer-links">
+                        <h3 class="links-title">{{$t('menu.links.community')}}</h3>
+                        <a :href="systemConfiguration.github.projectUrl" class="links-item">GitHub</a>
+                        <a :href="systemConfiguration.github.projectIssueUrl" class="links-item">{{$t('menu.links.issue_tracker')}}</a>
+                        <a :href="systemConfiguration.github.projectPrUrl" class="links-item">{{$t('menu.links.pull_requests')}}</a>
+                    </div>
+                    <div class="footer-links">
+                        <h3 class="links-title">{{$t('menu.links.asf')}}</h3>
+                        <a href="https://www.apache.org/" class="links-item">{{$t('menu.links.foundation')}}</a>
+                        <a href="https://www.apache.org/licenses/LICENSE-2.0" class="links-item">{{$t('menu.links.license')}}</a>
+                        <a href="https://www.apache.org/foundation/sponsorship.html" class="links-item">{{$t('menu.links.sponsorship')}}</a>
+                        <a href="https://www.apache.org/foundation/thanks.html" class="links-item">{{$t('menu.links.thanks')}}</a>
+                    </div>
+                </div>
+                <p class="footer-desc">Apache Linkis (Incubating) is an effort undergoing incubation at The Apache
+                    Software Foundation, sponsored by the Apache Incubator. Incubation is required of all newly accepted
+                    projects until a further review indicates that the infrastructure, communications, and decision
+                    making process have stabilized in a manner consistent with other successful ASF projects. While
+                    incubation status is not necessarily a reflection of the completeness or stability of the code, it
+                    does indicate that the project has yet to be fully endorsed by the ASF.</p>
+                <p class="footer-desc text-center">Copyright © 2021 The Apache Software Foundation. Apache Linkis,
+                    Apache Incubator, Linkis, Apache, the Apache feather logo, the Apache<br>Linkis logo and the Apache
+                    Incubator project logo are trademarks of The Apache Software Foundation.</p>
+            </div>
+        </footer>
     </div>
-  </nav>
-  <router-view></router-view>
-  <footer class="footer">
-    <div class="ctn-block">
-      <div class="footer-links-row">
-        <div class="footer-links">
-          <h3 class="links-title">Linkis</h3>
-          <a href="" class="links-item">{{$t('menu.links.documentation')}}</a>
-          <a href="" class="links-item">{{$t('menu.links.events')}}</a>
-          <a href="" class="links-item">{{$t('menu.links.releases')}}</a>
-        </div>
-        <div class="footer-links">
-          <h3 class="links-title">{{$t('menu.links.community')}}</h3>
-          <a href="" class="links-item">GitHub</a>
-          <a href="" class="links-item">{{$t('menu.links.issue_tracker')}}</a>
-          <a href="" class="links-item">{{$t('menu.links.pull_requests')}}</a>
-        </div>
-        <div class="footer-links">
-          <h3 class="links-title">{{$t('menu.links.asf')}}</h3>
-          <a href="" class="links-item">{{$t('menu.links.foundation')}}</a>
-          <a href="" class="links-item">{{$t('menu.links.license')}}</a>
-          <a href="" class="links-item">{{$t('menu.links.sponsorship')}}</a>
-          <a href="" class="links-item">{{$t('menu.links.thanks')}}</a>
-        </div>
-      </div>
-      <p class="footer-desc">Apache Linkis (Incubating) is an effort undergoing incubation at The Apache Software Foundation, sponsored by the Apache Incubator. Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful ASF projects. While incubation status is not necessarily a reflection of the completeness or stability of the code [...]
-      <p class="footer-desc text-center">Copyright © 2021 The Apache Software Foundation. Apache Linkis, Apache Incubator, Linkis, Apache, the Apache feather logo, the Apache<br>Linkis logo and the Apache Incubator project logo are trademarks of The Apache Software Foundation.</p>
-    </div>
-  </footer>
-</div>
 </template>
 
 <style lang="less">
-@import url('/src/style/base.less');
-.nav{
-  font-size: 16px;
-  box-shadow: 0 2px 4px rgba(15,18,34,0.2);
-  color: @enhance-color;
-  .ctn-block{
-    display: flex;
-    align-items: center;
-  }
-  .nav-logo{
-    line-height: 54px;
-    font-weight: 500;
-  }
-  .nav-logo-badge{
-    display: inline-block;
-    margin-left: 4px;
-    padding: 0 8px;
-    line-height: 24px;
-    background: #E8E8E8;
-    border-radius: 4px;
-    font-size: 12px;
-    font-weight: 400;
-  }
-  .menu-list{
-    flex: 1;
-    display: flex;
-    justify-content: flex-end;
-    .menu-item{
-      margin-left: 16px;
-      margin-right: 16px;
-      line-height: 52px;
-      border-bottom: 2px solid transparent;
-      transition: all ease .2s;
-      cursor: pointer;
-      &:hover,
-      &.router-link-exact-active{
-        .label{
-          color: @active-color;
-        }
-        border-color: @active-color;
-      }
-      &.language{
-        position: relative;
-        &::after{
-          content: '';
-          display: inline-block;
-          vertical-align: middle;
-          width: 0;
-          height: 0;
-          margin-left: 8px;
-          border-bottom: 6px solid #ccc;
-          border-left: 4px solid transparent;
-          border-right: 4px solid transparent;
-          transition: all ease .2s;
+    @import url('/src/style/base.less');
+
+    .nav {
+        font-size: 16px;
+        box-shadow: 0 2px 4px rgba(15, 18, 34, 0.2);
+        color: @enhance-color;
+
+        .ctn-block {
+            display: flex;
+            align-items: center;
         }
-        &:hover{
-          &::after{
-            transform: rotate(180deg);
-          }
-          .dropdown-menu{
-            display: block;
-          }
+
+        .nav-logo {
+            line-height: 54px;
+            font-weight: 500;
         }
-        .dropdown-menu{
-          display: none;
-          position: absolute;
-          z-index: 10;
-          top: 20px;
-          left: 0;
-          padding-top: 40px;
-          .dropdown-menu-ctn{
-            padding: 10px 0;
-            background: #fff;
+
+        .nav-logo-badge {
+            display: inline-block;
+            margin-left: 4px;
+            padding: 0 8px;
+            line-height: 24px;
+            background: #E8E8E8;
             border-radius: 4px;
-            border: 1px solid #FFFFFF;
-            box-shadow: 0 2px 12px 0 rgba(15,18,34,0.10);
-            .dropdown-menu-item{
-              font-size: 14px;
-              line-height: 32px;
-              padding: 0 16px;
-              cursor: pointer;
-              &.active,
-              &:hover{
-                color: @active-color;
-              }
+            font-size: 12px;
+            font-weight: 400;
+        }
+
+        .menu-list {
+            flex: 1;
+            display: flex;
+            justify-content: flex-end;
+
+            .menu-item {
+                margin-left: 16px;
+                margin-right: 16px;
+                line-height: 52px;
+                border-bottom: 2px solid transparent;
+                transition: all ease .2s;
+                cursor: pointer;
+
+                &:hover,
+                &.router-link-exact-active {
+                    .label {
+                        color: @active-color;
+                    }
+
+                    border-color: @active-color;
+                }
+
+                &.language {
+                    position: relative;
+
+                    &::after {
+                        content: '';
+                        display: inline-block;
+                        vertical-align: middle;
+                        width: 0;
+                        height: 0;
+                        margin-left: 8px;
+                        border-bottom: 6px solid #ccc;
+                        border-left: 4px solid transparent;
+                        border-right: 4px solid transparent;
+                        transition: all ease .2s;
+                    }
+
+                    &:hover {
+                        &::after {
+                            transform: rotate(180deg);
+                        }
+
+                        .dropdown-menu {
+                            display: block;
+                        }
+                    }
+
+                    .dropdown-menu {
+                        display: none;
+                        position: absolute;
+                        z-index: 10;
+                        top: 20px;
+                        left: 0;
+                        padding-top: 40px;
+
+                        .dropdown-menu-ctn {
+                            padding: 10px 0;
+                            background: #fff;
+                            border-radius: 4px;
+                            border: 1px solid #FFFFFF;
+                            box-shadow: 0 2px 12px 0 rgba(15, 18, 34, 0.10);
+
+                            .dropdown-menu-item {
+                                font-size: 14px;
+                                line-height: 32px;
+                                padding: 0 16px;
+                                cursor: pointer;
+
+                                &.active,
+                                &:hover {
+                                    color: @active-color;
+                                }
+                            }
+                        }
+                    }
+                }
             }
-          }
         }
-      }
     }
-  }
-}
-.footer{
-  padding-top: 40px;
-  background: #F9FAFB;
-  .footer-desc{
-    padding: 0 20px 30px;
-    color: #999999;
-    font-weight: 400;
-  }
-  .footer-links-row{
-    display: flex;
-    font-size: 16px;
-    .footer-links{
-      flex: 1;
-      padding: 20px;
-      .links-title{
-        margin-bottom: 16px;
-      }
-      .links-item{
-        display: block;
-        margin-bottom: 10px;
-        color: rgba(15,18,34,0.65);
-        &:hover{
-          text-decoration: underline;
+
+    .footer {
+        padding-top: 40px;
+        background: #F9FAFB;
+
+        .footer-desc {
+            padding: 0 20px 30px;
+            color: #999999;
+            font-weight: 400;
+        }
+
+        .footer-links-row {
+            display: flex;
+            font-size: 16px;
+
+            .footer-links {
+                flex: 1;
+                padding: 20px;
+
+                .links-title {
+                    margin-bottom: 16px;
+                }
+
+                .links-item {
+                    display: block;
+                    margin-bottom: 10px;
+                    color: rgba(15, 18, 34, 0.65);
+
+                    &:hover {
+                        text-decoration: underline;
+                    }
+                }
+            }
         }
-      }
     }
-  }
-}
 </style>
diff --git a/src/assets/image/github_user.png b/src/assets/image/github_user.png
new file mode 100644
index 0000000..e57e9dd
Binary files /dev/null and b/src/assets/image/github_user.png differ
diff --git a/src/i18n/en.json b/src/i18n/en.json
index 479851b..1335b9d 100644
--- a/src/i18n/en.json
+++ b/src/i18n/en.json
@@ -1,33 +1,63 @@
 {
   "message": {
-    "common": {},
+    "common": {
+      "get_start": "Get Start",
+      "description": "Description",
+      "learn_more": "Learn More",
+      "core_features": "Core Features",
+      "connectivity": "Connectivity",
+      "scalability": "Scalability",
+      "controllability": "Controllability",
+      "orchestration": "Orchestration",
+      "reusability": "Reusability",
+      "our_users": "Our Users",
+      "read_more": "Read More"
+    },
     "home": {
       "banner": {
         "slogan": "Decouple the upper applications and the underlying data engines by building a computation middleware layer."
+      },
+      "introduce": {
+        "title": "Computation Governance Concept",
+        "before": "before",
+        "after": "after",
+        "before_text": "Each upper application directly connects to and accesses various underlying engines in a tightly coupled way, which makes big data platform a complex network architecture.",
+        "after_text": "Build a common layer of \"computation middleware\" between the numerous upper-layer applications and the countless underlying engines to resolve these complex connection problems in a standardized reusable way\n"
+      },
+      "description": {
+        "paragraph1": "Linkis provides standardized interfaces (REST, JDBC, WebSocket etc.) to easily connect to various underlying engines (Spark, Presto, Flink, etc.), and acts as a proxy between the upper applications layer and underlying engines layer.",
+        "paragraph2": "Linkis is able to facilitate the connectivity, governance and orchestration capabilities of different kind of engines like OLAP, OLTP (developing), Streaming, and handle all these \"computation governance\" affairs in a standardized reusable way."
+      },
+      "core": {
+        "connectivity": "Simplify the operation environment; decouple the upper and lower layers, which make the upper layer insensitive when bottom layers changed",
+        "scalability": "Distributed microservice architecture with great scalability and extensibility; quickly integrate with the new underlying engine",
+        "controllability": "Converge engine entrance, unify identity verification, high-risk prevention and control, audit records; label-based multi-level refined resource control and recovery capabilities",
+        "orchestration": "Computing strategy design based on active-active, mixed computing, transcation Orchestrator Service",
+        "reusability": "Highly reduced the back-end development workload of upper-level applications development; Swiftly and efficiently build a data platform tool suite based on Linkis"
       }
     }
   },
   "menu": {
-    "item":{
+    "item": {
       "home": "Home",
-      "docs":"Docs",
+      "docs": "Docs",
       "faq": "FAQ",
-      "download":"Download",
-      "blog":"Blog",
-      "team":"Team"
+      "download": "Download",
+      "blog": "Blog",
+      "team": "Team"
     },
-    "links":{
-      "documentation":"Documentation",
-      "events":"Events",
-      "releases":"Releases",
-      "community":"Community",
-      "issue_tracker":"Issue Tracker",
-      "pull_requests":"Pull Requests",
-      "asf":"Apache Software Foundation",
-      "foundation":"Foundation",
-      "license":"License",
-      "sponsorship":"Sponsorship",
-      "thanks":"Thanks"
+    "links": {
+      "documentation": "Documentation",
+      "events": "Events",
+      "releases": "Releases",
+      "community": "Community",
+      "issue_tracker": "Issue Tracker",
+      "pull_requests": "Pull Requests",
+      "asf": "Apache Software Foundation",
+      "foundation": "Foundation",
+      "license": "License",
+      "sponsorship": "Sponsorship",
+      "thanks": "Thanks"
     }
   }
 }
diff --git a/src/i18n/zh.json b/src/i18n/zh.json
index 178f663..74c4942 100644
--- a/src/i18n/zh.json
+++ b/src/i18n/zh.json
@@ -1,9 +1,39 @@
 {
   "message": {
-    "common": {},
+    "common": {
+      "get_start": "开始",
+      "description": "描述",
+      "learn_more": "了解更多",
+      "core_features": "核心功能",
+      "connectivity": "连通性",
+      "scalability": "可扩展性",
+      "controllability": "可控性",
+      "orchestration": "编排",
+      "reusability": "可复用性",
+      "our_users": "我们的用户",
+      "read_more": "阅读更多"
+    },
     "home": {
       "banner": {
         "slogan": "通过构建计算中间件层来解耦上层应用程序和底层数据引擎"
+      },
+      "introduce": {
+        "title": "计算治理理念",
+        "before": "没有Linkis之前",
+        "after": "有Linkis之后",
+        "before_text": "每个上层应用以紧耦合的方式直接连接和访问各种底层引擎,这使得大数据平台成为一个复杂的网络架构",
+        "after_text": "在丰富的上层应用和丰富的底层引擎之间构建一个公共的“计算中间件”层,以标准化的可复用方式解决这些复杂的连接问题"
+      },
+      "description": {
+        "paragraph1": "Linkis 提供标准化接口(REST、JDBC、WebSocket 等),方便连接各种底层引擎(Spark、Presto、Flink 等),充当上层应用层和底层引擎层之间的代理",
+        "paragraph2": "Linkis 能够促进 OLAP、OLTP(开发)、Streaming 等不同类型引擎的连接、治理和编排能力,并以标准化的可重用方式处理所有这些“计算治理”事务."
+      },
+      "core": {
+        "connectivity": "简化操作环境;上层和下层解耦,使上层在底层变化时不敏感",
+        "scalability": "分布式微服务架构,具有很好的可伸缩性和扩展性;快速与新的底层引擎集成",
+        "controllability": "融合引擎入口,统一身份验证,高风险防控,审计记录;基于标签的多级精细化资源控制和恢复能力",
+        "orchestration": "基于双活、混合计算、事务Orchestrator服务的计算策略设计",
+        "reusability": "大大减少了上层应用开发的后端开发工作量;可基于Linkis快速高效搭建数据平台工具套件"
       }
     }
   },
@@ -30,4 +60,4 @@
       "thanks":"致谢"
     }
   }
-}
\ No newline at end of file
+}
diff --git a/src/js/config.js b/src/js/config.js
new file mode 100644
index 0000000..8ddcc56
--- /dev/null
+++ b/src/js/config.js
@@ -0,0 +1,9 @@
+const systemConfiguration = {
+    github: {
+        "projectUrl": "https://github.com/apache/incubator-linkis",
+        "projectReleaseUrl": "https://github.com/apache/incubator-linkis/releases",
+        "projectIssueUrl": "https://github.com/apache/incubator-linkis/issues",
+        "projectPrUrl": "https://github.com/apache/incubator-linkis/pulls",
+    },
+}
+export default systemConfiguration
diff --git a/src/js/utils.js b/src/js/utils.js
new file mode 100644
index 0000000..3dff972
--- /dev/null
+++ b/src/js/utils.js
@@ -0,0 +1,10 @@
+const utils = {
+    // Concatenate two values with a connector string;
+    // the trailing "" coerces the result to a string.
+    concatStr(first, con, second) {
+        return first + con + second + "";
+    }
+};
+
+export default utils;
diff --git a/src/pages/blog/AddEngineConn_en.md b/src/pages/blog/AddEngineConn_en.md
new file mode 100644
index 0000000..5ce15fe
--- /dev/null
+++ b/src/pages/blog/AddEngineConn_en.md
@@ -0,0 +1,105 @@
+# How to add an EngineConn
+
+Adding an EngineConn is one of the core processes of the computation task preparation phase of Linkis computation governance. It mainly includes the following steps: first, the client side (Entrance or a user client) sends a request for a new EngineConn to LinkisManager; then LinkisManager asks an EngineConnManager to start the EngineConn according to the demands and label rules; finally, LinkisManager returns the usable EngineConn to the client side.
+
+Based on the figure below, let's explain the whole process in detail:
+
+![Process of adding an EngineConn](../../assets/docs/architecture/add_an_engineConn_flow_chart.png)
+
+## 1. LinkisManager receives the request from the client side
+
+**Glossary:**
+
+- LinkisManager: The management center of Linkis computing governance capabilities. Its main responsibilities are:
+  1. Based on multi-level combined tags, provide users with an available EngineConn after complex routing, resource management and load balancing.
+
+  2. Provide EC and ECM full life cycle management capabilities.
+
+  3. Provide users with multi-Yarn cluster resource management functions based on multi-level combined tags. It is mainly divided into three modules: AppManager, ResourceManager and LabelManager, which can support multi-active deployment and have the characteristics of high availability and easy expansion.
+
+After the AM module receives the client’s new EngineConn request, it first checks the request parameters to determine their validity. Second, it selects the most suitable EngineConnManager (ECM) through complex rules for the subsequent EngineConn startup. Next, it applies to RM for the resources needed to start the EngineConn. Finally, it requests the ECM to create the EngineConn.
+
+The four steps will be described in detail below.
+
+### 1. Request parameter verification
+
+After the AM module receives the engine creation request, it will check the parameters. First, it checks the permissions of the requesting user and the creating user, and then checks the Labels attached to the request. Since Labels are used later in AM's creation process to find the ECM and to record resource information, you need to ensure that the necessary Labels are present. At this stage, the request must carry a UserCreatorLabel (for example: hadoop-IDE) and an EngineTypeLabel (for example: spark-2.4.3).
+
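+A minimal sketch of what this check amounts to, using hypothetical simplified types rather than the real Linkis label classes:
+
+```scala
+// Hypothetical, simplified model of the mandatory-label check described above.
+case class Label(key: String, value: String)
+
+object LabelValidator {
+  // AM refuses the request unless both mandatory label kinds are present.
+  private val required = Set("userCreator", "engineType")
+
+  def validate(labels: Seq[Label]): Either[String, Seq[Label]] = {
+    val missing = required -- labels.map(_.key).toSet
+    if (missing.isEmpty) Right(labels)
+    else Left(s"Missing required labels: ${missing.mkString(", ")}")
+  }
+}
+```
+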
+### 2. Select an EngineConnManager (ECM)
+
+ECM selection uses the Labels passed by the client to pick a suitable ECM service to start the EngineConn. In this step, the LabelManager first searches the registered ECMs with the Labels passed by the client and returns them ordered by label matching degree. After obtaining the registered ECM list, selection rules are applied to these ECMs; at this stage, rules such as availability checks, resource surplus and machine load have been implemented. After rule selection, the ECM with the best label match, the most idle resources and the lowest load is returned.
+
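+The selection rules can be pictured with the following sketch; the field names are illustrative, and the real rule chain in Linkis is pluggable:
+
+```scala
+// Hypothetical ECM descriptor; a higher labelScore means a better label match.
+case class EcmNode(name: String, healthy: Boolean,
+                   labelScore: Int, freeResourceMb: Long, load: Double)
+
+// Drop unavailable ECMs, then prefer the best label match,
+// the most idle resources and the lowest machine load, in that order.
+def chooseEcm(candidates: Seq[EcmNode]): Option[EcmNode] =
+  candidates
+    .filter(_.healthy)
+    .sortBy(n => (-n.labelScore, -n.freeResourceMb, n.load))
+    .headOption
+```
+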
+### 3. Apply for the resources required by the EngineConn
+
+1. After obtaining the assigned ECM, AM calls the EngineConnPluginServer service to determine how many resources the client’s engine creation request will use. The resource request is encapsulated here, mainly including the Labels, the EngineConn startup parameters passed by the client, and the user configuration parameters obtained from the Configuration module; the resource information is then obtained by calling the ECP service through RPC.
+
+2. After the EngineConnPluginServer service receives the resource request, it first finds the corresponding engine tag through the passed tags and selects the EngineConnPlugin of the corresponding engine through that engine tag. It then uses the EngineConnPlugin’s resource generator to evaluate the engine startup parameters passed in by the client, calculates the resources required for the new EngineConn, and returns the result to LinkisManager.
+
+   **Glossary:**
+
+- EngineConnPlugin: It is the interface that Linkis must implement when connecting to a new computation or storage engine. This interface mainly includes several capabilities that the EngineConn must provide during the startup process, including the EngineConn resource generator, the EngineConn startup command generator and the EngineConn connector. Please refer to the Spark engine implementation class for the specific implementation: [SparkEngineConnPlugin](https://github.com/WeBankFinTech/Linkis/blob/master/linkis-engineconn-plugins/engineconn-plugins/spark/src/main/scala/com/webank/wedatasphere/linkis/engineplugin/spark/SparkEngineConnPlugin.scala).
+- EngineConnPluginServer: It is a microservice that loads all the EngineConnPlugins and externally provides EngineConn resource generation and EngineConn startup command generation capabilities.
+- EngineConnResourceFactory: Calculates, from the parameters passed in, the total resources needed for this EngineConn startup.
+- EngineConnLaunchBuilder: Generates, from the incoming parameters, the startup command of the EngineConn, which is provided to the ECM to start the engine.
+3. After AM obtains the engine resources, it then calls the RM service to apply for resources. The RM service uses the incoming Labels, the ECM, and the resources applied for this time to make a resource judgment: it first judges whether the resources of the client corresponding to the Labels are sufficient, and then whether the resources of the ECM service are sufficient. If both are sufficient, the resource application is approved and the resources of the corresponding Labels are added or subtracted accordingly.
+
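+Under assumed simplified types, the two-level resource judgment (label quota first, then ECM capacity) boils down to a sketch like this:
+
+```scala
+// Hypothetical resource model; the real Linkis Resource types are richer.
+case class Resource(cores: Int, memoryMb: Long)
+
+def covers(left: Resource, asked: Resource): Boolean =
+  left.cores >= asked.cores && left.memoryMb >= asked.memoryMb
+
+// RM approves only when both the label quota and the chosen ECM can cover
+// the request; on approval both balances are adjusted accordingly.
+def approve(labelQuota: Resource, ecmFree: Resource, asked: Resource): Boolean =
+  covers(labelQuota, asked) && covers(ecmFree, asked)
+```
+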
+### 4. Request ECM for engine creation
+
+1. After completing the resource application for the engine, AM will encapsulate the engine startup request, send it to the corresponding ECM via RPC for service startup, and obtain the instance object of EngineConn.
+2. AM then determines, from the information reported by the EngineConn, whether it has started successfully and become available. If so, the result is returned and the process of adding an engine ends.
+
+## 2. ECM starts the EngineConn
+
+**Glossary:**
+
+- EngineConnManager: EngineConn's manager. Provides engine life-cycle management, and at the same time reports load information and its own health status to RM.
+- EngineConnBuildRequest: The start engine command passed by LinkisManager to ECM, which encapsulates all tag information, required resources and some parameter configuration information of the engine.
+- EngineConnLaunchRequest: Contains the BML materials, environment variables, local environment variables required on the ECM, startup commands and other information required to start an EngineConn, so that the ECM can build a complete EngineConn startup script from it.
+
+After the ECM receives the EngineConnBuildRequest command passed by LinkisManager, starting the EngineConn is divided into three steps (an illustrative shape of the resulting launch request is sketched after this list):
+
+1. Request EngineConnPluginServer to obtain the EngineConnLaunchRequest encapsulated by EngineConnPluginServer.
+2. Parse the EngineConnLaunchRequest and encapsulate it into an EngineConn startup script.
+3. Execute the startup script to start the EngineConn.
+
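+As a sketch, the launch request obtained in step 1 can be pictured like this; the field names are illustrative, not the real Linkis API:
+
+```scala
+// Illustrative shape of the launch request the ECM works from.
+case class BmlResource(resourceId: String, version: String)
+
+case class EngineConnLaunchRequest(
+  ticketId: String,                   // identifies this EngineConn instance
+  bmlResources: Seq[BmlResource],     // materials the ECM downloads locally
+  environment: Map[String, String],   // env vars exported in the startup script
+  necessaryEnvironments: Seq[String], // env vars the ECM host must already have
+  commands: Seq[String]               // the startup command line
+)
+```
+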
+### 2.1 EngineConnPluginServer encapsulates EngineConnLaunchRequest
+
+Through the label information of the EngineConnBuildRequest, get the EngineConn type and the corresponding version that actually need to be started, fetch the EngineConnPlugin of that EngineConn type from the memory of EngineConnPluginServer, and convert the EngineConnBuildRequest into an EngineConnLaunchRequest through the EngineConnLaunchBuilder of that EngineConnPlugin.
+
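+A sketch of this lookup-and-convert step, again with hypothetical simplified types:
+
+```scala
+// Hypothetical registry of loaded plugins keyed by engine type.
+trait EngineConnLaunchBuilder {
+  // Turns the build request parameters into concrete startup commands.
+  def build(params: Map[String, String]): Seq[String]
+}
+
+def toLaunchCommands(engineType: String,
+                     loadedPlugins: Map[String, EngineConnLaunchBuilder],
+                     params: Map[String, String]): Either[String, Seq[String]] =
+  loadedPlugins.get(engineType) match {
+    case Some(builder) => Right(builder.build(params))
+    case None          => Left(s"No EngineConnPlugin loaded for $engineType")
+  }
+```
+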
+### 2.2 Encapsulate EngineConn startup script
+
+After the ECM obtains the EngineConnLaunchRequest, it downloads the BML materials in the EngineConnLaunchRequest to the local machine and checks whether the local environment variables required by the EngineConnLaunchRequest exist (a sketch of this check follows below). After the verification passes, the EngineConnLaunchRequest is encapsulated into an EngineConn startup script.
+
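+The necessary-environment check might look like this sketch (reading `sys.env`; the real ECM logic is more involved):
+
+```scala
+// Fail fast if any env var the request marks as necessary is absent on this host.
+def checkNecessaryEnv(required: Seq[String]): Either[String, Unit] = {
+  val missing = required.filterNot(sys.env.contains)
+  if (missing.isEmpty) Right(())
+  else Left(s"Missing required environment variables: ${missing.mkString(", ")}")
+}
+```
+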
+### 2.3 Execute startup script
+
+Currently, the ECM only supports Bash commands for Unix-like systems; that is, only Linux systems can execute the startup script.
+
+Before startup, the sudo command is used to switch to the corresponding requesting user to execute the script, ensuring that the startup user (i.e., the JVM user) is the requesting user on the client side.
+
+After the startup script is executed, the ECM monitors the execution status and execution log of the script in real time. Once the exit status is non-zero, it immediately reports the EngineConn startup failure to LinkisManager and the whole process ends; otherwise, it keeps monitoring the log and status of the startup script until the script execution completes.
+
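+In spirit, the monitoring reduces to watching the exit code of the script process, as in this sketch built on `scala.sys.process` (paths and user handling are simplified):
+
+```scala
+import scala.sys.process._
+
+// Run the generated startup script as the requesting user; a non-zero exit
+// code is what gets reported to LinkisManager as a startup failure.
+def runStartupScript(scriptPath: String, requestUser: String): Boolean =
+  Seq("sudo", "-u", requestUser, "bash", scriptPath).! == 0
+```
+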
+## 3. EngineConn initialization
+
+After the ECM executes EngineConn's startup script, the EngineConn microservice is officially launched.
+
+**Glossary:**
+
+- EngineConn microservice: Refers to the actual microservice that includes an EngineConn and one or more Executors to provide computing power for computing tasks. When we talk about adding an EngineConn, we actually mean adding an EngineConn microservice.
+- EngineConn: The engine connector is the actual connection unit with the underlying computing storage engine, and contains the session information with the actual engine. The difference between it and Executor is that EngineConn only acts as a connection and a client, and does not actually perform calculations. For example, SparkEngineConn, its session information is SparkSession.
+- Executor: As a real computing storage scenario executor, it is the actual computing storage logic execution unit. It abstracts the various capabilities of EngineConn and provides multiple different architectural capabilities such as interactive execution, subscription execution, and responsive execution.
+
+The initialization of EngineConn microservices is generally divided into three stages:
+
+1. Initialize the EngineConn of the specific engine. First, use the command-line parameters of the Java main method to encapsulate an EngineCreationContext that contains the relevant label information, startup information and parameter information, then initialize the EngineConn through the EngineCreationContext to establish the connection between the EngineConn and the underlying engine. For example, SparkEngineConn initializes a SparkSession at this stage, which is used to establish a connection with a Spark application.
+2. Initialize the Executor. After the EngineConn is initialized, the corresponding Executor is initialized according to the actual usage scenario to provide service capabilities for subsequent users. For example, the SparkEngineConn in the interactive computing scenario initializes a series of Executors that can be used to submit and execute SQL, PySpark and Scala code, and supports the client submitting SQL, PySpark, Scala and other code to the SparkEngineConn for execution.
+3. Report the heartbeat to LinkisManager regularly and wait for the EngineConn to exit. When the underlying engine corresponding to the EngineConn is abnormal, the maximum idle time is exceeded, the Executor has finished executing, or the user manually kills it, the EngineConn automatically ends and exits (the exit decision is sketched below this list).
+
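+A sketch of the exit decision that the heartbeat loop keeps evaluating; the state fields are illustrative names for the four conditions above:
+
+```scala
+case class EngineState(engineBroken: Boolean, idleMillis: Long,
+                       executorFinished: Boolean, killedByUser: Boolean)
+
+// Any one of the four conditions from the text ends the EngineConn.
+def shouldExit(s: EngineState, maxIdleMillis: Long): Boolean =
+  s.engineBroken || s.idleMillis > maxIdleMillis ||
+    s.executorFinished || s.killedByUser
+```
+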
+----
+
+At this point, the process of adding a new EngineConn is basically complete. Finally, let's summarize:
+
+- The client initiates a request for adding EngineConn to LinkisManager.
+- LinkisManager checks the legitimacy of the parameters, first selects the appropriate ECM according to the labels, then confirms the resources required for this new EngineConn according to the user's request, applies for resources from the RM module of LinkisManager, and, after the application is approved, requires the ECM to start a new EngineConn as required.
+- ECM first requests EngineConnPluginServer to obtain an EngineConnLaunchRequest containing the BML materials, environment variables, local environment variables required on the ECM, startup commands and other information needed to start an EngineConn; it then encapsulates the EngineConn startup script and finally executes the script to start the EngineConn.
+- The EngineConn initializes the specific engine's EngineConn, then initializes the corresponding Executor according to the actual usage scenario to provide service capabilities for subsequent users. Finally, it reports the heartbeat to LinkisManager regularly and waits to end normally or be terminated by the user.
+
diff --git a/src/pages/blog/AddEngineConn_zh.md b/src/pages/blog/AddEngineConn_zh.md
new file mode 100644
index 0000000..bb6a88f
--- /dev/null
+++ b/src/pages/blog/AddEngineConn_zh.md
@@ -0,0 +1,111 @@
+# EngineConn新增流程
+
+EngineConn的新增,是Linkis计算治理的计算任务准备阶段的核心流程之一。它主要包括了Client端(Entrance或用户客户端)向LinkisManager发起一个新增EngineConn的请求,LinkisManager为用户按需、按标签规则,向EngineConnManager发起一个启动EngineConn的请求,并等待EngineConn启动完成后,将可用的EngineConn返回给Client的整个流程。
+
+如下图所示,接下来我们来详细说明一下整个流程:
+
+![EngineConn新增流程](../../assets/docs/architecture/add_an_engineConn_flow_chart.png)
+
+## 一、LinkisManager接收客户端请求
+
+**名词解释**:
+
+- LinkisManager:是Linkis计算治理能力的管理中枢,主要的职责为:
+  1. 基于多级组合标签,为用户提供经过复杂路由、资源管控和负载均衡后的可用EngineConn;
+  
+  2. 提供EC和ECM的全生命周期管理能力;
+  
+  3. 为用户提供基于多级组合标签的多Yarn集群资源管理功能。主要分为 AppManager(应用管理器)、ResourceManager(资源管理器)、LabelManager(标签管理器)三大模块,能够支持多活部署,具备高可用、易扩展的特性。
+
+&nbsp;&nbsp;&nbsp;&nbsp;AM模块接收到Client的新增EngineConn请求后,首先会对请求做参数校验,判断请求参数的合法性;其次是通过复杂规则选中一台最合适的EngineConnManager(ECM),以用于后面的EngineConn启动;接下来会向RM申请启动该EngineConn需要的资源;最后是向ECM请求创建EngineConn。
+
+下面将对四个步骤进行详细说明。
+
+### 1. 请求参数校验
+
+&nbsp;&nbsp;&nbsp;&nbsp;AM模块在接受到引擎创建请求后首先会做参数判断,首先会做请求用户和创建用户的权限判断,接着会对请求带上的Label进行检查。因为在AM后续的创建流程当中,Label会用来查找ECM和进行资源信息记录等,所以需要保证拥有必须的Label,现阶段一定需要带上的Label有UserCreatorLabel(例:hadoop-IDE)和EngineTypeLabel(例:spark-2.4.3)。
+
+### 2. EngineConnManager(ECM)选择
+
+&nbsp;&nbsp;&nbsp;&nbsp;ECM选择主要是完成通过客户端传递过来的Label去选择一个合适的ECM服务去启动EngineConn。这一步中首先会通过LabelManager去通过客户端传递过来的Label去注册的ECM中进行查找,通过按照标签匹配度进行顺序返回。在获取到注册的ECM列表后,会对这些ECM进行规则选择,现阶段已经实现有可用性检查、资源剩余、机器负载等规则。通过规则选择后,会将标签最匹配、资源最空闲、负载低的ECM进行返回。
+
+### 3. EngineConn资源申请
+
+1. 在获取到分配的ECM后,AM接着会通过调用EngineConnPluginServer服务请求本次客户端的引擎创建请求会使用多少的资源,这里会通过封装资源请求,主要包含Label、Client传递过来的EngineConn的启动参数、以及从Configuration模块获取到用户配置参数,通过RPC调用ECP服务去获取本次的资源信息。
+
+2. EngineConnPluginServer服务在接收到资源请求后,会先通过传递过来的标签找到对应的引擎标签,通过引擎标签选择对应引擎的EngineConnPlugin。然后通过EngineConnPlugin的资源生成器,对客户端传入的引擎启动参数进行计算,算出本次申请新EngineConn所需的资源,然后返回给LinkisManager。
+   
+   **名词解释:**
+- EgineConnPlugin:是Linkis对接一个新的计算存储引擎必须要实现的接口,该接口主要包含了这种EngineConn在启动过程中必须提供的几个接口能力,包括EngineConn资源生成器、EngineConn启动命令生成器、EngineConn引擎连接器。具体的实现可以参考Spark引擎的实现类:[SparkEngineConnPlugin](https://github.com/WeBankFinTech/Linkis/blob/master/linkis-engineconn-plugins/engineconn-plugins/spark/src/main/scala/com/webank/wedatasphere/linkis/engineplugin/spark/SparkEngineConnPlugin.scala)。
+
+- EngineConnPluginServer:是加载了所有的EngineConnPlugin,对外提供EngineConn的所需资源生成能力和EngineConn的启动命令生成能力的微服务。
+
+- EngineConnPlugin资源生成器(EngineConnResourceFactory):通过传入的参数,计算出本次EngineConn启动时需要的总资源。
+
+- EngineConn启动命令生成器(EngineConnLaunchBuilder):通过传入的参数,生成该EngineConn的启动命令,以提供给ECM去启动引擎。
+3. AM在获取到引擎资源后,会接着调用RM服务去申请资源,RM服务会通过传入的Label、ECM、本次申请的资源,去进行资源判断。首先会判断客户端对应Label的资源是否足够,然后再会判断ECM服务的资源是否足够,如果资源足够,则本次资源申请通过,并对对应的Label进行资源的加减。
+
+### 4. 请求ECM创建引擎
+
+1. 在完成引擎的资源申请后,AM会封装引擎启动的请求,通过RPC发送给对应的ECM进行服务启动,并获取到EngineConn的实例对象;
+2. AM接着会去通过EngineConn的上报信息判断EngineConn是否启动成功变成可用状态,如果是就会将结果进行返回,本次新增引擎的流程也就结束。
+
+## 二、 ECM启动EngineConn
+
+名词解释:
+
+- EngineConnManager(ECM):EngineConn的管理器,提供引擎的生命周期管理,同时向RM汇报负载信息和自身的健康状况。
+
+- EngineConnBuildRequest:LinkisManager传递给ECM的启动引擎命令,里面封装了该引擎的所有标签信息、所需资源和一些参数配置信息。
+
+- EngineConnLaunchRequest:包含了启动一个EngineConn所需的BML物料、环境变量、ECM本地必需环境变量、启动命令等信息,让ECM可以依此构建出一个完整的EngineConn启动脚本。
+
+ECM接收到LinkisManager传递过来的EngineConnBuildRequest命令后,主要分为三步来启动EngineConn:1. 请求EngineConnPluginServer,获取EngineConnPluginServer封装出的EngineConnLaunchRequest;2. 解析EngineConnLaunchRequest,封装成EngineConn启动脚本;3. 执行启动脚本,启动EngineConn。
+
+### 2.1 EngineConnPluginServer封装EngineConnLaunchRequest
+
+通过EngineConnBuildRequest的标签信息,拿到实际需要启动的EngineConn类型和对应版本,从EngineConnPluginServer的内存中获取到该EngineConn类型的EngineConnPlugin,通过该EngineConnPlugin的EngineConnLaunchBuilder,将EngineConnBuildRequest转换成EngineConnLaunchRequest。
+
+### 2.2 封装EngineConn启动脚本
+
+ECM获取到EngineConnLaunchRequest之后,将EngineConnLaunchRequest中的BML物料下载到本地,并检查EngineConnLaunchRequest要求的本地必需环境变量是否存在,校验通过后,将EngineConnLaunchRequest封装成一个EngineConn启动脚本
+
+### 2.3 执行启动脚本
+
+目前ECM只对Unix系统做了Bash命令的支持,即只支持Linux系统执行该启动脚本。
+
+启动前,会通过sudo命令,切换到对应的请求用户去执行该脚本,确保启动用户(即JVM用户)为Client端的请求用户。
+
+执行该启动脚本后,ECM会实时监听脚本的执行状态和执行日志,一旦执行状态返回非0,则立马向LinkisManager汇报EngineConn启动失败,整个流程完成;否则则一直监听启动脚本的日志和状态,直到该脚本执行完成。
+
+## 三、EngineConn初始化
+
+ECM执行了EngineConn的启动脚本后,EngineConn微服务正式启动。
+
+名词解释:
+
+- EngineConn微服务:指包含了一个EngineConn、一个或多个Executor,用于对计算任务提供计算能力的实际微服务。我们说的新增一个EngineConn,其实指的就是新增一个EngineConn微服务。
+
+- EngineConn:引擎连接器,是与底层计算存储引擎的实际连接单元,包含了与实际引擎的会话信息。它与Executor的差别,是EngineConn只是起到一个连接、一个客户端的作用,并不真正的去执行计算。如SparkEngineConn,其会话信息为SparkSession。
+
+- Executor:执行器,作为真正的计算存储场景执行器,是实际的计算存储逻辑执行单元,对EngineConn各种能力的具体抽象,提供交互式执行、订阅式执行、响应式执行等多种不同的架构能力。
+
+EngineConn微服务的初始化一般分为三个阶段:
+
+1. 初始化具体引擎的EngineConn。先通过Java main方法的命令行参数,封装出一个包含了相关标签信息、启动信息和参数信息的EngineCreationContext,通过EngineCreationContext初始化EngineConn,完成EngineConn与底层Engine的连接建立,如:SparkEngineConn会在该阶段初始化一个SparkSession,用于与一个Spark application建立了连通关系。
+
+2. 初始化Executor。EngineConn初始化之后,接下来会根据实际的使用场景,初始化对应的Executor,为接下来的用户使用,提供服务能力。比如:交互式计算场景的SparkEngineConn,会初始化一系列可以用于提交执行SQL、PySpark、Scala代码能力的Executor,支持Client往该SparkEngineConn提交执行SQL、PySpark、Scala等代码。
+
+3. 定时向LinkisManager汇报心跳,并等待EngineConn结束退出。当EngineConn对应的底层引擎异常、或是超过最大空闲时间、或是Executor执行完成、或是用户手动kill时,该EngineConn自动结束退出。
+
+----
+
+到了这里,EngineConn的新增流程就基本结束了,最后我们再来总结一下EngineConn的新增流程:
+
+- 客户端向LinkisManager发起新增EngineConn的请求;
+
+- LinkisManager校验参数合法性,先是根据标签选择合适的ECM,再根据用户请求确认本次新增EngineConn所需的资源,向LinkisManager的RM模块申请资源,申请通过后要求ECM按要求启动一个新的EngineConn;
+
+- ECM先请求EngineConnPluginServer获取一个包含了启动一个EngineConn所需的BML物料、环境变量、ECM本地必需环境变量、启动命令等信息的EngineConnLaunchRequest,然后封装出EngineConn的启动脚本,最后执行启动脚本,启动该EngineConn;
+
+- EngineConn初始化具体引擎的EngineConn,然后根据实际的使用场景,初始化对应的Executor,为接下来的用户使用,提供服务能力。最后定时向LinkisManager汇报心跳,等待正常结束或被用户终止。
diff --git a/src/pages/blog/blogdata_en.js b/src/pages/blog/blogdata_en.js
new file mode 100644
index 0000000..9972d95
--- /dev/null
+++ b/src/pages/blog/blogdata_en.js
@@ -0,0 +1,13 @@
+const list =
+    [{
+        "id":"AddEngineConn",
+        "title": "Born at China’s WeBank, now incubating in the ASF - Introducing Apache Linkis",
+        "author": "enjoyyin",
+        "createTime": "2021-10-14",
+        "summary": "Guangsheng Chen, the founder of Apache EventMesh, has been buzzing since the project was welcomed into the Apache Software Foundation (ASF)’s incubator in February 2021. There’s a growing community supporting work on the open source software — used to decouple the application",
+        "readCost": "5 min",
+        "tag": "share",
+         "ref":"AddEngineConn"
+    }]
+
+export default list
diff --git a/src/pages/blog/blogdata_zh.js b/src/pages/blog/blogdata_zh.js
new file mode 100644
index 0000000..9cdd71f
--- /dev/null
+++ b/src/pages/blog/blogdata_zh.js
@@ -0,0 +1,13 @@
+const list =
+    [{
+        "id":"AddEngineConn",
+        "title": "測試",
+        "author": "新明",
+        "createTime": "2021-10-14",
+        "summary": "Guangsheng Chen, the founder of Apache EventMesh, has been buzzing since the project was welcomed into the Apache Software Foundation (ASF)’s incubator in February 2021. There’s a growing community supporting work on the open source software — used to decouple the application",
+        "readCost": "5 min",
+        "tag": "share",
+         "ref":"AddEngineConn"
+    }]
+
+export default list
diff --git a/src/pages/blog/event.vue b/src/pages/blog/event.vue
index 6de9d72..0c33a70 100644
--- a/src/pages/blog/event.vue
+++ b/src/pages/blog/event.vue
@@ -1,35 +1,38 @@
 <template>
   <div class="ctn-block reading-area blog-ctn">
     <main class="main-content">
-      <h1 class="blog-title">Born at China’s WeBank, now incubating in the ASF - Introducing Apache Linkis</h1>
+      <main class="main-content">
+        <docEn v-if="lang === 'en'"></docEn>
+        <docZh ></docZh>
+        <component :is="optionComponent"></component>
+      </main>
+<!--      <h1 class="blog-title">Born at China’s WeBank, now incubating in the ASF - Introducing Apache Linkis</h1>-->
       <!-- <div class="blog-info seperator"><span class="info-item">enjoyyin</span><span class="info-item">2021-9-2</span></div>
       <div class="blog-info seperator"><span class="info-item">5 min read</span><span class="info-item">tag</span></div> -->
     </main>
-    <div class="side-bar">
-      <router-link :to="doc.link" class="bar-item" v-for="(doc,index) in docs" :key="index">{{doc.title}}
-        <router-link :to="children.link" class="bar-item" v-for="(children,cindex) in doc.children" :key="cindex">
-          {{children.title}}
-        </router-link>
-      </router-link>
-    </div>
+<!--    <div class="side-bar">-->
+<!--      <router-link :to="doc.link" class="bar-item" v-for="(doc,index) in docs" :key="index">{{doc.title}}-->
+<!--        <router-link :to="children.link" class="bar-item" v-for="(children,cindex) in doc.children" :key="cindex">-->
+<!--          {{children.title}}-->
+<!--        </router-link>-->
+<!--      </router-link>-->
+<!--    </div>-->
   </div>
 </template>
-<script setup>
-  const docs = [{
-    title: '部署文档',
-    link: '/docs/deploy/linkis',
-    children: [{
-      title: '快速部署 Linkis1.0',
-      link: '/docs/deploy/linkis',
-    }, {
-      title: '快速安装 EngineConnPlugin 引擎插件',
-      link: '/docs/deploy/engins',
-    }, {
-      title: 'Linkis1.0 分布式部署手册',
-      link: '/docs/deploy/distributed',
-    }, {
-      title: 'Linkis1.0 安装包目录层级结构详解',
-      link: '/docs/deploy/structure',
-    }]
-  }, ];
-</script>
\ No newline at end of file
+<script >
+  import { defineAsyncComponent } from "vue";
+  export default {
+    computed: {
+          optionComponent() {
+              const id=this.$route.query.id;
+              const lang =localStorage.getItem('locale')=="en"?"en":"zh";
+              const path="./"+id+"_"+lang+".md";
+              console.log(path);
+              return defineAsyncComponent(() =>
+                  import(path)
+              );
+          }
+      }
+
+  }
+</script>
diff --git a/src/pages/blog/index.vue b/src/pages/blog/index.vue
index 072887c..d900c84 100644
--- a/src/pages/blog/index.vue
+++ b/src/pages/blog/index.vue
@@ -1,38 +1,64 @@
 <template>
-  <div class="ctn-block reading-area blog-ctn">
-    <main class="main-content">
-      <ul class="blog-list">
-        <li class="blog-item">
-          <h1 class="blog-title">Born at China’s WeBank, now incubating in the ASF - Introducing Apache Linkis</h1>
-          <div class="blog-info">
-            <span class="info-item">enjoyyin</span>
-            <span class="info-item sperator">|</span>
-            <span class="info-item">2021-9-2</span>
-          </div>
-          <p class="blog-preview">Guangsheng Chen, the founder of Apache EventMesh, has been buzzing since the project was welcomed into the Apache Software Foundation (ASF)’s incubator in February 2021. There’s a growing community supporting work on the open source software — used to decouple the application</p>
-          <div class="blog-info seperator"><span class="info-item">5 min read</span><span class="info-item">tag</span></div>
-          <router-link to="/blog/event" class="corner-botton blue">Read More</router-link>
-        </li>
-      </ul>
-    </main>
-  </div>
+    <div class="ctn-block reading-area blog-ctn">
+        <main class="main-content">
+            <ul class="blog-list">{{lang}}
+                <li v-for="item in jsonData" class="blog-item">
+                    <h1 class="blog-title">{{item.title}}</h1>
+                    <div class="blog-info">
+                        <span class="info-item">{{item.author}}</span>
+                        <span class="info-item sperator">|</span>
+                        <span class="info-item">{{item.createTime}}</span>
+                    </div>
+                    <p class="blog-preview">
+                        {{item.summary}}</p>
+                    <div class="blog-info seperator"><span class="info-item">{{item.readCost}}</span><span
+                            class="info-item">{{item.tag}}</span></div>
+                    <router-link :to='"/blog/event?id="+item.id' class="corner-botton blue">{{$t('message.common.read_more')}}</router-link>
+                </li>
+            </ul>
+        </main>
+    </div>
 </template>
 <style lang="less" scoped>
-  .blog-ctn {
-    .blog-item{
-      position: relative;
-      padding: 30px;
-      margin-bottom: 20px;
-      background: rgba(15,18,34,0.03);
-      border-radius: 8px;
-      .blog-preview{
-        text-align: justify;
+    .blog-ctn {
+        .blog-item {
+            position: relative;
+            padding: 30px;
+            margin-bottom: 20px;
+            background: rgba(15, 18, 34, 0.03);
+            border-radius: 8px;
+
+            .blog-preview {
+                text-align: justify;
+            }
+
+            .corner-botton {
+                position: absolute;
+                right: 30px;
+                bottom: 30px;
+            }
+        }
+    }
+</style>
+
+<script >
+  import  list_en from "./blogdata_en.js";
+  import  list_zh from "./blogdata_zh.js";
+
+  export default {
+    data() {
+      return {
+        "jsonData": null
       }
-      .corner-botton{
-        position: absolute;
-        right: 30px;
-        bottom: 30px;
+    },
+    created() {
+      const lang = localStorage.getItem('locale');
+      if (lang === "en") {
+        this.jsonData = list_en;
+      } else {
+        this.jsonData = list_zh;
       }
     }
+
   }
-</style>
\ No newline at end of file
+</script>
diff --git a/src/pages/faq/index.vue b/src/pages/faq/index.vue
index 1e50e87..a7782df 100644
--- a/src/pages/faq/index.vue
+++ b/src/pages/faq/index.vue
@@ -11,30 +11,30 @@
     </main>
   </div>
 </template>
-<style lang="less">
-  .reading-area {
-    display: flex;
-    padding: 60px 0;
-    min-height: 600px;
+<!--<style lang="less">-->
+<!--  .reading-area {-->
+<!--    display: flex;-->
+<!--    padding: 60px 0;-->
+<!--    min-height: 600px;-->
 
-    .main-content {
-      width: 1200px;
-      padding: 30px;
-    }
+<!--    .main-content {-->
+<!--      width: 1200px;-->
+<!--      padding: 30px;-->
+<!--    }-->
 
-    .side-bar {
-      flex: 1;
-      padding: 18px 0;
-      border-left: 1px solid #eaecef;
+<!--    .side-bar {-->
+<!--      flex: 1;-->
+<!--      padding: 18px 0;-->
+<!--      border-left: 1px solid #eaecef;-->
 
-      .bar-item {
-        display: block;
-        padding: 5px 18px;
-        color: #4A4A4A;
-      }
-    }
-  }
-</style>
+<!--      .bar-item {-->
+<!--        display: block;-->
+<!--        padding: 5px 18px;-->
+<!--        color: #4A4A4A;-->
+<!--      }-->
+<!--    }-->
+<!--  }-->
+<!--</style>-->
 <script setup>
   import { ref } from "vue";
 
diff --git a/src/pages/home.vue b/src/pages/home.vue
index bc76c61..8876db3 100644
--- a/src/pages/home.vue
+++ b/src/pages/home.vue
@@ -1,148 +1,80 @@
 <template>
-  <div v-if="lang === 'en'" class="ctn-block home-page">
+  <div class="ctn-block home-page">
     <div class="banner text-center">
       <h1 class="home-title"><span class="apache">Apache</span> <span class="linkis">Linkis</span> <span class="badge">Incubating</span></h1>
       <p class="home-desc">{{$t('message.home.banner.slogan')}}</p>
       <div class="botton-row center">
-        <a href="/" class="corner-botton black">Get Started</a>
-        <a href="/" class="corner-botton white">GitHub</a>
+        <a href="/#/docs/deploy/linkis" class="corner-botton black">{{$t('message.common.get_start')}}</a>
+        <a :href="systemConfiguration.github.projectUrl" class="corner-botton white">GitHub</a>
       </div>
     </div>
-    <h1 class="home-block-title text-center">Computation Governance Concept</h1>
+    <h1 class="home-block-title text-center">{{$t('message.home.introduce.title')}}</h1>
     <div class="concept home-block">
       <div class="concept-item">
-        <h3 class="concept-title">Before</h3>
-        <p class="home-paragraph">Each upper application directly connects to and accesses various underlying engines in a tightly coupled way, which makes big data platform a complex network architecture.</p>
+        <h3 class="concept-title">{{$t('message.home.introduce.before')}}</h3>
+        <p class="home-paragraph">{{$t('message.home.introduce.before_text')}}
+
+        </p>
         <img src="../assets/home/before_linkis_en.png" alt="before" class="concept-image">
       </div>
       <div class="concept-item">
-        <h3 class="concept-title">After</h3>
-        <p class="home-paragraph">Build a common layer of "computation middleware" between the numerous upper-layer applications and the countless underlying engines to resolve these complex connection problems in a standardized reusable way</p>
-        <img src="../assets/home/after_linkis_en.png" alt="before" class="concept-image">
+        <h3 class="concept-title">{{$t('message.home.introduce.after')}}</h3>
+        <p class="home-paragraph">{{$t('message.home.introduce.after_text')}}</p>
+        <img src="../assets/home/after_linkis_en.png" alt="after" class="concept-image">
       </div>
     </div>
     <div class="description home-block">
       <div class="description-content">
-        <h1 class="home-block-title">Description</h1>
-        <p class="home-paragraph">Linkis provides standardized interfaces (REST, JDBC, WebSocket etc.) to easily connect to various underlying engines (Spark, Presto, Flink, etc.), and acts as a proxy between the upper applications layer and underlying engines layer. </p>
-        <p class="home-paragraph">Linkis is able to facilitate the connectivity, governance and orchestration capabilities of different kind of engines like OLAP, OLTP (developing), Streaming, and handle all these "computation governance" affairs in a standardized reusable way.</p>
+        <h1 class="home-block-title">{{$t('message.common.description')}}</h1>
+        <p class="home-paragraph">{{$t('message.home.description.paragraph1')}}
+        </p>
+        <p class="home-paragraph">{{$t('message.home.description.paragraph2')}}
+        </p>
         <div class="botton-row">
-          <a href="/" class="corner-botton blue">Learn More</a>
+          <a href="/#/docs/architecture/DifferenceBetween1.0&0.x" class="corner-botton blue">{{$t('message.common.learn_more')}}</a>
         </div>
       </div>
       <img src="../assets/home/description.png" alt="description" class="description-image">
     </div>
-    <h1 class="home-block-title text-center">Core Features</h1>
-    <div class="features home-block">
-      <div class="feature-item">
-        <div class="item-content">
-          <h3 class="item-title">Connectivity</h3>
-          <p class="item-desc">Simplify the operation environment; decouple the upper and lower layers, which make the upper layer insensitive when bottom layers changed</p>
-        </div>
-      </div>
-      <div class="feature-item">
-        <div class="item-content">
-          <h3 class="item-title">Scalability</h3>
-          <p class="item-desc">Distributed microservice architecture with great scalability and extensibility; quickly integrate with the new underlying engine</p>
-        </div>
-      </div>
-      <div class="feature-item">
-        <div class="item-content">
-          <h3 class="item-title">Controllability</h3>
-          <p class="item-desc">Converge engine entrance, unify identity verification, high-risk prevention and control, audit records; label-based multi-level refined resource control and recovery capabilities</p>
-        </div>
-      </div>
-      <div class="feature-item">
-        <div class="item-content">
-          <h3 class="item-title">Orchestration</h3>
-          <p class="item-desc">Computing strategy design based on active-active, mixed computing, transcation Orchestrator Service</p>
-        </div>
-      </div>
-      <div class="feature-item">
-        <div class="item-content">
-          <h3 class="item-title">Reusability</h3>
-          <p class="item-desc">Highly reduced the back-end development workload of upper-level applications development; Swiftly and efficiently build a data platform tool suite based on Linkis</p>
-        </div>
-      </div>
-    </div>
-    <h1 class="home-block-title text-center">Our Users</h1>
-    <div class="show-case home-block">
-      <div class="case-item"></div>
-      <div class="case-item"></div>
-      <div class="case-item"></div>
-      <div class="case-item"></div>
-      <div class="case-item"></div>
-      <div class="case-item"></div>
-      <div class="case-item"></div>
-    </div>
-  </div>
-  <div v-else class="ctn-block home-page">
-    <div class="banner text-center">
-      <h1 class="home-title"><span class="apache">Apache</span> <span class="linkis">Linkis</span> <span class="badge">Incubating</span></h1>
-      <p class="home-desc">{{$t('message.home.banner.slogan')}}</p>
-      <div class="botton-row center">
-        <a href="/" class="corner-botton black">开始</a>
-        <a href="/" class="corner-botton white">GitHub</a>
-      </div>
-    </div>
-    <h1 class="home-block-title text-center">计算治理理念</h1>
-    <div class="concept home-block">
-      <div class="concept-item">
-        <h3 class="concept-title">没有Linkis之前</h3>
-        <p class="home-paragraph">每个上层应用以紧耦合的方式直接连接和访问各种底层引擎,这使得大数据平台成为一个复杂的网络架构</p>
-        <img src="../assets/home/before_linkis_zh.png" alt="before" class="concept-image">
-      </div>
-      <div class="concept-item">
-        <h3 class="concept-title">有Linkis之后</h3>
-        <p class="home-paragraph">在丰富的上层应用和丰富的底层引擎之间构建一个公共的“计算中间件”层,以标准化的可复用方式解决这些复杂的连接问题</p>
-       <img src="../assets/home/after_linkis_zh.png" alt="before" class="concept-image">
-      </div>
-    </div>
-    <div class="description home-block">
-      <div class="description-content">
-        <h1 class="home-block-title">描述</h1>
-        <p class="home-paragraph">Linkis 提供标准化接口(REST、JDBC、WebSocket 等),方便连接各种底层引擎(Spark、Presto、Flink 等),充当上层应用层和底层引擎层之间的代理</p>
-        <p class="home-paragraph">Linkis 能够促进 OLAP、OLTP(开发)、Streaming 等不同类型引擎的连接、治理和编排能力,并以标准化的可重用方式处理所有这些“计算治理”事务.</p>
-        <div class="botton-row">
-          <a href="/" class="corner-botton blue">了解更多</a>
-        </div>
-      </div>
-     <img src="../assets/home/description.png" alt="description" class="description-image">
-    </div>
-    <h1 class="home-block-title text-center">核心功能</h1>
+    <h1 class="home-block-title text-center">{{$t('message.common.core_features')}}</h1>
     <div class="features home-block">
       <div class="feature-item">
         <div class="item-content">
-          <h3 class="item-title">连通性</h3>
-          <p class="item-desc">简化操作环境;上层和下层解耦,使上层在底层变化时不敏感</p>
+          <h3 class="item-title">{{$t('message.common.connectivity')}}</h3>
+          <p class="item-desc">{{$t('message.home.core.connectivity')}}
+          </p>
         </div>
       </div>
       <div class="feature-item">
         <div class="item-content">
-          <h3 class="item-title">可扩展性</h3>
-          <p class="item-desc">分布式微服务架构,具有很好的可伸缩性和扩展性;快速与新的底层引擎集成</p>
+          <h3 class="item-title">{{$t('message.common.scalability')}}</h3>
+          <p class="item-desc">{{$t('message.home.core.scalability')}}
+          </p>
         </div>
       </div>
       <div class="feature-item">
         <div class="item-content">
-          <h3 class="item-title">可控性</h3>
-          <p class="item-desc">融合引擎入口,统一身份验证,高风险防控,审计记录;基于标签的多级精细化资源控制和恢复能力</p>
+          <h3 class="item-title">{{$t('message.common.controllability')}}</h3>
+          <p class="item-desc">{{$t('message.home.core.controllability')}}
+          </p>
         </div>
       </div>
       <div class="feature-item">
         <div class="item-content">
-          <h3 class="item-title">编排</h3>
-          <p class="item-desc">基于双活、混合计算、事务Orchestrator服务的计算策略设计</p>
+          <h3 class="item-title">{{$t('message.common.orchestration')}}</h3>
+          <p class="item-desc">{{$t('message.home.core.orchestration')}}
+          </p>
         </div>
       </div>
       <div class="feature-item">
         <div class="item-content">
-          <h3 class="item-title">可复用性</h3>
-          <p class="item-desc">大大减少了上层应用开发的后端开发工作量;可基于Linkis快速高效搭建数据平台工具套件</p>
+          <h3 class="item-title">{{$t('message.common.reusability')}}</h3>
+          <p class="item-desc">{{$t('message.home.core.reusability')}}
+          </p>
         </div>
       </div>
     </div>
-    <h1 class="home-block-title text-center">我们的用户</h1>
+    <h1 class="home-block-title text-center">{{$t('message.common.our_users')}}</h1>
     <div class="show-case home-block">
       <div class="case-item"></div>
       <div class="case-item"></div>
@@ -289,6 +221,7 @@
 </style>
 <script setup>
   import { ref } from "vue"
+  import systemConfiguration from "../js/config"
   // Initialize the display language
   const lang = ref(localStorage.getItem('locale') || 'en');
 </script>
diff --git a/src/pages/team.vue b/src/pages/team.vue
deleted file mode 100644
index 1c6e49e..0000000
--- a/src/pages/team.vue
+++ /dev/null
@@ -1,195 +0,0 @@
-<template>
-  <div class="ctn-block team-page">
-    <h3 class="team-title">PMC</h3>
-    <p class="team-desc">Use the links below to download the Apache Linkis (Incubating) Releases. See all Linkis releases in Github release page.</p>
-    <ul class="character-list">
-      <li class="character-item text-center">
-        <div class="character-avatar"></div>
-        <div class="character-desc">
-          <h3 class="character-name">lululu</h3>
-          <a href="" class="character-link">@lululu</a>
-        </div>
-      </li>
-      <li class="character-item text-center">
-        <div class="character-avatar"></div>
-        <div class="character-desc">
-          <h3 class="character-name">lululu</h3>
-          <a href="" class="character-link">@lululu</a>
-        </div>
-      </li>
-      <li class="character-item text-center">
-        <div class="character-avatar"></div>
-        <div class="character-desc">
-          <h3 class="character-name">lululu</h3>
-          <a href="" class="character-link">@lululu</a>
-        </div>
-      </li>
-      <li class="character-item text-center">
-        <div class="character-avatar"></div>
-        <div class="character-desc">
-          <h3 class="character-name">lululu</h3>
-          <a href="" class="character-link">@lululu</a>
-        </div>
-      </li>
-      <li class="character-item text-center">
-        <div class="character-avatar"></div>
-        <div class="character-desc">
-          <h3 class="character-name">lululu</h3>
-          <a href="" class="character-link">@lululu</a>
-        </div>
-      </li>
-      <li class="character-item text-center">
-        <div class="character-avatar"></div>
-        <div class="character-desc">
-          <h3 class="character-name">lululu</h3>
-          <a href="" class="character-link">@lululu</a>
-        </div>
-      </li>
-      <li class="character-item text-center">
-        <div class="character-avatar"></div>
-        <div class="character-desc">
-          <h3 class="character-name">lululu</h3>
-          <a href="" class="character-link">@lululu</a>
-        </div>
-      </li>
-    </ul>
-    <h3 class="team-title">Committer</h3>
-    <p class="team-desc">Use the links below to download the Apache Linkis (Incubating) Releases. See all Linkis releases in Github release page.</p>
-    <ul class="character-list committer">
-      <li class="character-item">
-        <div class="character-avatar"></div>
-        <div class="character-desc">
-          <h3 class="character-name">lululu</h3>
-          <a href="" class="character-link">@lululu</a>
-        </div>
-      </li>
-      <li class="character-item">
-        <div class="character-avatar"></div>
-        <div class="character-desc">
-          <h3 class="character-name">lululu</h3>
-          <a href="" class="character-link">@lululu</a>
-        </div>
-      </li>
-      <li class="character-item">
-        <div class="character-avatar"></div>
-        <div class="character-desc">
-          <h3 class="character-name">lululu</h3>
-          <a href="" class="character-link">@lululu</a>
-        </div>
-      </li>
-      <li class="character-item">
-        <div class="character-avatar"></div>
-        <div class="character-desc">
-          <h3 class="character-name">lululu</h3>
-          <a href="" class="character-link">@lululu</a>
-        </div>
-      </li>
-      <li class="character-item">
-        <div class="character-avatar"></div>
-        <div class="character-desc">
-          <h3 class="character-name">lululu</h3>
-          <a href="" class="character-link">@lululu</a>
-        </div>
-      </li>
-      <li class="character-item">
-        <div class="character-avatar"></div>
-        <div class="character-desc">
-          <h3 class="character-name">lululu</h3>
-          <a href="" class="character-link">@lululu</a>
-        </div>
-      </li>
-    </ul>
-    <h3 class="team-title">Contributors</h3>
-    <p class="team-desc">Use the links below to download the Apache Linkis (Incubating) Releases. See all Linkis releases in Github release page.</p>
-    <ul class="contributor-list">
-      <li class="contributor-item">apache/apisix-go-plugin-runner</li>
-    </ul>
-  </div>
-</template>
-<style lang="less" scoped>
-@import url('/src/style/variable.less');
-.team-page{
-  padding-top: 60px;
-  .team-title{
-    font-size: 24px;
-    line-height: 34px;
-  }
-  .team-desc{
-    color: @enhance-color;
-    font-weight: 400;
-  }
-  .contributor-list{
-    padding: 20px 0 40px;
-    .contributor-item{
-      display: inline-block;
-      margin-right: 20px;
-      margin-bottom: 20px;
-      padding: 16px 16px 16px 48px;
-      background-size: 24px;
-      background-position: 16px center;
-      background-repeat: no-repeat;
-      color: @enhance-color;
-      border: 1px solid rgba(15,18,34,0.20);
-      border-radius: 4px;
-      &:last-child{
-        margin-right: 0;
-      }
-    }
-  }
-  .character-list {
-    display: grid;
-    grid-template-columns: repeat(6, 1fr);
-    grid-column-gap: 20px;
-    grid-row-gap: 20px;
-    padding: 20px 0 60px;
-    &.committer{
-      grid-template-columns: repeat(5, 224px);
-      .character-item{
-        display: flex;
-        padding: 20px;
-        align-items: center;
-        .character-avatar{
-          width: 60px;
-          height: 60px;
-          margin: 0;
-        }
-        .character-desc{
-          flex: 1;
-          padding-left: 16px;
-          min-width: 0;
-        }
-      }
-    }
-    .character-item{
-      border: 1px solid rgba(15,18,34,0.20);
-      border-radius: 4px;
-      // Helps handle text overflow
-      min-width: 0;
-      padding: 0 20px 20px;
-      .character-avatar{
-        width: 120px;
-        height: 120px;
-        margin: 30px auto 10px;
-        background: #D8D8D8;
-        border-radius: 50%;
-      }
-      .character-name{
-        color: @enhance-color;
-        line-height: 24px;
-        font-size: 16px;
-        white-space: nowrap;
-        overflow: hidden;
-        text-overflow: ellipsis;
-      }
-      .character-link{
-        color: rgba(15,18,34,0.65);
-        font-weight: 400;
-        white-space: nowrap;
-        overflow: hidden;
-        text-overflow: ellipsis;
-      }
-    }
-  }
-}
-</style>
-
diff --git a/src/pages/team/team.vue b/src/pages/team/team.vue
new file mode 100644
index 0000000..1dd51a1
--- /dev/null
+++ b/src/pages/team/team.vue
@@ -0,0 +1,140 @@
+<template>
+  <div class="ctn-block team-page">
+    <p>
+      {{jsonData.info.desc}}
+    </p>
+
+    <h3 class="team-title">PMC</h3>
+<!--    <p class="team-desc">Use the links below to download the Apache Linkis (Incubating) Releases. See all Linkis releases in Github release page.</p>-->
+    <ul  class="character-list">
+      <li v-for="item in jsonData.list" class="character-item text-center">
+        <img class="character-avatar" :src="item.avatarUrl" :alt="item.name"/>
+        <div class="character-desc">
+          <h3 class="character-name"><a href="{{utils.concatStr('https://github.com/','',item.githubId)}}" class="character-name">{{item.name}}</a></h3>
+        </div>
+      </li>
+    </ul>
+    <p v-html="jsonData.info.tip"></p>
+    <!--   <h3 class="team-title">Contributors</h3>
+     <p class="team-desc">Use the links below to download the Apache Linkis (Incubating) Releases. See all Linkis releases in Github release page.</p>
+    <ul class="contributor-list">
+      <li class="contributor-item">apache/apisix-go-plugin-runner</li>
+     </ul>-->
+  </div>
+</template>
+
+<script>
+   import utils from "../../js/utils";
+   import list_en from "./teamdata_en.js";
+   import list_zh from "./teamdata_zh.js";
+
+    export default {
+        data() {
+            return {
+                utils,
+                "jsonData": null
+            }
+        },
+        created() {
+            const lang = localStorage.getItem('locale');
+            if (lang === "en") {
+                this.jsonData = list_en;
+            } else {
+                this.jsonData = list_zh;
+            }
+        }
+
+    }
+</script>
+
+
+
+<style lang="less" scoped>
+@import url('/src/style/variable.less');
+.team-page{
+  padding-top: 60px;
+  .team-title{
+    font-size: 24px;
+    line-height: 34px;
+  }
+  .team-desc{
+    color: @enhance-color;
+    font-weight: 400;
+  }
+  .contributor-list{
+    padding: 20px 0 40px;
+    .contributor-item{
+      display: inline-block;
+      margin-right: 20px;
+      margin-bottom: 20px;
+      padding: 16px 16px 16px 48px;
+      background-size: 24px;
+      background-position: 16px center;
+      background-repeat: no-repeat;
+      color: @enhance-color;
+      border: 1px solid rgba(15,18,34,0.20);
+      border-radius: 4px;
+      &:last-child{
+        margin-right: 0;
+      }
+    }
+  }
+  .character-list {
+    display: grid;
+    grid-template-columns: repeat(6, 1fr);
+    grid-column-gap: 20px;
+    grid-row-gap: 20px;
+    padding: 20px 0 60px;
+    &.committer{
+      grid-template-columns: repeat(5, 224px);
+      .character-item{
+        display: flex;
+        padding: 20px;
+        align-items: center;
+        .character-avatar{
+          width: 60px;
+          height: 60px;
+          margin: 0;
+        }
+        .character-desc{
+          flex: 1;
+          padding-left: 16px;
+          min-width: 0;
+        }
+      }
+    }
+    .character-item{
+      border: 1px solid rgba(15,18,34,0.20);
+      border-radius: 4px;
+      // Helps handle text overflow
+      min-width: 0;
+      padding: 0 20px 20px;
+      .character-avatar{
+        width: 120px;
+        height: 120px;
+        margin: 30px auto 10px;
+        background: #D8D8D8;
+        border-radius: 50%;
+      }
+      .character-name{
+        color: @enhance-color;
+        line-height: 24px;
+        font-size: 16px;
+        white-space: nowrap;
+        overflow: hidden;
+        text-overflow: ellipsis;
+      }
+      .character-link{
+        color: rgba(15,18,34,0.65);
+        font-weight: 400;
+        white-space: nowrap;
+        overflow: hidden;
+        text-overflow: ellipsis;
+      }
+    }
+  }
+}
+</style>
+
+
+
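Two notes on the new team/team.vue: it reads the locale once in created()
and swaps in the matching data module, and it binds a utils.concatStr helper
into the template. That helper's source is not part of this commit; a
minimal sketch, assuming it simply joins its arguments into one string:

    // src/js/utils.js -- hypothetical definition; the real helper may differ
    export default {
      // concatStr('https://github.com/', '', 'peacewong')
      //   -> 'https://github.com/peacewong'
      concatStr(...parts) {
        return parts.join('');
      },
    };
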
diff --git a/src/pages/team/teamdata_en.js b/src/pages/team/teamdata_en.js
new file mode 100644
index 0000000..e61ddb0
--- /dev/null
+++ b/src/pages/team/teamdata_en.js
@@ -0,0 +1,131 @@
+const data = {
+    info: {
+        desc: "The Linkis team is comprised of Members and Contributors. Members have direct access to the source of Linkis project and actively evolve the code-base. Contributors improve the project through submission of patches and suggestions to the Members. The number of Contributors to the project is unbounded. All contributions to Linkis are greatly appreciated, whether for trivial cleanups, big new features or other material rewards.",
+        tip: "If you want to contribute, you can go directly to the <a href=\"https://github.com/apache/incubator-linkis/\" target=\"_blank\" rel=\"noopener noreferrer\">Apache Linkis</a> and fork it."
+    },
+    list: [
+        {
+            "name": "Shuai Di",
+            "avatarUrl": "https://avatars.githubusercontent.com/u/11204592?s=60&v=4",
+            "githubId": "sargentti",
+            "gitUrl": "https://github.com/sargentti",
+            "apacheId": "",
+            "email": "shuaidi1024@gmail.com",
+        },
+        {
+            "name": "Qiang Yin",
+            "avatarUrl": "https://avatars.githubusercontent.com/u/13635442?s=60&v=4",
+            "githubId": "wushengyeyouya",
+            "gitUrl": "https://github.com/wushengyeyouya",
+            "apacheId": "",
+            "email": "enjoyyin91@gmail.com",
+        },
+        {
+            "name": "Heping Wang",
+            "avatarUrl": "https://avatars.githubusercontent.com/u/11496700?s=60&v=4",
+            "githubId": "peacewong",
+            "gitUrl": "https://github.com/peacewong",
+            "apacheId": "",
+            "email": "wpeace1212@gmail.com",
+        },
+        {
+            "name": "Yongkun Yang",
+            "avatarUrl": "https://avatars.githubusercontent.com/u/11203920?s=60&v=4",
+            "githubId": "Alexkun",
+            "gitUrl": "https://github.com/Alexkun",
+            "apacheId": "",
+            "email": "wimkunkun@gmail.com",
+        },
+        {
+            "name": "Zhiyue Yang",
+            "avatarUrl": "https://avatars.githubusercontent.com/u/26363549?s=60&v=4",
+            "githubId": "yangzhiyue",
+            "gitUrl": "https://github.com/yangzhiyue",
+            "apacheId": "",
+            "email": "zjyzy19920513@gmail.com",
+        },
+        {
+            "name": "You Liu",
+            "avatarUrl": "https://avatars.githubusercontent.com/u/12731931?s=60&v=4",
+            "githubId": "liuyou2",
+            "gitUrl": "https://github.com/liuyou2",
+            "apacheId": "",
+            "email": "liuyou181020@gmail.com",
+        },
+        {
+            "name": "Deyi Hua",
+            "avatarUrl": "https://avatars.githubusercontent.com/u/13026864?s=60&v=4",
+            "githubId": "Davidhua1996",
+            "gitUrl": "https://github.com/Davidhua1996",
+            "apacheId": "",
+            "email": "david_hua1996@gmail.com",
+        },
+        {
+            "name": "Le Bai",
+            "avatarUrl": "https://avatars.githubusercontent.com/u/13026864?s=60&v=4",
+            "githubId": "leeebai",
+            "gitUrl": "https://github.com/leeebai",
+            "apacheId": "",
+            "email": "blgg931026@gmail.com",
+        },
+        {
+            "name": "Xiaogang Wang",
+            "avatarUrl": "https://avatars.githubusercontent.com/u/39912100?s=60&v=4",
+            "githubId": "?",
+            "gitUrl": "https://github.com/?",
+            "apacheId": "",
+            "email": "Adamyuanyuan@gmail.com",
+        },
+        {
+            "name": "Hui Zhu",
+            "avatarUrl": "https://avatars.githubusercontent.com/u/39912100?s=60&v=4",
+            "githubId": "?",
+            "gitUrl": "https://github.com/?",
+            "apacheId": "",
+            "email": "huashuizhuhui@gmail.com",
+        },
+        {
+            "name": "Zhen Wang",
+            "avatarUrl": "https://avatars.githubusercontent.com/u/39912100?s=60&v=4",
+            "githubId": "?",
+            "gitUrl": "https://github.com/?",
+            "apacheId": "",
+            "email": "wangzhen077@gmail.com",
+        },
+        {
+            "name": "Rong Zhang",
+            "avatarUrl": "https://avatars.githubusercontent.com/u/39478871?s=60&v=4",
+            "githubId": "?",
+            "gitUrl": "https://github.com/brianzhangrong",
+            "apacheId": "",
+            "email": "brian.rongzhang@gmail.com",
+        },
+        {
+            "name": "Xiaohua Yi",
+            "avatarUrl": "https://avatars.githubusercontent.com/u/39478871?s=60&v=4",
+            "githubId": "?",
+            "gitUrl": "https://github.com/?",
+            "apacheId": "",
+            "email": "yixiaohuamax@gmail.com",
+        },
+        {
+            "name": "Ke Zhou",
+            "avatarUrl": "https://avatars.githubusercontent.com/u/5548534?s=60&v=4",
+            "githubId": "?",
+            "gitUrl": "https://github.com/?",
+            "apacheId": "",
+            "email": "bleachzk@gmail.com",
+        },
+
+        {
+            "name": "Jian Xie",
+            "avatarUrl": "https://avatars.githubusercontent.com/u/5548534?s=60&v=4",
+            "githubId": "?",
+            "gitUrl": "https://github.com/?",
+            "apacheId": "",
+            "email": "Jackyxxie@gmail.com",
+        }
+    ]
+}
+
+export default data
diff --git a/src/pages/team/teamdata_zh.js b/src/pages/team/teamdata_zh.js
new file mode 100644
index 0000000..75a96a5
--- /dev/null
+++ b/src/pages/team/teamdata_zh.js
@@ -0,0 +1,131 @@
+const data = {
+    info: {
+        desc: "Linkis 团队由成员和贡献者组成。 成员可以直接访问 Linkis 项目的源代码并积极开发代码库。 贡献者通过提交补丁和向成员提供建议来改进项目。 项目的贡献者数量不限。 非常感谢对 Linkis 的所有贡献,无论是琐碎的修改或清理、重大的新特性新功能,还是其他的物质奖励。",
+        tip:  '如果你想参与贡献,可以直接去<a href="https://github.com/apache/incubator-linkis" target="_blank" rel="noopener noreferrer" >Apache Linkis</a> 并fork.'
+    },
+    list: [
+        {
+            "name": "邸帅",
+            "avatarUrl": "https://avatars.githubusercontent.com/u/11204592?s=60&v=4",
+            "githubId": "sargentti",
+            "gitUrl": "https://github.com/sargentti",
+            "apacheId": "",
+            "email": "shuaidi1024@gmail.com",
+        },
+        {
+            "name": "尹强",
+            "avatarUrl": "https://avatars.githubusercontent.com/u/13635442?s=60&v=4",
+            "githubId": "wushengyeyouya",
+            "gitUrl": "https://github.com/wushengyeyouya",
+            "apacheId": "",
+            "email": "enjoyyin91@gmail.com",
+        },
+        {
+            "name": "王和平",
+            "avatarUrl": "https://avatars.githubusercontent.com/u/11496700?s=60&v=4",
+            "githubId": "peacewong",
+            "gitUrl": "https://github.com/peacewong",
+            "apacheId": "",
+            "email": "wpeace1212@gmail.com",
+        },
+        {
+            "name": "杨永坤",
+            "avatarUrl": "https://avatars.githubusercontent.com/u/11203920?s=60&v=4",
+            "githubId": "Alexkun",
+            "gitUrl": "https://github.com/Alexkun",
+            "apacheId": "",
+            "email": "wimkunkun@gmail.com",
+        },
+        {
+            "name": "杨峙岳",
+            "avatarUrl": "https://avatars.githubusercontent.com/u/26363549?s=60&v=4",
+            "githubId": "yangzhiyue",
+            "gitUrl": "https://github.com/yangzhiyue",
+            "apacheId": "",
+            "email": "zjyzy19920513@gmail.com",
+        },
+        {
+            "name": "刘有",
+            "avatarUrl": "https://avatars.githubusercontent.com/u/12731931?s=60&v=4",
+            "githubId": "liuyou2",
+            "gitUrl": "https://github.com/liuyou2",
+            "apacheId": "",
+            "email": "liuyou181020@gmail.com",
+        },
+        {
+            "name": "华德义",
+            "avatarUrl": "https://avatars.githubusercontent.com/u/13026864?s=60&v=4",
+            "githubId": "Davidhua1996",
+            "gitUrl": "https://github.com/Davidhua1996",
+            "apacheId": "",
+            "email": "david_hua1996@gmail.com",
+        },
+        {
+            "name": "白乐",
+            "avatarUrl": "https://avatars.githubusercontent.com/u/13026864?s=60&v=4",
+            "githubId": "leeebai",
+            "gitUrl": "https://github.com/leeebai",
+            "apacheId": "",
+            "email": "blgg931026@gmail.com",
+        },
+        {
+            "name": "王小刚",
+            "avatarUrl": "https://avatars.githubusercontent.com/u/39912100?s=60&v=4",
+            "githubId": "?",
+            "gitUrl": "https://github.com/?",
+            "apacheId": "",
+            "email": "Adamyuanyuan@gmail.com",
+        },
+        {
+            "name": "Hui Zhu",
+            "avatarUrl": "https://avatars.githubusercontent.com/u/39912100?s=60&v=4",
+            "githubId": "?",
+            "gitUrl": "https://github.com/?",
+            "apacheId": "",
+            "email": "huashuizhuhui@gmail.com",
+        },
+        {
+            "name": "Zhen Wang",
+            "avatarUrl": "https://avatars.githubusercontent.com/u/39912100?s=60&v=4",
+            "githubId": "?",
+            "gitUrl": "https://github.com/?",
+            "apacheId": "",
+            "email": "wangzhen077@gmail.com",
+        },
+        {
+            "name": "Rong Zhang",
+            "avatarUrl": "https://avatars.githubusercontent.com/u/39478871?s=60&v=4",
+            "githubId": "?",
+            "gitUrl": "https://github.com/brianzhangrong",
+            "apacheId": "",
+            "email": "brian.rongzhang@gmail.com",
+        },
+        {
+            "name": "Xiaohua Yi",
+            "avatarUrl": "https://avatars.githubusercontent.com/u/39478871?s=60&v=4",
+            "githubId": "?",
+            "gitUrl": "https://github.com/?",
+            "apacheId": "",
+            "email": "yixiaohuamax@gmail.com",
+        },
+        {
+            "name": "周可",
+            "avatarUrl": "https://avatars.githubusercontent.com/u/5548534?s=60&v=4",
+            "githubId": "?",
+            "gitUrl": "https://github.com/?",
+            "apacheId": "",
+            "email": "bleachzk@gmail.com",
+        },
+
+        {
+            "name": "谢建",
+            "avatarUrl": "https://avatars.githubusercontent.com/u/5548534?s=60&v=4",
+            "githubId": "?",
+            "gitUrl": "https://github.com/?",
+            "apacheId": "",
+            "email": "Jackyxxie@gmail.com",
+        }
+    ]
+}
+
+export default data
diff --git a/src/router.js b/src/router.js
index a803698..50891a8 100644
--- a/src/router.js
+++ b/src/router.js
@@ -84,7 +84,7 @@ const routes = [{
   {
     path: '/team',
     name: 'team',
-    component: () => import( /* webpackChunkName: "group-team" */ './pages/team.vue')
+    component: () => import( /* webpackChunkName: "group-team" */ './pages/team/team.vue')
   },
 ]
 



[incubator-linkis-website] 34/50: ADD: add download page

Posted by pe...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

peacewong pushed a commit to branch asf-staging
in repository https://gitbox.apache.org/repos/asf/incubator-linkis-website.git

commit 51df1b90a53293ee0a8dc0e8bc32f8f9031b31a4
Author: lucaszhu <lu...@webank.com>
AuthorDate: Mon Oct 18 14:51:11 2021 +0800

    ADD: add download page
---
 src/App.vue             |  2 +-
 src/pages/download.vue  | 63 ++++++++++++++++++++++++++++++++++++++++++++++++-
 src/pages/team/team.vue | 14 +----------
 src/style/base.less     | 12 ++++++++++
 4 files changed, 76 insertions(+), 15 deletions(-)

diff --git a/src/App.vue b/src/App.vue
index 36766e8..e13c9cd 100644
--- a/src/App.vue
+++ b/src/App.vue
@@ -74,7 +74,7 @@
                         <a href="http://www.apache.org/foundation/thanks.html" class="links-item">{{$t('menu.links.thanks')}}</a>
                     </div>
                 </div>
-                <img src="./assets/image/incubator-logo.png" alt="incubator-logo" class="incubator-logo">
+                <img src="/src/assets/image/incubator-logo.png" alt="incubator-logo" class="incubator-logo">
                 <p class="footer-desc">Apache Linkis (Incubating) is an effort undergoing incubation at The Apache
                     Software Foundation, sponsored by the Apache Incubator. Incubation is required of all newly accepted
                     projects until a further review indicates that the infrastructure, communications, and decision
diff --git a/src/pages/download.vue b/src/pages/download.vue
index 35a96c7..025cbdd 100644
--- a/src/pages/download.vue
+++ b/src/pages/download.vue
@@ -1,3 +1,64 @@
 <template>
-  <div>download</div>
+  <div class="ctn-block normal-page download-page">
+    <h3 class="team-title">Download</h3>
+    <p class="team-desc">Use the links below to download the Apache Linkis (Incubating) Releases. See all Linkis releases in <a class="desc-link" href="">Github release page</a></p>
+    <ul class="download-list">
+      <li class="download-item">
+        <h3 class="item-title"><span>Linkis-1.0.2</span><span><span class="release-date">Release Date: </span>2021-9-2</span></h3>
+        <p class="item-desc">This release mainly introduces Flink-support into Linkis ecosystem.</p>
+        <ul class="item-info">
+          <li class="info-tag">New Features <span class="nums">6</span></li>
+          <li class="info-tag">Enhancement <span class="nums">6</span></li>
+          <li class="info-tag">BUG fixs <span class="nums">6</span></li>
+          <li class="info-tag">Changelog</li>
+        </ul>
+        <a href="" class="corner-botton blue">Download</a>
+      </li>
+    </ul>
+  </div>
 </template>
+<style lang="less" scoped>
+@import url('/src/style/variable.less');
+.download-page{
+  .download-list{
+    padding: 40px 0;
+    .download-item{
+      position: relative;
+      padding: 30px;
+      margin-bottom: 20px;
+      border: 1px solid rgba(15,18,34,0.20);
+      border-radius: 8px;
+      font-size: 16px;
+      .item-title{
+        display: flex;
+        justify-content: space-between;
+        font-size: 24px;
+        line-height: 34px;
+        .release-date{
+          color: rgba(15,18,34,0.45);
+          font-weight: 400;
+        }
+      }
+      .item-desc{
+        padding: 10px 0 40px;
+      }
+      .corner-botton{
+        position: absolute;
+        right: 30px;
+        bottom: 30px;
+      }
+      .item-info{
+        display: flex;
+        color: rgba(15,18,34,0.45);
+        line-height: 22px;
+        .info-tag{
+          padding-right: 30px;
+          .nums{
+            color: @enhance-color;
+          }
+        }
+      }
+    }
+  }
+}
+</style>
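
The release card above is still hardcoded in the template. The built
download.65cfe27b.js bundle in commit 48 below shows it later became
data-driven, rendered from a module of the shape { info, list }. A minimal
sketch of that module; version and releaseDate appear in the bundle, while
the file path and the remaining fields are assumptions:

    // hypothetical src/pages/download/data.js; shape inferred from the bundle
    const data = {
      info: {
        desc: 'Use the links below to download the Apache Linkis (Incubating) Releases.',
      },
      list: [
        {
          version: '1.0.2',
          releaseDate: '2021-9-2',
          // tag counts shown on the card; placeholder values
          features: 6,
          enhancements: 6,
          bugfixes: 6,
        },
      ],
    };
    export default data;
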
diff --git a/src/pages/team/team.vue b/src/pages/team/team.vue
index eb53843..b1286b6 100644
--- a/src/pages/team/team.vue
+++ b/src/pages/team/team.vue
@@ -1,5 +1,5 @@
 <template>
-  <div class="ctn-block team-page">
+  <div class="ctn-block normal-page team-page">
     <h3 class="team-title">PMC</h3>
     <p class="team-desc">{{jsonData.info.desc}}</p>
     <ul  class="character-list">
@@ -42,21 +42,9 @@
 
     }
 </script>
-
-
-
 <style lang="less" scoped>
 @import url('/src/style/variable.less');
 .team-page{
-  padding-top: 60px;
-  .team-title{
-    font-size: 24px;
-    line-height: 34px;
-  }
-  .team-desc{
-    color: @enhance-color;
-    font-weight: 400;
-  }
   .contributor-list{
     padding: 20px 0 40px;
     .contributor-item{
diff --git a/src/style/base.less b/src/style/base.less
index 1db54d5..7e6cd4c 100644
--- a/src/style/base.less
+++ b/src/style/base.less
@@ -125,3 +125,15 @@ a:visited {
     border: 1px solid #1A529C;
   }
 }
+
+.normal-page{
+  padding-top: 60px;
+  .normal-title{
+    font-size: 24px;
+    line-height: 34px;
+  }
+  .normal-desc{
+    color: @enhance-color;
+    font-weight: 400;
+  }
+}
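
One wrinkle in this refactor: the extracted rules target .normal-title and
.normal-desc, but both download.vue and team.vue keep class="team-title" /
class="team-desc" in their markup, so the shared rules never match. Either
the markup or the selectors presumably needs renaming; a minimal sketch of
the latter option, keeping the existing class names:

    // src/style/base.less -- assumed intent, not the committed code
    .normal-page {
      padding-top: 60px;
      .team-title { font-size: 24px; line-height: 34px; }
      .team-desc  { color: @enhance-color; font-weight: 400; }
    }
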



[incubator-linkis-website] 48/50: bugfix for introduction

Posted by pe...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

peacewong pushed a commit to branch asf-staging
in repository https://gitbox.apache.org/repos/asf/incubator-linkis-website.git

commit bf2352f84c95a17fe279e3066fa2f1e9aa1739c3
Author: casionone <ca...@gmail.com>
AuthorDate: Thu Oct 28 20:04:18 2021 +0800

    bugfix for introduction
---
 assets/404.f24f37c0.js                                      |   1 -
 ...-manager-03.5aaff6ed.png => app-manager-01.5aaff6ed.png} | Bin
 assets/app_manager.bed25273.js                              |   2 +-
 assets/{download.4f121175.js => download.65cfe27b.js}       |   2 +-
 assets/{event.b677bf34.js => event.c4950b6a.js}             |   2 +-
 assets/index.83dab580.js                                    |   1 +
 assets/index.c319b82e.js                                    |   1 -
 assets/{index.ba4cbe23.js => index.dac2c111.js}             |   2 +-
 assets/{linkis.cdbb993f.js => linkis.513065ec.js}           |   2 +-
 assets/manager.6973d707.js                                  |   2 +-
 index.html                                                  |   2 +-
 11 files changed, 8 insertions(+), 9 deletions(-)

diff --git a/assets/404.f24f37c0.js b/assets/404.f24f37c0.js
deleted file mode 100644
index db61db0..0000000
--- a/assets/404.f24f37c0.js
+++ /dev/null
@@ -1 +0,0 @@
-import{r as l,o as n,c as a,b as u}from"./vendor.1180558b.js";const o={key:0,class:"ctn-block normal-page"},e=[u("h1",null,"Sorry,Page Not Found!!!",-1),u("br",null,null,-1),u("p",null,"You can contact us via email(dev@linkis.incubator.apache.org) or submitting an issue on github",-1),u("br",null,null,-1)],r={key:1,class:"ctn-block normal-page"},s=[u("h1",null,"抱歉,请求的资源未找到!!!",-1),u("br",null,null,-1),u("p",null,"您可以通过邮件(dev@linkis.incubator.apache.org)告知我们或则通过github提交issue.",-1),u("br", [...]
diff --git a/assets/app-manager-03.5aaff6ed.png b/assets/app-manager-01.5aaff6ed.png
similarity index 100%
rename from assets/app-manager-03.5aaff6ed.png
rename to assets/app-manager-01.5aaff6ed.png
diff --git a/assets/app_manager.bed25273.js b/assets/app_manager.bed25273.js
index 0c1d272..cc9ce3e 100644
--- a/assets/app_manager.bed25273.js
+++ b/assets/app_manager.bed25273.js
@@ -1 +1 @@
-import{o as e,c as n,b as i,e as a,r as t,l as r,u as o}from"./vendor.1180558b.js";var s="/assets/app-manager-03.5aaff6ed.png",g="/assets/app-manager-02.2aff8a98.png";const l={class:"markdown-body"},c=[i("h2",null,"1. Background",-1),i("p",null,"        The Entrance module of the old version of Linkis is responsible for too much responsibilities, the management ability of the Engine is weak, and it is not easy to follow-up expansion, the AppManager module is newly extracted to complete t [...]
+import{o as e,c as n,b as i,e as a,r as t,l as r,u as o}from"./vendor.1180558b.js";var s="/assets/app-manager-01.5aaff6ed.png",g="/assets/app-manager-02.2aff8a98.png";const l={class:"markdown-body"},c=[i("h2",null,"1. Background",-1),i("p",null,"        The Entrance module of the old version of Linkis is responsible for too much responsibilities, the management ability of the Engine is weak, and it is not easy to follow-up expansion, the AppManager module is newly extracted to complete t [...]
diff --git a/assets/download.4f121175.js b/assets/download.65cfe27b.js
similarity index 95%
rename from assets/download.4f121175.js
rename to assets/download.65cfe27b.js
index c4ac319..5409057 100644
--- a/assets/download.4f121175.js
+++ b/assets/download.65cfe27b.js
@@ -1 +1 @@
-import{u as e}from"./utils.7ca2fb6d.js";import{s}from"./index.c319b82e.js";import{_ as a}from"./plugin-vue_export-helper.5a098b48.js";import{o as n,c as t,b as l,F as i,k as o,p as r,j as c,t as h,e as m}from"./vendor.1180558b.js";const u={info:{desc:'Use the links below to download the Apache Linkis (Incubating) Releases. See all Linkis releases in <a class="link" target="_blank" href="'+s.github.projectReleaseUrl+'">Github release page</a></p>'},list:[{version:"1.0.2",releaseDate:"2021 [...]
+import{u as e}from"./utils.7ca2fb6d.js";import{s}from"./index.83dab580.js";import{_ as a}from"./plugin-vue_export-helper.5a098b48.js";import{o as n,c as t,b as l,F as i,k as o,p as r,j as c,t as h,e as m}from"./vendor.1180558b.js";const u={info:{desc:'Use the links below to download the Apache Linkis (Incubating) Releases. See all Linkis releases in <a class="link" target="_blank" href="'+s.github.projectReleaseUrl+'">Github release page</a></p>'},list:[{version:"1.0.2",releaseDate:"2021 [...]
diff --git a/assets/event.b677bf34.js b/assets/event.c4950b6a.js
similarity index 54%
rename from assets/event.b677bf34.js
rename to assets/event.c4950b6a.js
index 055b87e..92efe73 100644
--- a/assets/event.b677bf34.js
+++ b/assets/event.c4950b6a.js
@@ -1 +1 @@
-import{_ as o}from"./index.c319b82e.js";import{_ as e}from"./plugin-vue_export-helper.5a098b48.js";import{q as n,o as t,c as r,b as a,l as s,s as c}from"./vendor.1180558b.js";const i={class:"ctn-block reading-area blog-ctn"},l={class:"main-content"};var m=e({computed:{optionComponent(){const e="./"+this.$route.query.id+"_"+("en"==localStorage.getItem("locale")?"en":"zh")+".md";return console.log(e),n((()=>o((()=>import(e)),[])))}}},[["render",function(o,e,n,m,p,d){return t(),r("div",i,[a [...]
+import{_ as o}from"./index.83dab580.js";import{_ as e}from"./plugin-vue_export-helper.5a098b48.js";import{q as n,o as t,c as a,b as r,l as s,s as i}from"./vendor.1180558b.js";const c={class:"ctn-block reading-area blog-ctn"},l={class:"main-content"};var m=e({computed:{optionComponent(){const e="./"+this.$route.query.id+"_"+("en"==localStorage.getItem("locale")?"en":"zh")+".md";return console.log(e),n((()=>o((()=>import(e)),[])))}}},[["render",function(o,e,n,m,p,d){return t(),a("div",c,[r [...]
diff --git a/assets/index.83dab580.js b/assets/index.83dab580.js
new file mode 100644
index 0000000..5168976
--- /dev/null
+++ b/assets/index.83dab580.js
@@ -0,0 +1 @@
+import{r as e,a as n,o as t,c as a,b as i,d as o,w as r,e as s,t as c,n as l,u as m,f as h,g as p,h as u,i as d}from"./vendor.1180558b.js";!function(){const e=document.createElement("link").relList;if(!(e&&e.supports&&e.supports("modulepreload"))){for(const e of document.querySelectorAll('link[rel="modulepreload"]'))n(e);new MutationObserver((e=>{for(const t of e)if("childList"===t.type)for(const e of t.addedNodes)"LINK"===e.tagName&&"modulepreload"===e.rel&&n(e)})).observe(document,{chi [...]
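
The new index.83dab580.js opens with Vite's modulepreload polyfill, which is
hard to read minified (and truncated by this mail). De-minified, the visible
part amounts to roughly the following reconstruction; the per-link processor
is imported (as n) from vendor.1180558b.js and is not shown, so it is
stubbed here:

    (function () {
      const relList = document.createElement('link').relList;
      // Do nothing if the browser natively supports <link rel="modulepreload">
      // (the minified code wraps the rest in if(!supports) instead)
      if (relList && relList.supports && relList.supports('modulepreload')) return;
      // Handle preload links already present in the document...
      for (const link of document.querySelectorAll('link[rel="modulepreload"]')) {
        process(link);
      }
      // ...and watch the DOM for ones injected later
      new MutationObserver((mutations) => {
        for (const m of mutations) {
          if (m.type !== 'childList') continue;
          for (const node of m.addedNodes) {
            if (node.tagName === 'LINK' && node.rel === 'modulepreload') process(node);
          }
        }
      }).observe(document, { childList: true, subtree: true });
      function process(link) {
        // truncated in the original mail: the vendor helper fetches the
        // module with credentials matching the link's crossorigin attribute
      }
    })();
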
diff --git a/assets/index.c319b82e.js b/assets/index.c319b82e.js
deleted file mode 100644
index d9a56d6..0000000
--- a/assets/index.c319b82e.js
+++ /dev/null
@@ -1 +0,0 @@
-import{r as e,a as n,o as t,c as a,b as i,d as o,w as r,e as s,t as c,n as l,u as m,f as h,g as p,h as u,i as d}from"./vendor.1180558b.js";!function(){const e=document.createElement("link").relList;if(!(e&&e.supports&&e.supports("modulepreload"))){for(const e of document.querySelectorAll('link[rel="modulepreload"]'))n(e);new MutationObserver((e=>{for(const t of e)if("childList"===t.type)for(const e of t.addedNodes)"LINK"===e.tagName&&"modulepreload"===e.rel&&n(e)})).observe(document,{chi [...]
diff --git a/assets/index.ba4cbe23.js b/assets/index.dac2c111.js
similarity index 99%
rename from assets/index.ba4cbe23.js
rename to assets/index.dac2c111.js
index 6adeaa3..1ca4f7d 100644
--- a/assets/index.ba4cbe23.js
+++ b/assets/index.dac2c111.js
@@ -1 +1 @@
-import{s}from"./index.c319b82e.js";import{_ as e}from"./plugin-vue_export-helper.5a098b48.js";import{r as A,o as a,c as t,b as c,t as i,u as l,p as g,j as m,e as n}from"./vendor.1180558b.js";const r=s=>(g("data-v-39cd9a1d"),s=s(),m(),s),o={class:"home-page slogan"},d={class:"ctn-block"},p={class:"banner text-center"},I=r((()=>c("h1",{class:"home-title"},[c("span",{class:"apache"},"Apache"),n(),c("span",{class:"linkis"},"Linkis"),n(),c("span",{class:"badge"},"Incubating")],-1))),b=["inner [...]
+import{s}from"./index.83dab580.js";import{_ as e}from"./plugin-vue_export-helper.5a098b48.js";import{r as A,o as a,c as t,b as c,t as i,u as l,p as g,j as m,e as n}from"./vendor.1180558b.js";const r=s=>(g("data-v-39cd9a1d"),s=s(),m(),s),o={class:"home-page slogan"},d={class:"ctn-block"},p={class:"banner text-center"},I=r((()=>c("h1",{class:"home-title"},[c("span",{class:"apache"},"Apache"),n(),c("span",{class:"linkis"},"Linkis"),n(),c("span",{class:"badge"},"Incubating")],-1))),b=["inner [...]
diff --git a/assets/linkis.cdbb993f.js b/assets/linkis.513065ec.js
similarity index 98%
rename from assets/linkis.cdbb993f.js
rename to assets/linkis.513065ec.js
index c9d3758..1dff5af 100644
--- a/assets/linkis.cdbb993f.js
+++ b/assets/linkis.513065ec.js
@@ -1 +1 @@
-import{o as n,c as l,b as e,e as t,r as o,l as a,u as s}from"./vendor.1180558b.js";var i="/assets/Linkis1.0_combined_eureka.dad2589e.png";const u={class:"markdown-body"},r=[e("h1",null,"Linkis1.0 Deployment document",-1),e("h2",null,"Notes",-1),e("p",null,[t("If you are new to Linkis, you can ignore this chapter, however, if you are already a Linkis user, we recommend you reading the following article before installing or upgrading: "),e("a",{href:"/#/docs/architecture/difference"},"Brie [...]
+import{o as n,c as l,b as e,e as t,r as o,l as a,u as s}from"./vendor.1180558b.js";var i="/assets/Linkis1.0_combined_eureka.dad2589e.png";const u={class:"markdown-body"},r=[e("h1",null,"Linkis1.0 Deployment document",-1),e("h2",null,"Notes",-1),e("p",null,[t("If you are new to Linkis, you can ignore this chapter, however, if you are already a Linkis user, we recommend you reading the following article before installing or upgrading: "),e("a",{href:"/#/docs/architecture/difference"},"Brie [...]
diff --git a/assets/manager.6973d707.js b/assets/manager.6973d707.js
index c5a67e3..130fac1 100644
--- a/assets/manager.6973d707.js
+++ b/assets/manager.6973d707.js
@@ -1 +1 @@
-import{o as n,c as l,b as e,e as t,r as a,l as i,u}from"./vendor.1180558b.js";var r="/assets/linkis-manager-01.fb5e443a.png",o="/assets/app-manager-03.5aaff6ed.png",s="/assets/resource-manager-01.86e09124.png";const g={class:"markdown-body"},c=[e("h1",null,"LinkisManager Architecture Design",-1),e("p",null,"        As an independent microservice of Linkis, LinkisManager provides AppManager (application management), ResourceManager (resource management), and LabelManager (label management [...]
+import{o as n,c as l,b as e,e as t,r as a,l as i,u}from"./vendor.1180558b.js";var r="/assets/linkis-manager-01.fb5e443a.png",o="/assets/app-manager-01.5aaff6ed.png",s="/assets/resource-manager-01.86e09124.png";const g={class:"markdown-body"},c=[e("h1",null,"LinkisManager Architecture Design",-1),e("p",null,"        As an independent microservice of Linkis, LinkisManager provides AppManager (application management), ResourceManager (resource management), and LabelManager (label management [...]
diff --git a/index.html b/index.html
index d59fbc2..76746ab 100644
--- a/index.html
+++ b/index.html
@@ -6,7 +6,7 @@
     <link rel="icon" href="/favicon.ico" />
     <meta name="viewport" content="width=device-width, initial-scale=1.0" />
     <title>Apache Linkis</title>
-    <script type="module" crossorigin src="/assets/index.c319b82e.js"></script>
+    <script type="module" crossorigin src="/assets/index.83dab580.js"></script>
     <link rel="modulepreload" href="/assets/vendor.1180558b.js">
     <link rel="stylesheet" href="/assets/index.2b54ad83.css">
   </head>



[incubator-linkis-website] 07/50: ADD: add concept and features sections to the home page

Posted by pe...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

peacewong pushed a commit to branch asf-staging
in repository https://gitbox.apache.org/repos/asf/incubator-linkis-website.git

commit 653248b385ca1ce572aa36ca4da5152796fba6d0
Author: lucaszhu <lu...@webank.com>
AuthorDate: Thu Sep 30 14:52:52 2021 +0800

    ADD: add concept and features sections to the home page
---
 src/pages/home.vue | 47 +++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 45 insertions(+), 2 deletions(-)

diff --git a/src/pages/home.vue b/src/pages/home.vue
index 321f7c2..07b7ab3 100644
--- a/src/pages/home.vue
+++ b/src/pages/home.vue
@@ -1,6 +1,6 @@
 <template>
-  <div class="ctn-block home-page text-center">
-    <div class="banner">
+  <div class="ctn-block home-page">
+    <div class="banner text-center">
       <h1 class="home-title"><span class="apache">Apache</span> <span class="linkis">Linkis</span> <span class="badge">Incubating</span></h1>
       <p class="home-desc">Decouple the upper applications and the underlying data<br>engines by building a middleware layer.</p>
       <div class="botton-row">
@@ -9,6 +9,18 @@
       </div>
     </div>
     <h1 class="home-block-title text-center">Computation Governance Concept</h1>
+    <div class="concept home-block">
+      <div class="concept-item">
+        <h3 class="concept-title">Before</h3>
+        <p class="concept-desc">Each upper application directly connects to and accesses various underlying engines in a tightly coupled way, which makes big data platform a complex network architecture.</p>
+        <!-- <img src="" alt="before" class="concept-image"> -->
+      </div>
+      <div class="concept-item">
+        <h3 class="concept-title">After</h3>
+        <p class="concept-desc">Build a common layer of "computation middleware" between the numerous upper-layer applications and the countless underlying engines to resolve these complex connection problems in a standardized reusable way</p>
+        <!-- <img src="" alt="after" class="concept-image"> -->
+      </div>
+    </div>
     <h1 class="home-block-title text-center">Core Features</h1>
     <div class="features home-block">
       <div class="feature-item">
@@ -55,6 +67,7 @@
   </div>
 </template>
 <style lang="less" scoped>
+  @import url('/src/style/virables.less');
   @import url('/src/style/base.less');
 
   .home-page {
@@ -65,6 +78,32 @@
     .home-block{
       padding: 20px 0 88px;
     }
+    .concept{
+      display: grid;
+      grid-template-columns: repeat(2, 1fr);
+      grid-column-gap: 20px;
+      .concept-item{
+        padding: 30px 20px;
+        border: 1px dashed #979797;
+        border-radius: 10px;
+        .concept-title{
+          font-size: 24px;
+          line-height: 34px;
+          margin-bottom: 16px;
+          color: @enhance-color;
+        }
+        .concept-desc{
+          font-size: 18px;
+          color: #4A4A4A;
+          line-height: 26px;
+          font-weight: 400;
+          margin-bottom: 16px;
+        }
+        .concept-image{
+          width: 100%;
+        }
+      }
+    }
     .show-case{
       display: grid;
       grid-template-columns: repeat(5, 1fr);
@@ -86,6 +125,10 @@
         background: #FFFFFF;
         box-shadow: 0 0 16px 0 rgba(211,211,211,0.50);
         border-radius: 10px;
+        padding-top: 100px;
+        background-repeat: no-repeat;
+        background-size: 100%;
+        background-position: center top;
         .item-content{
           padding: 30px 20px;
           text-align: left;



[incubator-linkis-website] 06/50: FIX ROUTER

Posted by pe...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

peacewong pushed a commit to branch asf-staging
in repository https://gitbox.apache.org/repos/asf/incubator-linkis-website.git

commit fac6d066d28e07e8cab54cdc0bc479d2cf753124
Author: lucaszhu <lu...@webank.com>
AuthorDate: Wed Sep 29 16:50:05 2021 +0800

    FIX ROUTER
---
 src/router.js | 41 +++++++++++++++++++++++++++++------------
 1 file changed, 29 insertions(+), 12 deletions(-)

diff --git a/src/router.js b/src/router.js
index d7f0ae9..db94704 100644
--- a/src/router.js
+++ b/src/router.js
@@ -1,16 +1,33 @@
-const routes = [
-  {
+const routes = [{
     path: '/',
-    component: () => import(/* webpackChunkName: "group-app" */ './app.vue'),
-    children: [
-      { path: '', name: 'home', component: () => import(/* webpackChunkName: "group-home" */ './pages/home.vue') },
-      { path: 'docs', name: 'docs', component: () => import(/* webpackChunkName: "group-docs" */ './pages/docs.vue') },
-      { path: 'faq', name: 'faq', component: () => import(/* webpackChunkName: "group-faq" */ './pages/faq.vue') },
-      { path: 'download', name: 'download', component: () => import(/* webpackChunkName: "group-download" */ './pages/download.vue') },
-      { path: 'blog', name: 'blog', component: () => import(/* webpackChunkName: "group-blog" */ './pages/blog.vue') },
-      { path: 'team', name: 'team', component: () => import(/* webpackChunkName: "group-team" */ './pages/team.vue') },
-    ]
-  }
+    name: 'home',
+    component: () => import( /* webpackChunkName: "group-home" */ './pages/home.vue')
+  },
+  {
+    path: '/docs',
+    name: 'docs',
+    component: () => import( /* webpackChunkName: "group-docs" */ './pages/docs.vue')
+  },
+  {
+    path: '/faq',
+    name: 'faq',
+    component: () => import( /* webpackChunkName: "group-faq" */ './pages/faq.vue')
+  },
+  {
+    path: '/download',
+    name: 'download',
+    component: () => import( /* webpackChunkName: "group-download" */ './pages/download.vue')
+  },
+  {
+    path: '/blog',
+    name: 'blog',
+    component: () => import( /* webpackChunkName: "group-blog" */ './pages/blog.vue')
+  },
+  {
+    path: '/team',
+    name: 'team',
+    component: () => import( /* webpackChunkName: "group-team" */ './pages/team.vue')
+  },
 ]
 
 export default routes;
\ No newline at end of file
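
One detail worth flagging in this router: the /* webpackChunkName */ magic
comments are webpack syntax, and this site is built with Vite, which ignores
them; Vite/Rollup derive chunk names from the imported file instead. If
named chunk groups are actually wanted, Rollup's manualChunks output option
is the usual mechanism. A minimal sketch, assuming the default Vite setup:

    // vite.config.js -- hypothetical; one way to get an explicitly named chunk
    import { defineConfig } from 'vite';

    export default defineConfig({
      build: {
        rollupOptions: {
          output: {
            manualChunks: {
              // emits a chunk literally named "group-docs"
              'group-docs': ['./src/pages/docs.vue'],
            },
          },
        },
      },
    });
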



[incubator-linkis-website] 25/50: ADD: blog detail page

Posted by pe...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

peacewong pushed a commit to branch asf-staging
in repository https://gitbox.apache.org/repos/asf/incubator-linkis-website.git

commit 920552834ac7b105de3ca60c0eef82aa3228a09d
Author: lucaszhu <lu...@webank.com>
AuthorDate: Wed Oct 13 16:21:37 2021 +0800

    ADD: blog detail page
---
 src/pages/blog.vue       | 62 +++++++++++++++++++++++++++++++++++++++++++++++-
 src/pages/docs/index.vue | 29 ----------------------
 src/style/base.less      | 28 ++++++++++++++++++++++
 3 files changed, 89 insertions(+), 30 deletions(-)

diff --git a/src/pages/blog.vue b/src/pages/blog.vue
index 08ba385..f8e3934 100644
--- a/src/pages/blog.vue
+++ b/src/pages/blog.vue
@@ -1,3 +1,63 @@
 <template>
-  <div>blog</div>
+  <div class="ctn-block reading-area blog-ctn">
+    <main class="main-content">
+      <h1 class="blog-title">Born at China’s WeBank, now incubating in the ASF - Introducing Apache Linkis</h1>
+      <!-- <div class="blog-info seperator"><span class="info-item">enjoyyin</span><span class="info-item">2021-9-2</span></div>
+      <div class="blog-info seperator"><span class="info-item">5 min read</span><span class="info-item">tag</span></div> -->
+    </main>
+    <div class="side-bar">
+      <router-link :to="doc.link" class="bar-item" v-for="(doc,index) in docs" :key="index">{{doc.title}}
+        <router-link :to="children.link" class="bar-item" v-for="(children,cindex) in doc.children" :key="cindex">
+          {{children.title}}
+        </router-link>
+      </router-link>
+    </div>
+  </div>
 </template>
+<style lang="less" scoped>
+  .blog-ctn {
+    padding-top: 60px;
+    padding-bottom: 80px;
+
+    .blog-title {
+      font-size: 24px;
+    }
+
+    .blog-info{
+      display: flex;
+      padding: 20px 0;
+      font-size: 16px;
+      color: rgba(15,18,34,0.45);
+      &.seperator{
+        .info-item{
+          border-right: 1px solid rgba(15,18,34,0.45);
+          &:last-child{
+            border-right: 0;
+          }
+        }
+      }
+      .info-item{
+        padding: 0 20px 0 28px;
+      }
+    }
+  }
+</style>
+<script setup>
+  const docs = [{
+    title: 'Deployment Documentation',
+    link: '/docs/deploy/linkis',
+    children: [{
+      title: 'Quickly Deploy Linkis1.0',
+      link: '/docs/deploy/linkis',
+    }, {
+      title: 'Quickly Install the EngineConnPlugin Engine Plugin',
+      link: '/docs/deploy/engins',
+    }, {
+      title: 'Linkis1.0 Distributed Deployment Manual',
+      link: '/docs/deploy/distributed',
+    }, {
+      title: 'Linkis1.0 Installation Package Directory Structure Explained',
+      link: '/docs/deploy/structure',
+    }]
+  }, ];
+</script>
\ No newline at end of file
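
A caveat on the sidebar markup above: <router-link> renders an <a> element
by default, so nesting one router-link inside another produces nested
anchors, which is invalid HTML and lets the outer link capture clicks meant
for the children. A minimal sketch of the usual fix, wrapping each group in
a plain element instead (same docs array assumed):

    <div class="side-bar">
      <div v-for="(doc, index) in docs" :key="index">
        <router-link :to="doc.link" class="bar-item">{{ doc.title }}</router-link>
        <router-link :to="children.link" class="bar-item"
          v-for="(children, cindex) in doc.children" :key="cindex">
          {{ children.title }}
        </router-link>
      </div>
    </div>
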
diff --git a/src/pages/docs/index.vue b/src/pages/docs/index.vue
index 26b759a..d40fce9 100644
--- a/src/pages/docs/index.vue
+++ b/src/pages/docs/index.vue
@@ -12,35 +12,6 @@
         </div>
     </div>
 </template>
-<style lang="less">
-    @import url('/src/style/variable.less');
-    .reading-area {
-        display: flex;
-        padding: 60px 0;
-        min-height: 600px;
-
-        .main-content {
-            width: 900px;
-            padding: 30px;
-        }
-
-        .side-bar {
-            flex: 1;
-            padding: 18px 0;
-            border-left: 1px solid #eaecef;
-
-            .bar-item {
-                display: block;
-                padding: 5px 18px;
-                color: #4A4A4A;
-                &:hover,
-                &.router-link-exact-active {
-                    color: @active-color;
-                }
-            }
-        }
-    }
-</style>
 <script setup>
     const docs = [
         {
diff --git a/src/style/base.less b/src/style/base.less
index f44d3b9..2a815af 100644
--- a/src/style/base.less
+++ b/src/style/base.less
@@ -49,3 +49,31 @@ a:visited {
 .text-center {
   text-align: center;
 }
+
+.reading-area {
+  display: flex;
+  padding: 60px 0;
+  min-height: 600px;
+
+  .main-content {
+    width: 900px;
+    padding: 30px;
+  }
+
+  .side-bar {
+    flex: 1;
+    padding: 18px 0;
+    border-left: 1px solid #eaecef;
+
+    .bar-item {
+      display: block;
+      padding: 5px 18px;
+      color: #4A4A4A;
+
+      &:hover,
+      &.router-link-exact-active {
+        color: @active-color;
+      }
+    }
+  }
+}
\ No newline at end of file



[incubator-linkis-website] 24/50: ADD: add team page

Posted by pe...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

peacewong pushed a commit to branch asf-staging
in repository https://gitbox.apache.org/repos/asf/incubator-linkis-website.git

commit 8650f31fe9ad929d4abcb7e4eac538ab1c2ce78b
Author: lucaszhu <lu...@webank.com>
AuthorDate: Wed Oct 13 11:33:55 2021 +0800

    ADD: add team page
---
 src/pages/team.vue | 194 ++++++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 193 insertions(+), 1 deletion(-)

diff --git a/src/pages/team.vue b/src/pages/team.vue
index e98fedf..1c6e49e 100644
--- a/src/pages/team.vue
+++ b/src/pages/team.vue
@@ -1,3 +1,195 @@
 <template>
-  <div>team</div>
+  <div class="ctn-block team-page">
+    <h3 class="team-title">PMC</h3>
+    <p class="team-desc">Use the links below to download the Apache Linkis (Incubating) Releases. See all Linkis releases in Github release page.</p>
+    <ul class="character-list">
+      <li class="character-item text-center">
+        <div class="character-avatar"></div>
+        <div class="character-desc">
+          <h3 class="character-name">lululu</h3>
+          <a href="" class="character-link">@lululu</a>
+        </div>
+      </li>
+      <li class="character-item text-center">
+        <div class="character-avatar"></div>
+        <div class="character-desc">
+          <h3 class="character-name">lululu</h3>
+          <a href="" class="character-link">@lululu</a>
+        </div>
+      </li>
+      <li class="character-item text-center">
+        <div class="character-avatar"></div>
+        <div class="character-desc">
+          <h3 class="character-name">lululu</h3>
+          <a href="" class="character-link">@lululu</a>
+        </div>
+      </li>
+      <li class="character-item text-center">
+        <div class="character-avatar"></div>
+        <div class="character-desc">
+          <h3 class="character-name">lululu</h3>
+          <a href="" class="character-link">@lululu</a>
+        </div>
+      </li>
+      <li class="character-item text-center">
+        <div class="character-avatar"></div>
+        <div class="character-desc">
+          <h3 class="character-name">lululu</h3>
+          <a href="" class="character-link">@lululu</a>
+        </div>
+      </li>
+      <li class="character-item text-center">
+        <div class="character-avatar"></div>
+        <div class="character-desc">
+          <h3 class="character-name">lululu</h3>
+          <a href="" class="character-link">@lululu</a>
+        </div>
+      </li>
+      <li class="character-item text-center">
+        <div class="character-avatar"></div>
+        <div class="character-desc">
+          <h3 class="character-name">lululu</h3>
+          <a href="" class="character-link">@lululu</a>
+        </div>
+      </li>
+    </ul>
+    <h3 class="team-title">Committer</h3>
+    <p class="team-desc">Use the links below to download the Apache Linkis (Incubating) Releases. See all Linkis releases in Github release page.</p>
+    <ul class="character-list committer">
+      <li class="character-item">
+        <div class="character-avatar"></div>
+        <div class="character-desc">
+          <h3 class="character-name">lululu</h3>
+          <a href="" class="character-link">@lululu</a>
+        </div>
+      </li>
+      <li class="character-item">
+        <div class="character-avatar"></div>
+        <div class="character-desc">
+          <h3 class="character-name">lululu</h3>
+          <a href="" class="character-link">@lululu</a>
+        </div>
+      </li>
+      <li class="character-item">
+        <div class="character-avatar"></div>
+        <div class="character-desc">
+          <h3 class="character-name">lululu</h3>
+          <a href="" class="character-link">@lululu</a>
+        </div>
+      </li>
+      <li class="character-item">
+        <div class="character-avatar"></div>
+        <div class="character-desc">
+          <h3 class="character-name">lululu</h3>
+          <a href="" class="character-link">@lululu</a>
+        </div>
+      </li>
+      <li class="character-item">
+        <div class="character-avatar"></div>
+        <div class="character-desc">
+          <h3 class="character-name">lululu</h3>
+          <a href="" class="character-link">@lululu</a>
+        </div>
+      </li>
+      <li class="character-item">
+        <div class="character-avatar"></div>
+        <div class="character-desc">
+          <h3 class="character-name">lululu</h3>
+          <a href="" class="character-link">@lululu</a>
+        </div>
+      </li>
+    </ul>
+    <h3 class="team-title">Contributors</h3>
+    <p class="team-desc">Use the links below to download the Apache Linkis (Incubating) Releases. See all Linkis releases in Github release page.</p>
+    <ul class="contributor-list">
+      <li class="contributor-item">apache/apisix-go-plugin-runner</li>
+    </ul>
+  </div>
 </template>
+<style lang="less" scoped>
+@import url('/src/style/variable.less');
+.team-page{
+  padding-top: 60px;
+  .team-title{
+    font-size: 24px;
+    line-height: 34px;
+  }
+  .team-desc{
+    color: @enhance-color;
+    font-weight: 400;
+  }
+  .contributor-list{
+    padding: 20px 0 40px;
+    .contributor-item{
+      display: inline-block;
+      margin-right: 20px;
+      margin-bottom: 20px;
+      padding: 16px 16px 16px 48px;
+      background-size: 24px;
+      background-position: 16px center;
+      background-repeat: no-repeat;
+      color: @enhance-color;
+      border: 1px solid rgba(15,18,34,0.20);
+      border-radius: 4px;
+      &:last-child{
+        margin-right: 0;
+      }
+    }
+  }
+  .character-list {
+    display: grid;
+    grid-template-columns: repeat(6, 1fr);
+    grid-column-gap: 20px;
+    grid-row-gap: 20px;
+    padding: 20px 0 60px;
+    &.committer{
+      grid-template-columns: repeat(5, 224px);
+      .character-item{
+        display: flex;
+        padding: 20px;
+        align-items: center;
+        .character-avatar{
+          width: 60px;
+          height: 60px;
+          margin: 0;
+        }
+        .character-desc{
+          flex: 1;
+          padding-left: 16px;
+          min-width: 0;
+        }
+      }
+    }
+    .character-item{
+      border: 1px solid rgba(15,18,34,0.20);
+      border-radius: 4px;
+      // Helps handle text overflow
+      min-width: 0;
+      padding: 0 20px 20px;
+      .character-avatar{
+        width: 120px;
+        height: 120px;
+        margin: 30px auto 10px;
+        background: #D8D8D8;
+        border-radius: 50%;
+      }
+      .character-name{
+        color: @enhance-color;
+        line-height: 24px;
+        font-size: 16px;
+        white-space: nowrap;
+        overflow: hidden;
+        text-overflow: ellipsis;
+      }
+      .character-link{
+        color: rgba(15,18,34,0.65);
+        font-weight: 400;
+        white-space: nowrap;
+        overflow: hidden;
+        text-overflow: ellipsis;
+      }
+    }
+  }
+}
+</style>
+



[incubator-linkis-website] 31/50: user case img

Posted by pe...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

peacewong pushed a commit to branch asf-staging
in repository https://gitbox.apache.org/repos/asf/incubator-linkis-website.git

commit d2c09f04e18f199b8e753733afe65b5028330277
Author: casionone <ca...@gmail.com>
AuthorDate: Mon Oct 18 12:06:16 2021 +0800

    user case img
---
 src/assets/user/360.png                            | Bin 0 -> 14956 bytes
 src/assets/user/97wulian.png                       | Bin 28819 -> 19333 bytes
 "src/assets/user/T3\345\207\272\350\241\214.png"   | Bin 7258 -> 11196 bytes
 src/assets/user/aisino.png                         | Bin 46944 -> 33715 bytes
 src/assets/user/boss.png                           | Bin 8386 -> 9165 bytes
 src/assets/user/huazhong.jpg                       | Bin 12673 -> 9938 bytes
 src/assets/user/lianchuang.png                     | Bin 11438 -> 34598 bytes
 src/assets/user/mobtech..png                       | Bin 1829 -> 11203 bytes
 src/assets/user/others/360.png                     | Bin 14323 -> 0 bytes
 ...60\221\347\224\237\351\223\266\350\241\214.jpg" | Bin 16640 -> 0 bytes
 ...70\255\345\233\275\347\224\265\347\247\221.jpg" | Bin 5955 -> 0 bytes
 ...72\221\345\233\276\347\247\221\346\212\200.png" | Bin 35242 -> 0 bytes
 ...72\244\351\200\232\351\223\266\350\241\214.jpg" | Bin 8099 -> 0 bytes
 ...72\254\344\270\234\346\225\260\347\247\221.jpg" | Bin 7895 -> 0 bytes
 .../\345\244\251\347\277\274\344\272\221.png"      | Bin 39592 -> 0 bytes
 ...13\233\345\225\206\351\223\266\350\241\214.jpg" | Bin 10462 -> 0 bytes
 ...31\276\344\277\241\351\223\266\350\241\214.jpg" | Bin 6739 -> 0 bytes
 ...76\216\345\233\242\347\202\271\350\257\204.jpg" | Bin 10596 -> 0 bytes
 ...05\276\350\256\257\350\264\242\347\273\217.jpg" | Bin 14500 -> 0 bytes
 ...24\232\346\235\245\346\261\275\350\275\246.jpg" | Bin 7034 -> 0 bytes
 ...02\256\346\224\277\351\223\266\350\241\214.jpg" | Bin 14657 -> 0 bytes
 src/assets/user/xidian.jpg                         | Bin 12475 -> 9354 bytes
 src/assets/user/yitu.png                           | Bin 41437 -> 16224 bytes
 src/assets/user/zhongticaipng.png                  | Bin 31958 -> 22253 bytes
 ...70\207\347\247\221\351\207\207\347\255\221.png" | Bin 2468 -> 4479 bytes
 .../user/\344\270\234\346\226\271\351\200\232.png" | Bin 33873 -> 20974 bytes
 ...60\221\347\224\237\351\223\266\350\241\214.jpg" | Bin 0 -> 5007 bytes
 ...70\255\345\233\275\347\224\265\344\277\241.png" | Bin 6468 -> 11450 bytes
 ...70\255\345\233\275\347\224\265\347\247\221.jpg" | Bin 0 -> 5108 bytes
 ...70\255\351\200\232\344\272\221\344\273\223.png" | Bin 20138 -> 27653 bytes
 ...34\211\351\231\220\345\205\254\345\217\270.png" | Bin 10006 -> 19180 bytes
 ...61\237\345\256\236\351\252\214\345\256\244.png" | Bin 13145 -> 17558 bytes
 ...72\221\345\233\276\347\247\221\346\212\200.png" | Bin 0 -> 23360 bytes
 ...72\244\351\200\232\351\223\266\350\241\214.jpg" | Bin 0 -> 6173 bytes
 ...72\254\344\270\234\346\225\260\347\247\221.jpg" | Bin 0 -> 4260 bytes
 ...77\241\347\224\250\347\224\237\346\264\273.png" | Bin 3978 -> 10504 bytes
 .../user/\345\223\227\345\225\246\345\225\246.jpg" | Bin 5990 -> 2707 bytes
 ...34\210\345\244\226\345\220\214\345\255\246.png" | Bin 8081 -> 15296 bytes
 .../user/\345\244\251\347\277\274\344\272\221.png" | Bin 0 -> 24944 bytes
 "src/assets/user/\345\271\263\345\256\211.png"     | Bin 20795 -> 19563 bytes
 ...14\273\344\277\235\347\247\221\346\212\200.png" | Bin 2083 -> 9949 bytes
 ...72\221\345\276\231\347\247\221\346\212\200.png" | Bin 15448 -> 5315 bytes
 ...03\275\345\244\247\346\225\260\346\215\256.png" | Bin 13462 -> 20687 bytes
 ...13\233\345\225\206\351\223\266\350\241\214.jpg" | Bin 0 -> 5594 bytes
 ...34\211\351\231\220\345\205\254\345\217\270.png" | Bin 29500 -> 21785 bytes
 ...24\265\351\255\202\347\275\221\347\273\234.png" | Bin 5553 -> 8600 bytes
 ...41\224\345\255\220\345\210\206\346\234\237.png" | Bin 6968 -> 16286 bytes
 ...65\267\345\272\267\345\250\201\350\247\206.png" | Bin 22412 -> 27218 bytes
 ...20\206\346\203\263\346\261\275\350\275\246.png" | Bin 27672 -> 16511 bytes
 ...31\276\344\277\241\351\223\266\350\241\214.jpg" | Bin 0 -> 4048 bytes
 .../user/\347\231\276\346\234\233\344\272\221.png" | Bin 24473 -> 17617 bytes
 ...53\213\345\210\233\345\225\206\345\237\216.png" | Bin 24213 -> 27107 bytes
 ...72\242\350\261\241\344\272\221\350\205\276.png" | Bin 4596 -> 10362 bytes
 ...76\216\345\233\242\347\202\271\350\257\204.jpg" | Bin 0 -> 5183 bytes
 ...05\276\350\256\257\350\264\242\347\273\217.jpg" | Bin 0 -> 6136 bytes
 ...11\276\344\275\263\347\224\237\346\264\273.jpg" | Bin 5444 -> 4355 bytes
 ...20\250\346\221\251\350\200\266\344\272\221.png" | Bin 5501 -> 10090 bytes
 ...24\232\346\235\245\346\261\275\350\275\246.jpg" | Bin 0 -> 5672 bytes
 ...02\256\346\224\277\351\223\266\350\241\214.jpg" | Bin 0 -> 6134 bytes
 ...41\266\347\202\271\350\275\257\344\273\266.png" | Bin 8796 -> 12568 bytes
 src/pages/home/img.js                              |  50 +++++++++++++++++++++
 src/pages/home/index.vue                           |  15 +++----
 62 files changed, 55 insertions(+), 10 deletions(-)

diff --git a/src/assets/user/360.png b/src/assets/user/360.png
new file mode 100644
index 0000000..88e0d4c
Binary files /dev/null and b/src/assets/user/360.png differ
diff --git a/src/assets/user/97wulian.png b/src/assets/user/97wulian.png
index 5b828b1..6d72b3f 100644
Binary files a/src/assets/user/97wulian.png and b/src/assets/user/97wulian.png differ
diff --git "a/src/assets/user/T3\345\207\272\350\241\214.png" "b/src/assets/user/T3\345\207\272\350\241\214.png"
index 1491def..b041927 100644
Binary files "a/src/assets/user/T3\345\207\272\350\241\214.png" and "b/src/assets/user/T3\345\207\272\350\241\214.png" differ
diff --git a/src/assets/user/aisino.png b/src/assets/user/aisino.png
index 73b7589..d35e2ce 100644
Binary files a/src/assets/user/aisino.png and b/src/assets/user/aisino.png differ
diff --git a/src/assets/user/boss.png b/src/assets/user/boss.png
index 17bb2b2..e96f42a 100644
Binary files a/src/assets/user/boss.png and b/src/assets/user/boss.png differ
diff --git a/src/assets/user/huazhong.jpg b/src/assets/user/huazhong.jpg
index 70e557f..4821862 100644
Binary files a/src/assets/user/huazhong.jpg and b/src/assets/user/huazhong.jpg differ
diff --git a/src/assets/user/lianchuang.png b/src/assets/user/lianchuang.png
index 1320cbe..64c44b4 100644
Binary files a/src/assets/user/lianchuang.png and b/src/assets/user/lianchuang.png differ
diff --git a/src/assets/user/mobtech..png b/src/assets/user/mobtech..png
index 0ba017e..d026cff 100644
Binary files a/src/assets/user/mobtech..png and b/src/assets/user/mobtech..png differ
diff --git a/src/assets/user/others/360.png b/src/assets/user/others/360.png
deleted file mode 100644
index 74b5d13..0000000
Binary files a/src/assets/user/others/360.png and /dev/null differ
diff --git "a/src/assets/user/others/\344\270\255\345\233\275\346\260\221\347\224\237\351\223\266\350\241\214.jpg" "b/src/assets/user/others/\344\270\255\345\233\275\346\260\221\347\224\237\351\223\266\350\241\214.jpg"
deleted file mode 100644
index e5fb3b5..0000000
Binary files "a/src/assets/user/others/\344\270\255\345\233\275\346\260\221\347\224\237\351\223\266\350\241\214.jpg" and /dev/null differ
diff --git "a/src/assets/user/others/\344\270\255\345\233\275\347\224\265\347\247\221.jpg" "b/src/assets/user/others/\344\270\255\345\233\275\347\224\265\347\247\221.jpg"
deleted file mode 100644
index 589617f..0000000
Binary files "a/src/assets/user/others/\344\270\255\345\233\275\347\224\265\347\247\221.jpg" and /dev/null differ
diff --git "a/src/assets/user/others/\344\272\221\345\233\276\347\247\221\346\212\200.png" "b/src/assets/user/others/\344\272\221\345\233\276\347\247\221\346\212\200.png"
deleted file mode 100644
index 249aaaa..0000000
Binary files "a/src/assets/user/others/\344\272\221\345\233\276\347\247\221\346\212\200.png" and /dev/null differ
diff --git "a/src/assets/user/others/\344\272\244\351\200\232\351\223\266\350\241\214.jpg" "b/src/assets/user/others/\344\272\244\351\200\232\351\223\266\350\241\214.jpg"
deleted file mode 100644
index c2232c7..0000000
Binary files "a/src/assets/user/others/\344\272\244\351\200\232\351\223\266\350\241\214.jpg" and /dev/null differ
diff --git "a/src/assets/user/others/\344\272\254\344\270\234\346\225\260\347\247\221.jpg" "b/src/assets/user/others/\344\272\254\344\270\234\346\225\260\347\247\221.jpg"
deleted file mode 100644
index 7a98336..0000000
Binary files "a/src/assets/user/others/\344\272\254\344\270\234\346\225\260\347\247\221.jpg" and /dev/null differ
diff --git "a/src/assets/user/others/\345\244\251\347\277\274\344\272\221.png" "b/src/assets/user/others/\345\244\251\347\277\274\344\272\221.png"
deleted file mode 100644
index 8973744..0000000
Binary files "a/src/assets/user/others/\345\244\251\347\277\274\344\272\221.png" and /dev/null differ
diff --git "a/src/assets/user/others/\346\213\233\345\225\206\351\223\266\350\241\214.jpg" "b/src/assets/user/others/\346\213\233\345\225\206\351\223\266\350\241\214.jpg"
deleted file mode 100644
index 8f3d41a..0000000
Binary files "a/src/assets/user/others/\346\213\233\345\225\206\351\223\266\350\241\214.jpg" and /dev/null differ
diff --git "a/src/assets/user/others/\347\231\276\344\277\241\351\223\266\350\241\214.jpg" "b/src/assets/user/others/\347\231\276\344\277\241\351\223\266\350\241\214.jpg"
deleted file mode 100644
index e338788..0000000
Binary files "a/src/assets/user/others/\347\231\276\344\277\241\351\223\266\350\241\214.jpg" and /dev/null differ
diff --git "a/src/assets/user/others/\347\276\216\345\233\242\347\202\271\350\257\204.jpg" "b/src/assets/user/others/\347\276\216\345\233\242\347\202\271\350\257\204.jpg"
deleted file mode 100644
index 33fda33..0000000
Binary files "a/src/assets/user/others/\347\276\216\345\233\242\347\202\271\350\257\204.jpg" and /dev/null differ
diff --git "a/src/assets/user/others/\350\205\276\350\256\257\350\264\242\347\273\217.jpg" "b/src/assets/user/others/\350\205\276\350\256\257\350\264\242\347\273\217.jpg"
deleted file mode 100644
index d409f43..0000000
Binary files "a/src/assets/user/others/\350\205\276\350\256\257\350\264\242\347\273\217.jpg" and /dev/null differ
diff --git "a/src/assets/user/others/\350\224\232\346\235\245\346\261\275\350\275\246.jpg" "b/src/assets/user/others/\350\224\232\346\235\245\346\261\275\350\275\246.jpg"
deleted file mode 100644
index c1df2ac..0000000
Binary files "a/src/assets/user/others/\350\224\232\346\235\245\346\261\275\350\275\246.jpg" and /dev/null differ
diff --git "a/src/assets/user/others/\351\202\256\346\224\277\351\223\266\350\241\214.jpg" "b/src/assets/user/others/\351\202\256\346\224\277\351\223\266\350\241\214.jpg"
deleted file mode 100644
index 02356c9..0000000
Binary files "a/src/assets/user/others/\351\202\256\346\224\277\351\223\266\350\241\214.jpg" and /dev/null differ
diff --git a/src/assets/user/xidian.jpg b/src/assets/user/xidian.jpg
index dc37326..558341e 100644
Binary files a/src/assets/user/xidian.jpg and b/src/assets/user/xidian.jpg differ
diff --git a/src/assets/user/yitu.png b/src/assets/user/yitu.png
index 58aaa3f..8bf51ea 100644
Binary files a/src/assets/user/yitu.png and b/src/assets/user/yitu.png differ
diff --git a/src/assets/user/zhongticaipng.png b/src/assets/user/zhongticaipng.png
index c343ba5..eb97549 100644
Binary files a/src/assets/user/zhongticaipng.png and b/src/assets/user/zhongticaipng.png differ
diff --git "a/src/assets/user/\344\270\207\347\247\221\351\207\207\347\255\221.png" "b/src/assets/user/\344\270\207\347\247\221\351\207\207\347\255\221.png"
index 35f056c..58e60be 100644
Binary files "a/src/assets/user/\344\270\207\347\247\221\351\207\207\347\255\221.png" and "b/src/assets/user/\344\270\207\347\247\221\351\207\207\347\255\221.png" differ
diff --git "a/src/assets/user/\344\270\234\346\226\271\351\200\232.png" "b/src/assets/user/\344\270\234\346\226\271\351\200\232.png"
index 72fde94..852fd81 100644
Binary files "a/src/assets/user/\344\270\234\346\226\271\351\200\232.png" and "b/src/assets/user/\344\270\234\346\226\271\351\200\232.png" differ
diff --git "a/src/assets/user/\344\270\255\345\233\275\346\260\221\347\224\237\351\223\266\350\241\214.jpg" "b/src/assets/user/\344\270\255\345\233\275\346\260\221\347\224\237\351\223\266\350\241\214.jpg"
new file mode 100644
index 0000000..3e72301
Binary files /dev/null and "b/src/assets/user/\344\270\255\345\233\275\346\260\221\347\224\237\351\223\266\350\241\214.jpg" differ
diff --git "a/src/assets/user/\344\270\255\345\233\275\347\224\265\344\277\241.png" "b/src/assets/user/\344\270\255\345\233\275\347\224\265\344\277\241.png"
index f34cc37..76bcf7a 100644
Binary files "a/src/assets/user/\344\270\255\345\233\275\347\224\265\344\277\241.png" and "b/src/assets/user/\344\270\255\345\233\275\347\224\265\344\277\241.png" differ
diff --git "a/src/assets/user/\344\270\255\345\233\275\347\224\265\347\247\221.jpg" "b/src/assets/user/\344\270\255\345\233\275\347\224\265\347\247\221.jpg"
new file mode 100644
index 0000000..328dfa8
Binary files /dev/null and "b/src/assets/user/\344\270\255\345\233\275\347\224\265\347\247\221.jpg" differ
diff --git "a/src/assets/user/\344\270\255\351\200\232\344\272\221\344\273\223.png" "b/src/assets/user/\344\270\255\351\200\232\344\272\221\344\273\223.png"
index 7a27229..bf374b6 100644
Binary files "a/src/assets/user/\344\270\255\351\200\232\344\272\221\344\273\223.png" and "b/src/assets/user/\344\270\255\351\200\232\344\272\221\344\273\223.png" differ
diff --git "a/src/assets/user/\344\270\255\351\200\232\346\234\215\345\205\254\344\274\227\344\277\241\346\201\257\350\202\241\344\273\275\346\234\211\351\231\220\345\205\254\345\217\270.png" "b/src/assets/user/\344\270\255\351\200\232\346\234\215\345\205\254\344\274\227\344\277\241\346\201\257\350\202\241\344\273\275\346\234\211\351\231\220\345\205\254\345\217\270.png"
index 8946372..cf4a9c3 100644
Binary files "a/src/assets/user/\344\270\255\351\200\232\346\234\215\345\205\254\344\274\227\344\277\241\346\201\257\350\202\241\344\273\275\346\234\211\351\231\220\345\205\254\345\217\270.png" and "b/src/assets/user/\344\270\255\351\200\232\346\234\215\345\205\254\344\274\227\344\277\241\346\201\257\350\202\241\344\273\275\346\234\211\351\231\220\345\205\254\345\217\270.png" differ
diff --git "a/src/assets/user/\344\271\213\346\261\237\345\256\236\351\252\214\345\256\244.png" "b/src/assets/user/\344\271\213\346\261\237\345\256\236\351\252\214\345\256\244.png"
index 1fbe9ce..5e8bb40 100644
Binary files "a/src/assets/user/\344\271\213\346\261\237\345\256\236\351\252\214\345\256\244.png" and "b/src/assets/user/\344\271\213\346\261\237\345\256\236\351\252\214\345\256\244.png" differ
diff --git "a/src/assets/user/\344\272\221\345\233\276\347\247\221\346\212\200.png" "b/src/assets/user/\344\272\221\345\233\276\347\247\221\346\212\200.png"
new file mode 100644
index 0000000..ecca9a8
Binary files /dev/null and "b/src/assets/user/\344\272\221\345\233\276\347\247\221\346\212\200.png" differ
diff --git "a/src/assets/user/\344\272\244\351\200\232\351\223\266\350\241\214.jpg" "b/src/assets/user/\344\272\244\351\200\232\351\223\266\350\241\214.jpg"
new file mode 100644
index 0000000..67dc266
Binary files /dev/null and "b/src/assets/user/\344\272\244\351\200\232\351\223\266\350\241\214.jpg" differ
diff --git "a/src/assets/user/\344\272\254\344\270\234\346\225\260\347\247\221.jpg" "b/src/assets/user/\344\272\254\344\270\234\346\225\260\347\247\221.jpg"
new file mode 100644
index 0000000..4e48bd3
Binary files /dev/null and "b/src/assets/user/\344\272\254\344\270\234\346\225\260\347\247\221.jpg" differ
diff --git "a/src/assets/user/\344\277\241\347\224\250\347\224\237\346\264\273.png" "b/src/assets/user/\344\277\241\347\224\250\347\224\237\346\264\273.png"
index 8a767b1..9af5495 100644
Binary files "a/src/assets/user/\344\277\241\347\224\250\347\224\237\346\264\273.png" and "b/src/assets/user/\344\277\241\347\224\250\347\224\237\346\264\273.png" differ
diff --git "a/src/assets/user/\345\223\227\345\225\246\345\225\246.jpg" "b/src/assets/user/\345\223\227\345\225\246\345\225\246.jpg"
index 3d94cd0..2ae1506 100644
Binary files "a/src/assets/user/\345\223\227\345\225\246\345\225\246.jpg" and "b/src/assets/user/\345\223\227\345\225\246\345\225\246.jpg" differ
diff --git "a/src/assets/user/\345\234\210\345\244\226\345\220\214\345\255\246.png" "b/src/assets/user/\345\234\210\345\244\226\345\220\214\345\255\246.png"
index fc623d4..494cf8f 100644
Binary files "a/src/assets/user/\345\234\210\345\244\226\345\220\214\345\255\246.png" and "b/src/assets/user/\345\234\210\345\244\226\345\220\214\345\255\246.png" differ
diff --git "a/src/assets/user/\345\244\251\347\277\274\344\272\221.png" "b/src/assets/user/\345\244\251\347\277\274\344\272\221.png"
new file mode 100644
index 0000000..0f26451
Binary files /dev/null and "b/src/assets/user/\345\244\251\347\277\274\344\272\221.png" differ
diff --git "a/src/assets/user/\345\271\263\345\256\211.png" "b/src/assets/user/\345\271\263\345\256\211.png"
index 4895178..861fb26 100644
Binary files "a/src/assets/user/\345\271\263\345\256\211.png" and "b/src/assets/user/\345\271\263\345\256\211.png" differ
diff --git "a/src/assets/user/\345\271\263\345\256\211\345\214\273\344\277\235\347\247\221\346\212\200.png" "b/src/assets/user/\345\271\263\345\256\211\345\214\273\344\277\235\347\247\221\346\212\200.png"
index 156be44..7b019f5 100644
Binary files "a/src/assets/user/\345\271\263\345\256\211\345\214\273\344\277\235\347\247\221\346\212\200.png" and "b/src/assets/user/\345\271\263\345\256\211\345\214\273\344\277\235\347\247\221\346\212\200.png" differ
diff --git "a/src/assets/user/\345\271\277\345\267\236\344\272\221\345\276\231\347\247\221\346\212\200.png" "b/src/assets/user/\345\271\277\345\267\236\344\272\221\345\276\231\347\247\221\346\212\200.png"
index 6783b0f..5e027bd 100644
Binary files "a/src/assets/user/\345\271\277\345\267\236\344\272\221\345\276\231\347\247\221\346\212\200.png" and "b/src/assets/user/\345\271\277\345\267\236\344\272\221\345\276\231\347\247\221\346\212\200.png" differ
diff --git "a/src/assets/user/\346\210\220\351\203\275\345\244\247\346\225\260\346\215\256.png" "b/src/assets/user/\346\210\220\351\203\275\345\244\247\346\225\260\346\215\256.png"
index f6a7e4e..dfaa99d 100644
Binary files "a/src/assets/user/\346\210\220\351\203\275\345\244\247\346\225\260\346\215\256.png" and "b/src/assets/user/\346\210\220\351\203\275\345\244\247\346\225\260\346\215\256.png" differ
diff --git "a/src/assets/user/\346\213\233\345\225\206\351\223\266\350\241\214.jpg" "b/src/assets/user/\346\213\233\345\225\206\351\223\266\350\241\214.jpg"
new file mode 100644
index 0000000..b83e1da
Binary files /dev/null and "b/src/assets/user/\346\213\233\345\225\206\351\223\266\350\241\214.jpg" differ
diff --git "a/src/assets/user/\346\213\233\350\201\224\346\266\210\350\264\271\351\207\221\350\236\215\346\234\211\351\231\220\345\205\254\345\217\270.png" "b/src/assets/user/\346\213\233\350\201\224\346\266\210\350\264\271\351\207\221\350\236\215\346\234\211\351\231\220\345\205\254\345\217\270.png"
index 7a39d07..9d2ba48 100644
Binary files "a/src/assets/user/\346\213\233\350\201\224\346\266\210\350\264\271\351\207\221\350\236\215\346\234\211\351\231\220\345\205\254\345\217\270.png" and "b/src/assets/user/\346\213\233\350\201\224\346\266\210\350\264\271\351\207\221\350\236\215\346\234\211\351\231\220\345\205\254\345\217\270.png" differ
diff --git "a/src/assets/user/\346\235\255\345\267\236\347\224\265\351\255\202\347\275\221\347\273\234.png" "b/src/assets/user/\346\235\255\345\267\236\347\224\265\351\255\202\347\275\221\347\273\234.png"
index bc61646..7a720df 100644
Binary files "a/src/assets/user/\346\235\255\345\267\236\347\224\265\351\255\202\347\275\221\347\273\234.png" and "b/src/assets/user/\346\235\255\345\267\236\347\224\265\351\255\202\347\275\221\347\273\234.png" differ
diff --git "a/src/assets/user/\346\241\224\345\255\220\345\210\206\346\234\237.png" "b/src/assets/user/\346\241\224\345\255\220\345\210\206\346\234\237.png"
index 3ff45b8..ff3b65a 100644
Binary files "a/src/assets/user/\346\241\224\345\255\220\345\210\206\346\234\237.png" and "b/src/assets/user/\346\241\224\345\255\220\345\210\206\346\234\237.png" differ
diff --git "a/src/assets/user/\346\265\267\345\272\267\345\250\201\350\247\206.png" "b/src/assets/user/\346\265\267\345\272\267\345\250\201\350\247\206.png"
index a961cc4..0d38210 100644
Binary files "a/src/assets/user/\346\265\267\345\272\267\345\250\201\350\247\206.png" and "b/src/assets/user/\346\265\267\345\272\267\345\250\201\350\247\206.png" differ
diff --git "a/src/assets/user/\347\220\206\346\203\263\346\261\275\350\275\246.png" "b/src/assets/user/\347\220\206\346\203\263\346\261\275\350\275\246.png"
index 3c0c20f..161c4a5 100644
Binary files "a/src/assets/user/\347\220\206\346\203\263\346\261\275\350\275\246.png" and "b/src/assets/user/\347\220\206\346\203\263\346\261\275\350\275\246.png" differ
diff --git "a/src/assets/user/\347\231\276\344\277\241\351\223\266\350\241\214.jpg" "b/src/assets/user/\347\231\276\344\277\241\351\223\266\350\241\214.jpg"
new file mode 100644
index 0000000..130810f
Binary files /dev/null and "b/src/assets/user/\347\231\276\344\277\241\351\223\266\350\241\214.jpg" differ
diff --git "a/src/assets/user/\347\231\276\346\234\233\344\272\221.png" "b/src/assets/user/\347\231\276\346\234\233\344\272\221.png"
index 90395c6..8ce7aef 100644
Binary files "a/src/assets/user/\347\231\276\346\234\233\344\272\221.png" and "b/src/assets/user/\347\231\276\346\234\233\344\272\221.png" differ
diff --git "a/src/assets/user/\347\253\213\345\210\233\345\225\206\345\237\216.png" "b/src/assets/user/\347\253\213\345\210\233\345\225\206\345\237\216.png"
index ca71850..c5520fa 100644
Binary files "a/src/assets/user/\347\253\213\345\210\233\345\225\206\345\237\216.png" and "b/src/assets/user/\347\253\213\345\210\233\345\225\206\345\237\216.png" differ
diff --git "a/src/assets/user/\347\272\242\350\261\241\344\272\221\350\205\276.png" "b/src/assets/user/\347\272\242\350\261\241\344\272\221\350\205\276.png"
index bd54887..fda67c5 100644
Binary files "a/src/assets/user/\347\272\242\350\261\241\344\272\221\350\205\276.png" and "b/src/assets/user/\347\272\242\350\261\241\344\272\221\350\205\276.png" differ
diff --git "a/src/assets/user/\347\276\216\345\233\242\347\202\271\350\257\204.jpg" "b/src/assets/user/\347\276\216\345\233\242\347\202\271\350\257\204.jpg"
new file mode 100644
index 0000000..36e37e3
Binary files /dev/null and "b/src/assets/user/\347\276\216\345\233\242\347\202\271\350\257\204.jpg" differ
diff --git "a/src/assets/user/\350\205\276\350\256\257\350\264\242\347\273\217.jpg" "b/src/assets/user/\350\205\276\350\256\257\350\264\242\347\273\217.jpg"
new file mode 100644
index 0000000..1a2953c
Binary files /dev/null and "b/src/assets/user/\350\205\276\350\256\257\350\264\242\347\273\217.jpg" differ
diff --git "a/src/assets/user/\350\211\276\344\275\263\347\224\237\346\264\273.jpg" "b/src/assets/user/\350\211\276\344\275\263\347\224\237\346\264\273.jpg"
index ab32413..b7380cf 100644
Binary files "a/src/assets/user/\350\211\276\344\275\263\347\224\237\346\264\273.jpg" and "b/src/assets/user/\350\211\276\344\275\263\347\224\237\346\264\273.jpg" differ
diff --git "a/src/assets/user/\350\220\250\346\221\251\350\200\266\344\272\221.png" "b/src/assets/user/\350\220\250\346\221\251\350\200\266\344\272\221.png"
index 5a39dff..84b2c39 100644
Binary files "a/src/assets/user/\350\220\250\346\221\251\350\200\266\344\272\221.png" and "b/src/assets/user/\350\220\250\346\221\251\350\200\266\344\272\221.png" differ
diff --git "a/src/assets/user/\350\224\232\346\235\245\346\261\275\350\275\246.jpg" "b/src/assets/user/\350\224\232\346\235\245\346\261\275\350\275\246.jpg"
new file mode 100644
index 0000000..b0ee1fe
Binary files /dev/null and "b/src/assets/user/\350\224\232\346\235\245\346\261\275\350\275\246.jpg" differ
diff --git "a/src/assets/user/\351\202\256\346\224\277\351\223\266\350\241\214.jpg" "b/src/assets/user/\351\202\256\346\224\277\351\223\266\350\241\214.jpg"
new file mode 100644
index 0000000..7847eac
Binary files /dev/null and "b/src/assets/user/\351\202\256\346\224\277\351\223\266\350\241\214.jpg" differ
diff --git "a/src/assets/user/\351\241\266\347\202\271\350\275\257\344\273\266.png" "b/src/assets/user/\351\241\266\347\202\271\350\275\257\344\273\266.png"
index 8e80dd0..8eef1ff 100644
Binary files "a/src/assets/user/\351\241\266\347\202\271\350\275\257\344\273\266.png" and "b/src/assets/user/\351\241\266\347\202\271\350\275\257\344\273\266.png" differ
diff --git a/src/pages/home/img.js b/src/pages/home/img.js
new file mode 100644
index 0000000..60197d3
--- /dev/null
+++ b/src/pages/home/img.js
@@ -0,0 +1,50 @@
+const  img=[
+    {"url":"邮政银行.jpg"},
+    {"url":"中国民生银行.jpg"},
+    {"url":"美团点评.jpg"},
+    {"url":"中国电信.png"},
+    {"url":"交通银行.jpg"},
+    {"url":"招商银行.jpg"},
+    {"url":"招联消费金融有限公司.png"},
+    {"url":"平安.png"},
+    // {"url":"平安医保科技.png"},
+    {"url":"360.png"},
+    {"url":"海康威视.png"},
+    {"url":"理想汽车.png"},
+    {"url":"百信银行.jpg"},
+    {"url":"百望云.png"},
+    {"url":"立创商城.png"},
+    {"url":"红象云腾.png"},
+    {"url":"腾讯财经.jpg"},
+    {"url":"艾佳生活.jpg"},
+    // {"url":"萨摩耶云.png"},
+    {"url":"蔚来汽车.jpg"},
+    {"url":"顶点软件.png"},
+    {"url":"97wulian.png"},
+    // {"url":"T3出行.png"},
+    {"url":"aisino.png"},
+    {"url":"boss.png"},
+    {"url":"huazhong.jpg"},
+    {"url":"lianchuang.png"},
+    // {"url":"mobtech..png"},
+    {"url":"xidian.jpg"},
+    {"url":"yitu.png"},
+    {"url":"zhongticaipng.png"},
+    {"url":"万科采筑.png"},
+    {"url":"东方通.png"},
+    {"url":"中国电科.jpg"},
+    {"url":"中通云仓.png"},
+    // {"url":"中通服公众信息股份有限公司.png"},
+    // {"url":"之江实验室.png"},
+    {"url":"云图科技.png"},
+    {"url":"京东数科.jpg"},
+    {"url":"信用生活.png"},
+    {"url":"哗啦啦.jpg"},
+    {"url":"圈外同学.png"},
+    {"url":"天翼云.png"},
+    {"url":"广州云徙科技.png"},
+    // {"url":"成都大数据.png"},
+    {"url":"杭州电魂网络.png"},
+    {"url":"桔子分期.png"}
+]
+export default img
diff --git a/src/pages/home/index.vue b/src/pages/home/index.vue
index a619193..def150a 100644
--- a/src/pages/home/index.vue
+++ b/src/pages/home/index.vue
@@ -76,15 +76,9 @@
     </div>
     <h1 class="home-block-title text-center">{{$t('message.common.our_users')}}</h1>
     <div class="show-case home-block">
-      <div class="case-item"><img src="../../assets/user/97wulian.png" alt="xx"/></div>
-      <div class="case-item"><img src="../../assets/user/aisino.png" alt="xx"/></div>
-      <div class="case-item"><img src="../../assets/user/boss.png" alt="xx"/></div>
-      <div class="case-item"><img src="../../assets/user/huazhong.jpg" alt="xx"/></div>
-      <div class="case-item"></div>
-      <div class="case-item"></div>
-      <div class="case-item"></div>
-      <div class="case-item"></div>
-      <div class="case-item"></div>
+      <template  v-for="item in img">
+        <div class="case-item"> <img :src="'../../src/assets/user/'+item.url" alt="name"></div>
+      </template>
     </div>
   </div>
 </template>
@@ -227,6 +221,7 @@
 <script setup>
   import { ref } from "vue"
   import  systemConfiguration from "../../js/config"
-  // initialize the language
+  import img from "./img";
+
   const lang = ref(localStorage.getItem('locale') || 'en');
 </script>

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@linkis.apache.org
For additional commands, e-mail: commits-help@linkis.apache.org


[incubator-linkis-website] 35/50: ADD: make the logo clickable

Posted by pe...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

peacewong pushed a commit to branch asf-staging
in repository https://gitbox.apache.org/repos/asf/incubator-linkis-website.git

commit c39fcd9e388947ff80b529c209b329f9cb8053d2
Author: lucaszhu <lu...@webank.com>
AuthorDate: Mon Oct 18 15:03:19 2021 +0800

    ADD: make the logo clickable
---
 src/App.vue         |  13 ++++++++++---
 src/assets/logo.png | Bin 6849 -> 9114 bytes
 2 files changed, 10 insertions(+), 3 deletions(-)

diff --git a/src/App.vue b/src/App.vue
index e13c9cd..37a6678 100644
--- a/src/App.vue
+++ b/src/App.vue
@@ -17,9 +17,10 @@
     <div>
         <nav class="nav">
             <div class="ctn-block">
-                <div class="nav-logo">
-                    Apache Linkis
-                </div>
+                <router-link to="/" class="nav-logo">
+                    <img class="logo" src="/src/assets/logo.png" alt="linkis">
+                    <span>Apache Linkis</span>
+                </router-link>
                 <span class="nav-logo-badge">Incubating</span>
                 <div class="menu-list">
                     <router-link class="menu-item" to="/"><span class="label">{{$t('menu.item.home')}}</span>
@@ -103,8 +104,14 @@
         }
 
         .nav-logo {
+            display: flex;
+            align-items: center;
             line-height: 54px;
             font-weight: 500;
+            .logo{
+                height: 24px;
+                margin-right: 10px;
+            }
         }
 
         .nav-logo-badge {
diff --git a/src/assets/logo.png b/src/assets/logo.png
index f3d2503..9ece550 100644
Binary files a/src/assets/logo.png and b/src/assets/logo.png differ
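
One detail worth flagging in the template change above: src="/src/assets/logo.png"
is an absolute path into the source tree. Vite serves it during development,
but by default the production build does not rewrite absolute URLs in SFC
templates, so the built site would still point at /src/. A minimal sketch of
the usual import-based form, assuming the same asset; the logoUrl name is
illustrative, not part of this commit:

    // Importing the image lets Vite hash and rewrite the URL at build time.
    import logoUrl from './assets/logo.png'
    // Template usage: <img class="logo" :src="logoUrl" alt="linkis">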

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@linkis.apache.org
For additional commands, e-mail: commits-help@linkis.apache.org


[incubator-linkis-website] 38/50: UPDATE: polish details

Posted by pe...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

peacewong pushed a commit to branch asf-staging
in repository https://gitbox.apache.org/repos/asf/incubator-linkis-website.git

commit 673cfe7892e6c3dd03dab81499effb7f2df6db60
Author: lucaszhu <lu...@webank.com>
AuthorDate: Mon Oct 18 15:32:19 2021 +0800

    UPDATE: polish details
---
 src/pages/download.vue        |  4 ++--
 src/pages/team/team.vue       | 10 +++++-----
 src/pages/team/teamdata_en.js |  3 +--
 src/pages/team/teamdata_zh.js |  3 +--
 src/style/base.less           |  7 +++++++
 5 files changed, 16 insertions(+), 11 deletions(-)

diff --git a/src/pages/download.vue b/src/pages/download.vue
index 025cbdd..34a4aa6 100644
--- a/src/pages/download.vue
+++ b/src/pages/download.vue
@@ -1,7 +1,7 @@
 <template>
   <div class="ctn-block normal-page download-page">
-    <h3 class="team-title">Download</h3>
-    <p class="team-desc">Use the links below to download the Apache Linkis (Incubating) Releases. See all Linkis releases in <a class="desc-link" href="">Github release page</a></p>
+    <h3 class="normal-title">Download</h3>
+    <p class="normal-desc">Use the links below to download the Apache Linkis (Incubating) Releases. See all Linkis releases in <a class="desc-link" href="">Github release page</a></p>
     <ul class="download-list">
       <li class="download-item">
         <h3 class="item-title"><span>Linkis-1.0.2</span><span><span class="release-date">Release Date: </span>2021-9-2</span></h3>
diff --git a/src/pages/team/team.vue b/src/pages/team/team.vue
index b1286b6..c64f1d2 100644
--- a/src/pages/team/team.vue
+++ b/src/pages/team/team.vue
@@ -1,7 +1,7 @@
 <template>
   <div class="ctn-block normal-page team-page">
-    <h3 class="team-title">PMC</h3>
-    <p class="team-desc">{{jsonData.info.desc}}</p>
+    <h3 class="normal-title">PMC</h3>
+    <p class="normal-desc" v-html="jsonData.info.desc"></p>
     <ul  class="character-list">
       <li v-for="(item,index) in jsonData.list" :key="index" class="character-item text-center">
         <img class="character-avatar" :src="item.avatarUrl" :alt="item.name"/>
@@ -10,9 +10,9 @@
         </div>
       </li>
     </ul>
-    <p class="team-desc" v-html="jsonData.info.tip"></p>
-    <!--   <h3 class="team-title">Contributors</h3>
-     <p class="team-desc">Use the links below to download the Apache Linkis (Incubating) Releases. See all Linkis releases in Github release page.</p>
+    <p class="normal-desc" v-html="jsonData.info.tip"></p>
+    <!--   <h3 class="normal-title">Contributors</h3>
+     <p class="normal-desc">Use the links below to download the Apache Linkis (Incubating) Releases. See all Linkis releases in Github release page.</p>
     ]<ul class="contributor-list">
       <li class="contributor-item">apache/apisix-go-plugin-runner</li>
      </ul>-->
diff --git a/src/pages/team/teamdata_en.js b/src/pages/team/teamdata_en.js
index e61ddb0..c596640 100644
--- a/src/pages/team/teamdata_en.js
+++ b/src/pages/team/teamdata_en.js
@@ -1,7 +1,6 @@
 const data = {
     info: {
-        desc: "The Linkis team is comprised of Members and Contributors. Members have direct access to the source of Linkis project and actively evolve the code-base. Contributors improve the project through submission of patches and suggestions to the Members. The number of Contributors to the project is unbounded. All contributions to Linkis are greatly appreciated, whether for trivial cleanups, big new features or other material rewards.",
-        tip: "If you want to contribute, you can go directly to the <a href=\"https://github.com/apache/incubator-linkis/\" target=\"_blank\" rel=\"noopener noreferrer\">Apache Linkis</a> and fork it."
+        desc: "The Linkis team is comprised of Members and Contributors. Members have direct access to the source of Linkis project and actively evolve the code-base. Contributors improve the project through submission of patches and suggestions to the Members. The number of Contributors to the project is unbounded. All contributions to Linkis are greatly appreciated, whether for trivial cleanups, big new features or other material rewards.<br>If you want to contribute, you can go direct [...]
     },
     list: [
         {
diff --git a/src/pages/team/teamdata_zh.js b/src/pages/team/teamdata_zh.js
index 75a96a5..c14dd7a 100644
--- a/src/pages/team/teamdata_zh.js
+++ b/src/pages/team/teamdata_zh.js
@@ -1,7 +1,6 @@
 const data = {
     info: {
-        desc: "Linkis 团队由成员和贡献者组成。 成员可以直接访问 Linkis 项目的源代码并积极开发代码库。 贡献者通过提交补丁和向成员提供建议来改进项目。 项目的贡献者数量不限。 非常感谢对 Linkis 的所有贡献,无论是琐碎的修改或清理、重大的新特性新功能,还是其他的物质奖励。",
-        tip:  '如果你想参与贡献,可以直接去<a href="https://github.com/apache/incubator-linkis" target="_blank" rel="noopener noreferrer" >Apache Linkis</a> 并fork.'
+        desc: "Linkis 团队由成员和贡献者组成。 成员可以直接访问 Linkis 项目的源代码并积极开发代码库。 贡献者通过提交补丁和向成员提供建议来改进项目。 项目的贡献者数量不限。 非常感谢对 Linkis 的所有贡献,无论是琐碎的修改或清理、重大的新特性新功能,还是其他的物质奖励。<br>如果你想参与贡献,可以直接去<a class=\"link\" href=\"https://github.com/apache/incubator-linkis\" target=\"_blank\" rel=\"noopener noreferrer\" >Apache Linkis</a> 并fork."
     },
     list: [
         {
diff --git a/src/style/base.less b/src/style/base.less
index 7e6cd4c..dd3f237 100644
--- a/src/style/base.less
+++ b/src/style/base.less
@@ -135,5 +135,12 @@ a:visited {
   .normal-desc{
     color: @enhance-color;
     font-weight: 400;
+    .link{
+      color: @active-color;
+      text-decoration: underline;
+      &:hover{
+        text-decoration: none;
+      }
+    }
   }
 }
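
Since the diff above moves the description from {{ }} interpolation to
v-html, the string is now injected as raw HTML and bypasses Vue's escaping.
That is safe while desc and tip remain checked-in, trusted strings, as they
do in teamdata_en.js / teamdata_zh.js. Below is a defensive sketch for the
case where such strings ever came from a less trusted source; sanitizeDesc
and its allow-lists are hypothetical, not part of this commit:

    // Keep only the tags and attributes the team page actually uses
    // (<a> with href/target/rel/class, plus <br>) before v-html sees them.
    const ALLOWED_TAGS = new Set(['A', 'BR'])
    const ALLOWED_ATTRS = ['href', 'target', 'rel', 'class']

    export function sanitizeDesc(html) {
      const doc = new DOMParser().parseFromString(html, 'text/html')
      for (const el of Array.from(doc.body.querySelectorAll('*'))) {
        if (!ALLOWED_TAGS.has(el.tagName)) {
          // Drop the element itself but keep its text/children.
          el.replaceWith(...el.childNodes)
        } else {
          for (const attr of Array.from(el.attributes)) {
            if (!ALLOWED_ATTRS.includes(attr.name)) el.removeAttribute(attr.name)
          }
        }
      }
      return doc.body.innerHTML
    }

The component would then render v-html="sanitizeDesc(jsonData.info.desc)"
instead of passing the raw string straight through.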

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@linkis.apache.org
For additional commands, e-mail: commits-help@linkis.apache.org


[incubator-linkis-website] 40/50: init for asf-staging

Posted by pe...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

peacewong pushed a commit to branch asf-staging
in repository https://gitbox.apache.org/repos/asf/incubator-linkis-website.git

commit 189eb6c60111d598defe5795b5b32f3c59416d6c
Author: casionone <ca...@gmail.com>
AuthorDate: Thu Oct 21 14:36:16 2021 +0800

    init for asf-staging
---
 .vscode/extensions.json                            |   3 -
 Linkis-Doc-master/LANGS.md                         |   2 -
 Linkis-Doc-master/README.md                        | 114 ----
 Linkis-Doc-master/README_CN.md                     | 105 ----
 .../en_US/API_Documentations/JDBC_API_Document.md  |  45 --
 ...sk_submission_and_execution_RestAPI_document.md | 170 ------
 .../en_US/API_Documentations/Login_API.md          | 125 ----
 .../en_US/API_Documentations/README.md             |   8 -
 .../EngineConn/README.md                           |  99 ----
 .../EngineConnManager/Images/ECM-01.png            | Bin 34340 -> 0 bytes
 .../EngineConnManager/Images/ECM-02.png            | Bin 25340 -> 0 bytes
 .../EngineConnManager/README.md                    |  45 --
 .../EngineConnPlugin/README.md                     |  68 ---
 .../LinkisManager/AppManager.md                    |  33 --
 .../LinkisManager/LabelManager.md                  |  38 --
 .../LinkisManager/README.md                        |  41 --
 .../LinkisManager/ResourceManager.md               | 132 -----
 .../Computation_Governance_Services/README.md      |  40 --
 .../DifferenceBetween1.0&0.x.md                    |  50 --
 .../How_to_add_an_EngineConn.md                    | 105 ----
 ...submission_preparation_and_execution_process.md | 138 -----
 .../Microservice_Governance_Services/Gateway.md    |  34 --
 .../Microservice_Governance_Services/README.md     |  32 -
 .../Public_Enhancement_Services/BML.md             |  93 ---
 .../ContextService/ContextService_Cache.md         |  95 ---
 .../ContextService/ContextService_Client.md        |  61 --
 .../ContextService/ContextService_HighAvailable.md |  86 ---
 .../ContextService/ContextService_Listener.md      |  33 --
 .../ContextService/ContextService_Persistence.md   |   8 -
 .../ContextService/ContextService_Search.md        | 127 ----
 .../ContextService/ContextService_Service.md       |  53 --
 .../ContextService/README.md                       | 123 ----
 .../Public_Enhancement_Services/PublicService.md   |  34 --
 .../Public_Enhancement_Services/README.md          |  91 ---
 .../en_US/Architecture_Documents/README.md         |  18 -
 .../Deployment_Documents/Cluster_Deployment.md     |  98 ----
 .../EngineConnPlugin_installation_document.md      |  82 ---
 ...75\262\345\276\256\346\234\215\345\212\241.png" | Bin 130148 -> 0 bytes
 .../Installation_Hierarchical_Structure.md         | 198 -------
 .../Deployment_Documents/Quick_Deploy_Linkis1.0.md | 246 --------
 .../en_US/Development_Documents/Contributing.md    | 195 -------
 .../Development_Specification/API.md               | 143 -----
 .../Development_Specification/Concurrent.md        |  17 -
 .../Development_Specification/Exception_Catch.md   |   9 -
 .../Development_Specification/Exception_Throws.md  |  52 --
 .../Development_Specification/Log.md               |  13 -
 .../Development_Specification/Path_Usage.md        |  15 -
 .../Development_Specification/README.md            |   9 -
 .../Linkis_Compilation_Document.md                 | 135 -----
 .../Linkis_Compile_and_Package.md                  | 155 -----
 .../en_US/Development_Documents/Linkis_DEBUG.md    | 141 -----
 .../New_EngineConn_Development.md                  |  77 ---
 .../Hive_User_Manual.md                            |  81 ---
 .../JDBC_User_Manual.md                            |  53 --
 .../Python_User_Manual.md                          |  61 --
 .../en_US/Engine_Usage_Documentations/README.md    |  25 -
 .../Shell_User_Manual.md                           |  55 --
 .../Spark_User_Manual.md                           |  91 ---
 .../add_an_EngineConn_flow_chart.png               | Bin 59893 -> 0 bytes
 .../Architecture/EngineConn/engineconn-01.png      | Bin 157753 -> 0 bytes
 .../Gateway/gateway_server_dispatcher.png          | Bin 83743 -> 0 bytes
 .../Architecture/Gateway/gateway_server_global.png | Bin 85272 -> 0 bytes
 .../Architecture/Gateway/gatway_websocket.png      | Bin 37769 -> 0 bytes
 .../execution.png                                  | Bin 31078 -> 0 bytes
 .../orchestrate.png                                | Bin 31095 -> 0 bytes
 .../overall.png                                    | Bin 231192 -> 0 bytes
 .../physical_tree.png                              | Bin 79471 -> 0 bytes
 .../result_acquisition.png                         | Bin 41007 -> 0 bytes
 .../submission.png                                 | Bin 12946 -> 0 bytes
 .../LabelManager/label_manager_builder.png         | Bin 62978 -> 0 bytes
 .../LabelManager/label_manager_global.png          | Bin 14988 -> 0 bytes
 .../LabelManager/label_manager_scorer.png          | Bin 72977 -> 0 bytes
 .../Linkis0.X-NewEngine-architecture.png           | Bin 244826 -> 0 bytes
 .../Architecture/Linkis0.X-services-list.png       | Bin 66821 -> 0 bytes
 .../Linkis1.0-EngineConn-architecture.png          | Bin 157753 -> 0 bytes
 .../Linkis1.0-NewEngine-architecture.png           | Bin 26523 -> 0 bytes
 .../Images/Architecture/Linkis1.0-architecture.png | Bin 212362 -> 0 bytes
 .../Linkis1.0-newEngine-initialization.png         | Bin 48313 -> 0 bytes
 .../Architecture/Linkis1.0-services-list.png       | Bin 85890 -> 0 bytes
 .../Architecture/PublicEnhencementArchitecture.png | Bin 47158 -> 0 bytes
 .../ContextService/linkis-contextservice-01.png    | Bin 22692 -> 0 bytes
 .../ContextService/linkis-contextservice-02.png    | Bin 10655 -> 0 bytes
 .../linkis-contextservice-cache-01.png             | Bin 11881 -> 0 bytes
 .../linkis-contextservice-cache-02.png             | Bin 23902 -> 0 bytes
 .../linkis-contextservice-cache-03.png             | Bin 109334 -> 0 bytes
 .../linkis-contextservice-cache-04.png             | Bin 36161 -> 0 bytes
 .../linkis-contextservice-cache-05.png             | Bin 2265 -> 0 bytes
 .../linkis-contextservice-client-01.png            | Bin 54438 -> 0 bytes
 .../linkis-contextservice-client-02.png            | Bin 93036 -> 0 bytes
 .../linkis-contextservice-client-03.png            | Bin 34839 -> 0 bytes
 .../ContextService/linkis-contextservice-ha-01.png | Bin 38439 -> 0 bytes
 .../ContextService/linkis-contextservice-ha-02.png | Bin 21982 -> 0 bytes
 .../ContextService/linkis-contextservice-ha-03.png | Bin 91788 -> 0 bytes
 .../ContextService/linkis-contextservice-ha-04.png | Bin 40733 -> 0 bytes
 .../linkis-contextservice-listener-01.png          | Bin 24414 -> 0 bytes
 .../linkis-contextservice-listener-02.png          | Bin 46152 -> 0 bytes
 .../linkis-contextservice-listener-03.png          | Bin 32597 -> 0 bytes
 .../linkis-contextservice-persistence-01.png       | Bin 198797 -> 0 bytes
 .../linkis-contextservice-search-01.png            | Bin 33731 -> 0 bytes
 .../linkis-contextservice-search-02.png            | Bin 26768 -> 0 bytes
 .../linkis-contextservice-search-03.png            | Bin 33312 -> 0 bytes
 .../linkis-contextservice-search-04.png            | Bin 25192 -> 0 bytes
 .../linkis-contextservice-search-05.png            | Bin 24757 -> 0 bytes
 .../linkis-contextservice-search-06.png            | Bin 29923 -> 0 bytes
 .../linkis-contextservice-search-07.png            | Bin 30013 -> 0 bytes
 .../linkis-contextservice-service-01.png           | Bin 56235 -> 0 bytes
 .../linkis-contextservice-service-02.png           | Bin 73463 -> 0 bytes
 .../linkis-contextservice-service-03.png           | Bin 23477 -> 0 bytes
 .../linkis-contextservice-service-04.png           | Bin 27387 -> 0 bytes
 .../en_US/Images/Architecture/bml-02.png           | Bin 55227 -> 0 bytes
 .../Architecture/linkis-engineConnPlugin-01.png    | Bin 21864 -> 0 bytes
 .../en_US/Images/Architecture/linkis-intro-01.png  | Bin 413878 -> 0 bytes
 .../en_US/Images/Architecture/linkis-intro-02.png  | Bin 355186 -> 0 bytes
 .../Architecture/linkis-microservice-gov-01.png    | Bin 109909 -> 0 bytes
 .../Architecture/linkis-microservice-gov-03.png    | Bin 83457 -> 0 bytes
 .../Architecture/linkis-publicService-01.png       | Bin 62443 -> 0 bytes
 .../en_US/Images/EngineUsage/hive-config.png       | Bin 86864 -> 0 bytes
 .../en_US/Images/EngineUsage/hive-run.png          | Bin 94294 -> 0 bytes
 .../en_US/Images/EngineUsage/jdbc-conf.png         | Bin 91609 -> 0 bytes
 .../en_US/Images/EngineUsage/jdbc-run.png          | Bin 56438 -> 0 bytes
 .../en_US/Images/EngineUsage/pyspakr-run.png       | Bin 124979 -> 0 bytes
 .../en_US/Images/EngineUsage/python-config.png     | Bin 92997 -> 0 bytes
 .../en_US/Images/EngineUsage/python-run.png        | Bin 89641 -> 0 bytes
 .../en_US/Images/EngineUsage/queue-set.png         | Bin 93935 -> 0 bytes
 .../en_US/Images/EngineUsage/scala-run.png         | Bin 125060 -> 0 bytes
 .../en_US/Images/EngineUsage/shell-run.png         | Bin 209553 -> 0 bytes
 .../en_US/Images/EngineUsage/spark-conf.png        | Bin 99930 -> 0 bytes
 .../en_US/Images/EngineUsage/sparksql-run.png      | Bin 121699 -> 0 bytes
 .../en_US/Images/EngineUsage/workflow.png          | Bin 151481 -> 0 bytes
 .../en_US/Images/Linkis_1.0_architecture.png       | Bin 316746 -> 0 bytes
 .../Images/Tuning_and_Troubleshooting/Q&A.png      | Bin 161638 -> 0 bytes
 .../Tuning_and_Troubleshooting/code-fix-01.png     | Bin 199523 -> 0 bytes
 .../Tuning_and_Troubleshooting/db-config-01.png    | Bin 391789 -> 0 bytes
 .../Tuning_and_Troubleshooting/db-config-02.png    | Bin 60334 -> 0 bytes
 .../Images/Tuning_and_Troubleshooting/debug-01.png | Bin 6168 -> 0 bytes
 .../Images/Tuning_and_Troubleshooting/debug-02.png | Bin 62496 -> 0 bytes
 .../Images/Tuning_and_Troubleshooting/debug-03.png | Bin 32875 -> 0 bytes
 .../Images/Tuning_and_Troubleshooting/debug-04.png | Bin 111758 -> 0 bytes
 .../Images/Tuning_and_Troubleshooting/debug-05.png | Bin 52040 -> 0 bytes
 .../Images/Tuning_and_Troubleshooting/debug-06.png | Bin 63668 -> 0 bytes
 .../Images/Tuning_and_Troubleshooting/debug-07.png | Bin 316176 -> 0 bytes
 .../Images/Tuning_and_Troubleshooting/debug-08.png | Bin 27722 -> 0 bytes
 .../Tuning_and_Troubleshooting/hive-config-01.png  | Bin 76327 -> 0 bytes
 .../linkis-exception-01.png                        | Bin 1199628 -> 0 bytes
 .../linkis-exception-02.png                        | Bin 1366293 -> 0 bytes
 .../linkis-exception-03.png                        | Bin 646836 -> 0 bytes
 .../linkis-exception-04.png                        | Bin 2965676 -> 0 bytes
 .../linkis-exception-05.png                        | Bin 454949 -> 0 bytes
 .../linkis-exception-06.png                        | Bin 869492 -> 0 bytes
 .../linkis-exception-07.png                        | Bin 2249882 -> 0 bytes
 .../linkis-exception-08.png                        | Bin 1191728 -> 0 bytes
 .../linkis-exception-09.png                        | Bin 1008341 -> 0 bytes
 .../linkis-exception-10.png                        | Bin 322110 -> 0 bytes
 .../Tuning_and_Troubleshooting/page-show-01.png    | Bin 115010 -> 0 bytes
 .../Tuning_and_Troubleshooting/page-show-02.png    | Bin 576911 -> 0 bytes
 .../Tuning_and_Troubleshooting/page-show-03.png    | Bin 654609 -> 0 bytes
 .../searching_keywords.png                         | Bin 102094 -> 0 bytes
 .../Tuning_and_Troubleshooting/shell-error-01.png  | Bin 74682 -> 0 bytes
 .../Tuning_and_Troubleshooting/shell-error-02.png  | Bin 330735 -> 0 bytes
 .../Tuning_and_Troubleshooting/shell-error-03.png  | Bin 1624375 -> 0 bytes
 .../Tuning_and_Troubleshooting/shell-error-04.png  | Bin 803920 -> 0 bytes
 .../Tuning_and_Troubleshooting/shell-error-05.png  | Bin 179543 -> 0 bytes
 .../Tunning_And_Troubleshooting/debug-01.png       | Bin 6168 -> 0 bytes
 .../Tunning_And_Troubleshooting/debug-02.png       | Bin 62496 -> 0 bytes
 .../Tunning_And_Troubleshooting/debug-03.png       | Bin 32875 -> 0 bytes
 .../Tunning_And_Troubleshooting/debug-04.png       | Bin 111758 -> 0 bytes
 .../Tunning_And_Troubleshooting/debug-05.png       | Bin 52040 -> 0 bytes
 .../Tunning_And_Troubleshooting/debug-06.png       | Bin 63668 -> 0 bytes
 .../Tunning_And_Troubleshooting/debug-07.png       | Bin 316176 -> 0 bytes
 .../Tunning_And_Troubleshooting/debug-08.png       | Bin 27722 -> 0 bytes
 .../deployment/Linkis1.0_combined_eureka.png       | Bin 134418 -> 0 bytes
 .../en_US/Images/wedatasphere_contact_01.png       | Bin 217762 -> 0 bytes
 .../en_US/Images/wedatasphere_stack_Linkis.png     | Bin 203466 -> 0 bytes
 .../Tuning_and_Troubleshooting/Configuration.md    | 217 -------
 .../en_US/Tuning_and_Troubleshooting/Q&A.md        | 255 --------
 .../en_US/Tuning_and_Troubleshooting/README.md     |  98 ----
 .../en_US/Tuning_and_Troubleshooting/Tuning.md     |  61 --
 .../Linkis_Upgrade_from_0.x_to_1.0_guide.md        |  73 ---
 .../en_US/Upgrade_Documents/README.md              |   5 -
 .../en_US/User_Manual/How_To_Use_Linkis.md         |  29 -
 .../en_US/User_Manual/Linkis1.0_User_Manual.md     | 400 -------------
 .../en_US/User_Manual/LinkisCli_Usage_document.md  | 191 ------
 .../User_Manual/Linkis_Console_User_Manual.md      | 120 ----
 Linkis-Doc-master/en_US/User_Manual/README.md      |   8 -
 ...\350\241\214RestAPI\346\226\207\346\241\243.md" | 171 ------
 .../zh_CN/API_Documentations/Login_API.md          | 131 -----
 .../zh_CN/API_Documentations/README.md             |   8 -
 ...350\241\214JDBC_API\346\226\207\346\241\243.md" |  46 --
 .../Commons/messagescheduler.md                    |  15 -
 .../zh_CN/Architecture_Documents/Commons/rpc.md    |  17 -
 .../EngineConn/README.md                           |  98 ----
 .../ECM\346\236\266\346\236\204\345\233\276.png"   | Bin 34340 -> 0 bytes
 ...57\267\346\261\202\346\265\201\347\250\213.png" | Bin 25340 -> 0 bytes
 .../EngineConnManager/README.md                    |  49 --
 .../EngineConnPlugin/README.md                     |  71 ---
 .../Entrance/Entrance.md                           |  26 -
 .../LinkisClient/README.md                         |  35 --
 .../LinkisManager/AppManager.md                    |  45 --
 .../LinkisManager/LabelManager.md                  |  40 --
 .../LinkisManager/README.md                        |  74 ---
 .../LinkisManager/ResourceManager.md               | 145 -----
 .../Computation_Governance_Services/README.md      |  66 ---
 ...226\260\345\242\236\346\265\201\347\250\213.md" | 111 ----
 ...211\247\350\241\214\346\265\201\347\250\213.md" | 165 ------
 ...214\272\345\210\253\347\256\200\350\277\260.md" |  98 ----
 .../Microservice_Governance_Services/Gateway.md    |  30 -
 .../Microservice_Governance_Services/README.md     |  23 -
 .../Computation_Orchestrator_architecture.md       |  18 -
 ...16\245\345\217\243\345\222\214\347\261\273.png" | Bin 27266 -> 0 bytes
 ...72\244\344\272\222\346\265\201\347\250\213.png" | Bin 30134 -> 0 bytes
 ...16\245\345\217\243\345\222\214\347\261\273.png" | Bin 162100 -> 0 bytes
 .../Orchestrator/Orchestrator_CheckRuler.md        |  27 -
 .../Orchestrator/Orchestrator_ECMP_architecture.md |  32 -
 .../Orchestrator_Execution_architecture_doc.md     |  19 -
 .../Orchestrator_Operation_architecture_doc.md     |  26 -
 .../Orchestrator_Reheater_architecture.md          |  12 -
 .../Orchestrator_Transform_architecture.md         |  12 -
 .../Orchestrator/Orchestrator_architecture_doc.md  | 113 ----
 .../Architecture_Documents/Orchestrator/README.md  |  55 --
 .../Public_Enhancement_Services/BML.md             |  94 ---
 .../ContextService/ContextService_Cache.md         |  95 ---
 .../ContextService/ContextService_Client.md        |  61 --
 .../ContextService/ContextService_HighAvailable.md |  86 ---
 .../ContextService/ContextService_Listener.md      |  33 --
 .../ContextService/ContextService_Persistence.md   |   8 -
 .../ContextService/ContextService_Search.md        | 127 ----
 .../ContextService/ContextService_Service.md       |  55 --
 .../ContextService/README.md                       | 124 ----
 .../Public_Enhancement_Services/DataSource.md      |   1 -
 .../Public_Enhancement_Services/PublicService.md   |  31 -
 .../Public_Enhancement_Services/README.md          |  91 ---
 .../zh_CN/Architecture_Documents/README.md         |  24 -
 .../Deployment_Documents/Cluster_Deployment.md     | 100 ----
 ...256\211\350\243\205\346\226\207\346\241\243.md" | 106 ----
 ...75\262\345\276\256\346\234\215\345\212\241.png" | Bin 130148 -> 0 bytes
 .../Installation_Hierarchical_Structure.md         | 186 ------
 .../zh_CN/Deployment_Documents/README.md           |   1 -
 ...256\211\350\243\205\346\226\207\346\241\243.md" | 110 ----
 ...51\200\237\351\203\250\347\275\262Linkis1.0.md" | 256 --------
 .../zh_CN/Development_Documents/Contributing.md    | 206 -------
 .../zh_CN/Development_Documents/DEBUG_LINKIS.md    | 113 ----
 .../Development_Specification/API.md               |  72 ---
 .../Development_Specification/Concurrent.md        |   9 -
 .../Development_Specification/Exception_Catch.md   |   9 -
 .../Development_Specification/Exception_Throws.md  |  30 -
 .../Development_Specification/Log.md               |  13 -
 .../Development_Specification/Path_Usage.md        |   8 -
 .../Development_Specification/README.md            |  12 -
 ...274\226\350\257\221\346\226\207\346\241\243.md" | 160 -----
 .../New_EngineConn_Development.md                  |  79 ---
 .../zh_CN/Development_Documents/README.md          |   1 -
 .../zh_CN/Development_Documents/Web/Build.md       |  84 ---
 .../zh_CN/Development_MEETUP/Phase_One/README.md   |  56 --
 .../zh_CN/Development_MEETUP/Phase_One/chapter1.md |   1 -
 .../zh_CN/Development_MEETUP/Phase_One/chapter2.md |   1 -
 .../Development_MEETUP/Phase_Two/Images/Q&A.png    | Bin 161638 -> 0 bytes
 .../Development_MEETUP/Phase_Two/Images/issue.png  | Bin 102094 -> 0 bytes
 .../Phase_Two/Images/\345\217\214\346\264\273.png" | Bin 130148 -> 0 bytes
 .../Images2/0ca28635de253f245743fbf0a7cfe165.png   | Bin 98316 -> 0 bytes
 .../Images2/146a58addcacbc560a33604b00636dee.png   | Bin 44890 -> 0 bytes
 .../Images2/1730acb1c4ff58a055fa71324e5c7f2c.png   | Bin 95491 -> 0 bytes
 .../Images2/1d31b398318acbd862f20ac05decbce9.png   | Bin 7741 -> 0 bytes
 .../Images2/1d8f043dae5afdf07371ad31b06bad6e.png   | Bin 74243 -> 0 bytes
 .../Images2/232983a712a949196159f0aeab7de7f5.png   | Bin 150575 -> 0 bytes
 .../Images2/2767bac623d10bf45033cf9fdd8d197f.png   | Bin 120905 -> 0 bytes
 .../Images2/335dabbf46b5af11e494cdd1be2c32a1.png   | Bin 118394 -> 0 bytes
 .../Images2/491e9a0fbd5b0121f228e0f7938cf168.png   | Bin 120419 -> 0 bytes
 .../Images2/781914abed8ec4955cac520eb0a1be7e.png   | Bin 770399 -> 0 bytes
 .../Images2/7b8685204636771776605bab99b08e8f.png   | Bin 82550 -> 0 bytes
 .../Images2/7cbe7cd81ce2212883741dd9b62dad18.png   | Bin 36588 -> 0 bytes
 .../Images2/8576fe8054c072a7fee53d98eeefa004.png   | Bin 39623 -> 0 bytes
 .../Images2/87ef54ccaa6b96abc30e612636bb2e90.png   | Bin 103943 -> 0 bytes
 .../Images2/9693ded0c6a9c32cb1ff33713e5d3864.png   | Bin 54885 -> 0 bytes
 .../Images2/9c254ec33125eb0ab50a6bcc0e95a18a.png   | Bin 145675 -> 0 bytes
 .../Images2/a0fb7e3474dff5c22fb3c230f73fa6f6.png   | Bin 55052 -> 0 bytes
 .../Images2/b68f441d7ac6b4814c048d35cebbb25d.png   | Bin 117177 -> 0 bytes
 .../Images2/b7feb36a0322b002f9f85f0a8003dcc1.png   | Bin 169905 -> 0 bytes
 .../Images2/ba90e28a78375103c4890cd448818ab3.png   | Bin 132653 -> 0 bytes
 .../Images2/c3f5ac1723ba9823084f529f5384440d.png   | Bin 21078 -> 0 bytes
 .../Images2/cd3ea323b238158c8a3de8acc8ec0a3f.png   | Bin 20051 -> 0 bytes
 .../Images2/d0fe37b4aa34b0cea9e87247b7b17943.png   | Bin 115496 -> 0 bytes
 .../Images2/d1b4759745056add53a32a76d3699109.png   | Bin 23378 -> 0 bytes
 .../Images2/d9bab9306cc28ecdf8d3679ecfc224d4.png   | Bin 97351 -> 0 bytes
 .../Images2/da0cf9cb7b27dac266435b5f6ad1cd82.png   | Bin 45877 -> 0 bytes
 .../Images2/de301f8f21c1735c5e018188d685ad74.png   | Bin 53369 -> 0 bytes
 .../Images2/e7e2a98ce1f03d228c7c2d782b076d53.png   | Bin 81483 -> 0 bytes
 .../Images2/f395c9cc338d85e258485658290bf365.png   | Bin 43688 -> 0 bytes
 .../Images2/f6fa083cab060a5adc9d483b37d040f5.png   | Bin 60331 -> 0 bytes
 .../Images2/fb952c266ce9a8db9b9036a602e222a7.png   | Bin 131953 -> 0 bytes
 .../zh_CN/Development_MEETUP/Phase_Two/README.md   |  58 --
 .../zh_CN/Development_MEETUP/Phase_Two/chapter1.md | 371 ------------
 .../zh_CN/Development_MEETUP/Phase_Two/chapter2.md | 251 --------
 .../zh_CN/Development_MEETUP/README.md             |   1 -
 .../ElasticSearch_User_Manual.md                   |   1 -
 .../Hive_User_Manual.md                            |  81 ---
 .../JDBC_User_Manual.md                            |  53 --
 .../MLSQL_User_Manual.md                           |   1 -
 .../Presto_User_Manual.md                          |   1 -
 .../Python_User_Manual.md                          |  61 --
 .../zh_CN/Engine_Usage_Documentations/README.md    |  25 -
 .../Shell_User_Manual.md                           |  57 --
 .../Spark_User_Manual.md                           |  91 ---
 .../zh_CN/Images/Architecture/AppManager-02.png    | Bin 701283 -> 0 bytes
 .../zh_CN/Images/Architecture/AppManager-03.png    | Bin 69489 -> 0 bytes
 .../Commons/linkis-message-scheduler.png           | Bin 26987 -> 0 bytes
 .../Images/Architecture/Commons/linkis-rpc.png     | Bin 23403 -> 0 bytes
 .../Architecture/EngineConn/engineconn-01.png      | Bin 157753 -> 0 bytes
 .../EngineConnPlugin/engine_conn_plugin_cycle.png  | Bin 49326 -> 0 bytes
 .../EngineConnPlugin/engine_conn_plugin_global.png | Bin 32292 -> 0 bytes
 .../EngineConnPlugin/engine_conn_plugin_load.png   | Bin 74821 -> 0 bytes
 ...26\260\345\242\236\346\265\201\347\250\213.png" | Bin 59893 -> 0 bytes
 .../Gateway/gateway_server_dispatcher.png          | Bin 83743 -> 0 bytes
 .../Architecture/Gateway/gateway_server_global.png | Bin 85272 -> 0 bytes
 .../Architecture/Gateway/gatway_websocket.png      | Bin 37769 -> 0 bytes
 .../Physical\346\240\221.png"                      | Bin 79471 -> 0 bytes
 ...56\265\346\265\201\347\250\213\345\233\276.png" | Bin 31078 -> 0 bytes
 ...56\265\346\265\201\347\250\213\345\233\276.png" | Bin 12946 -> 0 bytes
 ...16\267\345\217\226\346\265\201\347\250\213.png" | Bin 41007 -> 0 bytes
 ...16\222\346\265\201\347\250\213\345\233\276.png" | Bin 31095 -> 0 bytes
 ...75\223\346\265\201\347\250\213\345\233\276.png" | Bin 231192 -> 0 bytes
 .../LabelManager/label_manager_builder.png         | Bin 62978 -> 0 bytes
 .../LabelManager/label_manager_global.png          | Bin 14988 -> 0 bytes
 .../LabelManager/label_manager_scorer.png          | Bin 72977 -> 0 bytes
 .../Images/Architecture/Linkis1.0-architecture.png | Bin 221751 -> 0 bytes
 .../Architecture/LinkisManager/AppManager-01.png   | Bin 69489 -> 0 bytes
 .../Architecture/LinkisManager/LabelManager-01.png | Bin 39221 -> 0 bytes
 .../LinkisManager/LinkisManager-01.png             | Bin 183082 -> 0 bytes
 .../LinkisManager/ResourceManager-01.png           | Bin 71086 -> 0 bytes
 ...cement\346\236\266\346\236\204\345\233\276.png" | Bin 47158 -> 0 bytes
 .../ContextService/linkis-contextservice-01.png    | Bin 22692 -> 0 bytes
 .../ContextService/linkis-contextservice-02.png    | Bin 10655 -> 0 bytes
 .../linkis-contextservice-cache-01.png             | Bin 11881 -> 0 bytes
 .../linkis-contextservice-cache-02.png             | Bin 23902 -> 0 bytes
 .../linkis-contextservice-cache-03.png             | Bin 109334 -> 0 bytes
 .../linkis-contextservice-cache-04.png             | Bin 36161 -> 0 bytes
 .../linkis-contextservice-cache-05.png             | Bin 2265 -> 0 bytes
 .../linkis-contextservice-client-01.png            | Bin 54438 -> 0 bytes
 .../linkis-contextservice-client-02.png            | Bin 93036 -> 0 bytes
 .../linkis-contextservice-client-03.png            | Bin 34839 -> 0 bytes
 .../ContextService/linkis-contextservice-ha-01.png | Bin 38439 -> 0 bytes
 .../ContextService/linkis-contextservice-ha-02.png | Bin 21982 -> 0 bytes
 .../ContextService/linkis-contextservice-ha-03.png | Bin 91788 -> 0 bytes
 .../ContextService/linkis-contextservice-ha-04.png | Bin 40733 -> 0 bytes
 .../linkis-contextservice-listener-01.png          | Bin 24414 -> 0 bytes
 .../linkis-contextservice-listener-02.png          | Bin 46152 -> 0 bytes
 .../linkis-contextservice-listener-03.png          | Bin 32597 -> 0 bytes
 .../linkis-contextservice-persistence-01.png       | Bin 198797 -> 0 bytes
 .../linkis-contextservice-search-01.png            | Bin 33731 -> 0 bytes
 .../linkis-contextservice-search-02.png            | Bin 26768 -> 0 bytes
 .../linkis-contextservice-search-03.png            | Bin 33312 -> 0 bytes
 .../linkis-contextservice-search-04.png            | Bin 25192 -> 0 bytes
 .../linkis-contextservice-search-05.png            | Bin 24757 -> 0 bytes
 .../linkis-contextservice-search-06.png            | Bin 29923 -> 0 bytes
 .../linkis-contextservice-search-07.png            | Bin 30013 -> 0 bytes
 .../linkis-contextservice-service-01.png           | Bin 56235 -> 0 bytes
 .../linkis-contextservice-service-02.png           | Bin 73463 -> 0 bytes
 .../linkis-contextservice-service-03.png           | Bin 23477 -> 0 bytes
 .../linkis-contextservice-service-04.png           | Bin 27387 -> 0 bytes
 .../zh_CN/Images/Architecture/bml-01.png           | Bin 78801 -> 0 bytes
 .../zh_CN/Images/Architecture/bml-02.png           | Bin 55227 -> 0 bytes
 .../zh_CN/Images/Architecture/linkis-client-01.png | Bin 88633 -> 0 bytes
 .../Architecture/linkis-computation-gov-01.png     | Bin 89527 -> 0 bytes
 .../Architecture/linkis-computation-gov-02.png     | Bin 179368 -> 0 bytes
 .../Architecture/linkis-engineConnPlugin-01.png    | Bin 21864 -> 0 bytes
 .../Images/Architecture/linkis-entrance-01.png     | Bin 33102 -> 0 bytes
 .../zh_CN/Images/Architecture/linkis-intro-01.jpg  | Bin 341150 -> 0 bytes
 .../zh_CN/Images/Architecture/linkis-intro-02.jpg  | Bin 289769 -> 0 bytes
 .../Architecture/linkis-microservice-gov-01.png    | Bin 89404 -> 0 bytes
 .../Architecture/linkis-microservice-gov-03.png    | Bin 60074 -> 0 bytes
 .../linkis-computation-orchestrator-01.png         | Bin 53527 -> 0 bytes
 .../linkis-computation-orchestrator-02.png         | Bin 77543 -> 0 bytes
 .../orchestrator/execution/execution.png           | Bin 29487 -> 0 bytes
 .../orchestrator/execution/execution01.png         | Bin 55090 -> 0 bytes
 .../linkis_orchestrator_architecture.png           | Bin 51935 -> 0 bytes
 .../orchestrator/operation/operation_class.png     | Bin 36916 -> 0 bytes
 .../orchestrator/overall/Orchestrator01.png        | Bin 38900 -> 0 bytes
 .../orchestrator/overall/Orchestrator_Logical.png  | Bin 46510 -> 0 bytes
 .../orchestrator/overall/Orchestrator_Physical.png | Bin 52228 -> 0 bytes
 .../orchestrator/overall/Orchestrator_arc.png      | Bin 32345 -> 0 bytes
 .../orchestrator/overall/Orchestrator_ast.png      | Bin 24733 -> 0 bytes
 .../orchestrator/overall/Orchestrator_cache.png    | Bin 96643 -> 0 bytes
 .../orchestrator/overall/Orchestrator_command.png  | Bin 29349 -> 0 bytes
 .../overall/Orchestrator_computation.png           | Bin 64070 -> 0 bytes
 .../orchestrator/overall/Orchestrator_progress.png | Bin 92726 -> 0 bytes
 .../orchestrator/overall/Orchestrator_reheat.png   | Bin 82286 -> 0 bytes
 .../overall/Orchestrator_transication.png          | Bin 63174 -> 0 bytes
 .../orchestrator/overall/orchestrator_entity.png   | Bin 29307 -> 0 bytes
 .../reheater/linkis-orchestrator-reheater-01.png   | Bin 22631 -> 0 bytes
 .../transform/linkis-orchestrator-transform-01.png | Bin 21241 -> 0 bytes
 .../zh_CN/Images/Architecture/rm-01.png            | Bin 183082 -> 0 bytes
 .../zh_CN/Images/Architecture/rm-02.png            | Bin 71086 -> 0 bytes
 .../zh_CN/Images/Architecture/rm-03.png            | Bin 52466 -> 0 bytes
 .../zh_CN/Images/Architecture/rm-04.png            | Bin 36324 -> 0 bytes
 .../zh_CN/Images/Architecture/rm-05.png            | Bin 34066 -> 0 bytes
 .../zh_CN/Images/Architecture/rm-06.png            | Bin 44105 -> 0 bytes
 .../zh_CN/Images/EngineUsage/hive-config.png       | Bin 127024 -> 0 bytes
 .../zh_CN/Images/EngineUsage/hive-run.png          | Bin 94294 -> 0 bytes
 .../zh_CN/Images/EngineUsage/jdbc-conf.png         | Bin 128381 -> 0 bytes
 .../zh_CN/Images/EngineUsage/jdbc-run.png          | Bin 56438 -> 0 bytes
 .../zh_CN/Images/EngineUsage/pyspakr-run.png       | Bin 124979 -> 0 bytes
 .../zh_CN/Images/EngineUsage/python-config.png     | Bin 129842 -> 0 bytes
 .../zh_CN/Images/EngineUsage/python-run.png        | Bin 89641 -> 0 bytes
 .../zh_CN/Images/EngineUsage/queue-set.png         | Bin 115340 -> 0 bytes
 .../zh_CN/Images/EngineUsage/scala-run.png         | Bin 125060 -> 0 bytes
 .../zh_CN/Images/EngineUsage/shell-run.png         | Bin 209553 -> 0 bytes
 .../zh_CN/Images/EngineUsage/spark-conf.png        | Bin 178501 -> 0 bytes
 .../zh_CN/Images/EngineUsage/sparksql-run.png      | Bin 121699 -> 0 bytes
 .../zh_CN/Images/EngineUsage/workflow.png          | Bin 151481 -> 0 bytes
 .../zh_CN/Images/Introduction/introduction.png     | Bin 90686 -> 0 bytes
 .../Images/Tuning_and_Troubleshooting/Q&A.png      | Bin 161638 -> 0 bytes
 .../Tuning_and_Troubleshooting/code-fix-01.png     | Bin 199523 -> 0 bytes
 .../Tuning_and_Troubleshooting/db-config-01.png    | Bin 391789 -> 0 bytes
 .../Tuning_and_Troubleshooting/db-config-02.png    | Bin 60334 -> 0 bytes
 .../Images/Tuning_and_Troubleshooting/debug-01.png | Bin 6168 -> 0 bytes
 .../Images/Tuning_and_Troubleshooting/debug-02.png | Bin 62496 -> 0 bytes
 .../Images/Tuning_and_Troubleshooting/debug-03.png | Bin 32875 -> 0 bytes
 .../Images/Tuning_and_Troubleshooting/debug-04.png | Bin 111758 -> 0 bytes
 .../Images/Tuning_and_Troubleshooting/debug-05.png | Bin 52040 -> 0 bytes
 .../Images/Tuning_and_Troubleshooting/debug-06.png | Bin 63668 -> 0 bytes
 .../Images/Tuning_and_Troubleshooting/debug-07.png | Bin 316176 -> 0 bytes
 .../Images/Tuning_and_Troubleshooting/debug-08.png | Bin 27722 -> 0 bytes
 .../Tuning_and_Troubleshooting/hive-config-01.png  | Bin 76327 -> 0 bytes
 .../linkis-exception-01.png                        | Bin 1199628 -> 0 bytes
 .../linkis-exception-02.png                        | Bin 1366293 -> 0 bytes
 .../linkis-exception-03.png                        | Bin 646836 -> 0 bytes
 .../linkis-exception-04.png                        | Bin 2965676 -> 0 bytes
 .../linkis-exception-05.png                        | Bin 454949 -> 0 bytes
 .../linkis-exception-06.png                        | Bin 869492 -> 0 bytes
 .../linkis-exception-07.png                        | Bin 2249882 -> 0 bytes
 .../linkis-exception-08.png                        | Bin 1191728 -> 0 bytes
 .../linkis-exception-09.png                        | Bin 1008341 -> 0 bytes
 .../linkis-exception-10.png                        | Bin 322110 -> 0 bytes
 .../Tuning_and_Troubleshooting/page-show-01.png    | Bin 115010 -> 0 bytes
 .../Tuning_and_Troubleshooting/page-show-02.png    | Bin 576911 -> 0 bytes
 .../Tuning_and_Troubleshooting/page-show-03.png    | Bin 654609 -> 0 bytes
 .../searching_keywords.png                         | Bin 102094 -> 0 bytes
 .../Tuning_and_Troubleshooting/shell-error-01.png  | Bin 74682 -> 0 bytes
 .../Tuning_and_Troubleshooting/shell-error-02.png  | Bin 330735 -> 0 bytes
 .../Tuning_and_Troubleshooting/shell-error-03.png  | Bin 1624375 -> 0 bytes
 .../Tuning_and_Troubleshooting/shell-error-04.png  | Bin 803920 -> 0 bytes
 .../Tuning_and_Troubleshooting/shell-error-05.png  | Bin 179543 -> 0 bytes
 Linkis-Doc-master/zh_CN/Images/after_linkis_cn.png | Bin 645519 -> 0 bytes
 .../zh_CN/Images/before_linkis_cn.png              | Bin 332201 -> 0 bytes
 .../deployment/Linkis1.0_combined_eureka.png       | Bin 134418 -> 0 bytes
 Linkis-Doc-master/zh_CN/README.md                  |  87 ---
 Linkis-Doc-master/zh_CN/SUMMARY.md                 |  69 ---
 .../Tuning_and_Troubleshooting/Configuration.md    | 220 -------
 .../zh_CN/Tuning_and_Troubleshooting/Q&A.md        | 257 --------
 .../zh_CN/Tuning_and_Troubleshooting/README.md     | 112 ----
 .../zh_CN/Tuning_and_Troubleshooting/Tuning.md     |  50 --
 ...\247\345\210\2601.0\346\214\207\345\215\227.md" |  73 ---
 .../zh_CN/Upgrade_Documents/README.md              |   6 -
 .../zh_CN/User_Manual/How_To_Use_Linkis.md         |  20 -
 ...74\225\346\223\216\344\277\241\346\201\257.png" | Bin 89529 -> 0 bytes
 ...56\241\347\220\206\347\225\214\351\235\242.png" | Bin 43765 -> 0 bytes
 ...74\226\350\276\221\347\225\214\351\235\242.png" | Bin 64470 -> 0 bytes
 ...63\250\345\206\214\344\270\255\345\277\203.png" | Bin 327966 -> 0 bytes
 ...37\245\350\257\242\346\214\211\351\222\256.png" | Bin 81788 -> 0 bytes
 ...16\206\345\217\262\347\225\214\351\235\242.png" | Bin 82340 -> 0 bytes
 ...17\230\351\207\217\347\225\214\351\235\242.png" | Bin 40073 -> 0 bytes
 ...11\247\350\241\214\346\227\245\345\277\227.png" | Bin 114314 -> 0 bytes
 ...05\215\347\275\256\347\225\214\351\235\242.png" | Bin 79698 -> 0 bytes
 ...56\241\347\220\206\347\225\214\351\235\242.png" | Bin 39198 -> 0 bytes
 ...72\224\347\224\250\347\261\273\345\236\213.png" | Bin 108864 -> 0 bytes
 ...74\225\346\223\216\344\277\241\346\201\257.png" | Bin 41814 -> 0 bytes
 ...20\206\345\221\230\350\247\206\345\233\276.png" | Bin 80087 -> 0 bytes
 ...74\226\350\276\221\347\233\256\345\275\225.png" | Bin 89919 -> 0 bytes
 ...56\241\347\220\206\347\225\214\351\235\242.png" | Bin 49277 -> 0 bytes
 ...275\277\347\224\250\346\226\207\346\241\243.md" | 193 ------
 ...275\277\347\224\250\346\226\207\346\241\243.md" | 389 -------------
 .../User_Manual/Linkis_Console_User_Manual.md      | 120 ----
 Linkis-Doc-master/zh_CN/User_Manual/README.md      |   8 -
 README.md                                          |  17 -
 src/assets/user/360.png => assets/360.bc39c47a.png | Bin
 .../97\347\211\251\350\201\224.2447251c.png"       | Bin
 assets/AddEngineConn.467c2210.js                   |   1 +
 assets/CliManual.8440dc3f.js                       |   1 +
 assets/ConsoleUserManual.d2af8060.js               |   1 +
 assets/DifferenceBetween1.0&0.x.7e9c261e.js        |   1 +
 .../ECM_all_engine_information.4b4099f5.png        | Bin
 .../ECM_editing_interface.a82c51cd.png             | Bin
 .../ECM_management_interface.764982ae.png          | Bin
 assets/HowToUse.212b1469.js                        |   1 +
 assets/JobSubmission.cf4b12e7.js                   |   1 +
 .../Linkis0.X_newengine_architecture.76e9d9b8.png  | Bin
 .../Linkis0.X_services_list.984b5164.png           | Bin
 .../Linkis1.0_combined_eureka.dad2589e.png         | Bin
 .../Linkis1.0_engineconn_architecture.7d420481.png | Bin
 .../Linkis1.0_newengine_architecture.e98645d5.png  | Bin
 ...Linkis1.0_newengine_initialization.6acbb6c3.png | Bin
 .../Linkis1.0_services_list.72702c4a.png           | Bin
 "assets/T3\345\207\272\350\241\214.1738b528.png"   | Bin 0 -> 6413 bytes
 assets/UserManual.905b8e9a.js                      |   1 +
 .../add_an_engineConn_flow_chart.d10a8d14.png      | Bin
 .../administrator_view.7c4869c3.png                | Bin
 .../after_linkis_en.c3ed71bf.png                   | Bin
 .../before_linkis_en.076cf10c.png                  | Bin
 .../boss\347\233\264\350\201\230.5353720c.png"     | Bin
 ...ce_name_to_view_engine_information.9b608268.png | Bin
 .../code-fix-01.620f0486.png                       | Bin
 .../db-config-01.5aa0a782.png                      | Bin
 .../db-config-02.f05b1586.png                      | Bin
 .../description.95f7a296.png                       | Bin
 assets/distributed.6a61f64e.js                     |   1 +
 .../distributed_deployment.d533f7c3.png            | Bin
 assets/download.8c6e40f3.css                       |   1 +
 assets/download.c3e47cb5.js                        |   1 +
 .../edit_directory.410557fd.png                    | Bin
 assets/engins.2a41b1a0.js                          |   1 +
 .../eureka_registration_center.261760f0.png        | Bin
 assets/event.29571be3.js                           |   1 +
 .../execution.png => assets/execution.2d8c96b7.png | Bin
 .../global_history_interface.68d7d00e.png          | Bin
 .../global_history_query_button.c9058b17.png       | Bin
 .../global_variable_interface.734e4b18.png         | Bin
 .../hive-config-01.e5d22d71.png                    | Bin
 .../incubator-logo.c3572a91.png                    | Bin
 assets/index.07e7576a.css                          |   1 +
 assets/index.2da1dc18.js                           |   1 +
 assets/index.5a6d4e60.js                           |   1 +
 assets/index.77f4f836.css                          |   1 +
 assets/index.82f016e4.css                          |   1 +
 assets/index.8d1f9740.js                           |   1 +
 assets/index.c51fb506.js                           |   1 +
 assets/index.c93f08c9.js                           |   1 +
 .../linkis-exception-01.a30b0cae.png               | Bin
 .../linkis-exception-02.c5d295a9.png               | Bin
 .../linkis-exception-03.8fc2f10f.png               | Bin
 .../linkis-exception-04.bb6736c1.png               | Bin
 .../linkis-exception-05.9b7af564.png               | Bin
 .../linkis-exception-06.ecfa4a11.png               | Bin
 .../linkis-exception-07.a1f28559.png               | Bin
 .../linkis-exception-08.dcdf1ce1.png               | Bin
 .../linkis-exception-09.f06ff470.png               | Bin
 .../linkis-exception-10.49a3d1ba.png               | Bin
 assets/linkis.d0790396.js                          |   1 +
 src/assets/logo.png => assets/logo.fb11029b.png    | Bin
 assets/main.3104c8a7.js                            |   1 +
 .../microservice_management_interface.9a76ac41.png | Bin
 assets/mobtech.b333dc91.png                        | Bin 0 -> 11676 bytes
 .../new_application_type.90ca0c6b.png              | Bin
 .../orchestrate.b395b673.png                       | Bin
 .../overall.png => assets/overall.d0b560e6.png     | Bin
 .../page-show-01.f6ac5799.png                      | Bin
 .../page-show-02.9d59cdcb.png                      | Bin
 .../page-show-03.63498698.png                      | Bin
 .../parameter_configuration_interface.6160c166.png | Bin
 .../physical_tree.6d05f37c.png                     | Bin
 assets/plugin-vue_export-helper.5a098b48.js        |   1 +
 .../queue_set.png => assets/queue_set.349ccfa6.png | Bin
 .../resource_management_interface.1334783f.png     | Bin
 .../result_acquisition.ccd9e593.png                | Bin
 .../shell-error-01.2e9d62b8.png                    | Bin
 .../shell-error-02.fba39b7b.png                    | Bin
 .../shell-error-03.666f92e3.png                    | Bin
 .../shell-error-04.910b89a7.png                    | Bin
 .../shell-error-05.f4057bcc.png                    | Bin
 .../sparksql_run.115bb5a7.png                      | Bin
 assets/structure.1bc4dbfc.js                       |   1 +
 .../submission.22e30fbd.png                        | Bin
 ...ask_execution_log_of_a_single_task.cf40fba8.png | Bin
 assets/team.13ce5e55.css                           |   1 +
 assets/team.c0178c87.js                            |   1 +
 assets/utils.7ca2fb6d.js                           |   1 +
 assets/vendor.12a5b039.js                          |  21 +
 .../workflow.png => assets/workflow.4526f490.png   | Bin
 ...4\270\234\346\226\271\351\200\232.4814e53c.png" | Bin
 ...5\275\251\347\247\221\346\212\200.d1ffcc7d.png" | Bin
 ...5\233\275\347\224\265\347\247\221.864feafc.jpg" | Bin
 ...4\277\241\346\234\215\345\212\241.6242b949.png" | Bin 0 -> 13177 bytes
 ...1\200\232\344\272\221\344\273\223.a785e23f.png" | Bin
 ...5\256\236\351\252\214\345\256\244.46d52eec.png" | Bin 0 -> 11054 bytes
 ...5\276\222\347\247\221\346\212\200.d6b063f3.png" | Bin
 .../\344\276\235\345\233\276.e1935876.png"         | Bin
 ...6\212\200\345\244\247\345\255\246.79502b9d.jpg" | Bin
 ...5\223\227\345\225\246\345\225\246.045c3b9e.jpg" | Bin
 ...5\244\226\345\220\214\345\255\246.9c81d026.png" | Bin
 ...5\244\251\347\277\274\344\272\221.ee336756.png" | Bin
 .../\345\271\263\345\256\211.d0212a59.png"         | Bin
 ...5\244\247\346\225\260\346\215\256.d21c18fc.png" | Bin 0 -> 7862 bytes
 ...1\231\220\345\205\254\345\217\270.66cf4318.png" | Bin
 ...1\255\202\347\275\221\347\273\234.3ec071b8.png" | Bin
 ...5\255\220\345\210\206\346\234\237.55aa406b.png" | Bin
 ...5\272\267\345\250\201\350\247\206.70f8122b.png" | Bin
 ...6\203\263\346\261\275\350\275\246.0123a918.png" | Bin
 ...7\231\276\346\234\233\344\272\221.c2c1293f.png" | Bin
 ...5\210\233\345\225\206\345\237\216.294fde8b.png" | Bin
 ...0\261\241\344\272\221\350\205\276.7417b5e6.png" | Bin
 ...5\210\233\346\231\272\350\236\215.188edcec.png" | Bin
 ...5\244\251\344\277\241\346\201\257.23b0d23c.png" | Bin
 ...4\275\263\347\224\237\346\264\273.b508c1dc.jpg" | Bin
 "assets/\350\215\243\350\200\200.ceda8b1e.png"     | Bin 0 -> 7780 bytes
 ...6\221\251\350\200\266\344\272\221.63ed5828.png" | Bin 0 -> 19705 bytes
 ...6\235\245\346\261\275\350\275\246.be672a01.jpg" | Bin
 ...6\212\200\345\244\247\345\255\246.3762b76e.jpg" | Bin
 ...7\202\271\350\275\257\344\273\266.389df8d5.png" | Bin
 favicon.ico                                        | Bin 0 -> 1595 bytes
 index.html                                         |   5 +-
 info.txt                                           |   5 -
 package-lock.json                                  | 647 ---------------------
 package.json                                       |  21 -
 public/favicon.ico                                 | Bin 4286 -> 0 bytes
 src/App.vue                                        | 249 --------
 src/assets/docs/EngineUsage/hive-config.png        | Bin 44717 -> 0 bytes
 src/assets/docs/EngineUsage/hive-run.png           | Bin 31403 -> 0 bytes
 src/assets/docs/EngineUsage/jdbc-conf.png          | Bin 46113 -> 0 bytes
 src/assets/docs/EngineUsage/jdbc-run.png           | Bin 21937 -> 0 bytes
 src/assets/docs/EngineUsage/pyspakr-run.png        | Bin 43552 -> 0 bytes
 src/assets/docs/EngineUsage/python-config.png      | Bin 47021 -> 0 bytes
 src/assets/docs/EngineUsage/python-run.png         | Bin 61451 -> 0 bytes
 src/assets/docs/EngineUsage/queue-set.png          | Bin 41298 -> 0 bytes
 src/assets/docs/EngineUsage/scala-run.png          | Bin 43959 -> 0 bytes
 src/assets/docs/EngineUsage/shell-run.png          | Bin 100312 -> 0 bytes
 src/assets/docs/EngineUsage/spark-conf.png         | Bin 53397 -> 0 bytes
 src/assets/docs/EngineUsage/sparksql-run.png       | Bin 46611 -> 0 bytes
 src/assets/docs/EngineUsage/workflow.png           | Bin 51259 -> 0 bytes
 src/assets/docs/Linkis_1.0_architecture.png        | Bin 316746 -> 0 bytes
 src/assets/docs/Tuning_and_Troubleshooting/Q&A.png | Bin 72259 -> 0 bytes
 .../Tuning_and_Troubleshooting/code-fix-01.png     | Bin 61855 -> 0 bytes
 .../Tuning_and_Troubleshooting/db-config-01.png    | Bin 157843 -> 0 bytes
 .../Tuning_and_Troubleshooting/db-config-02.png    | Bin 22153 -> 0 bytes
 .../docs/Tuning_and_Troubleshooting/debug-01.png   | Bin 3258 -> 0 bytes
 .../docs/Tuning_and_Troubleshooting/debug-02.png   | Bin 25521 -> 0 bytes
 .../docs/Tuning_and_Troubleshooting/debug-03.png   | Bin 14953 -> 0 bytes
 .../docs/Tuning_and_Troubleshooting/debug-04.png   | Bin 34622 -> 0 bytes
 .../docs/Tuning_and_Troubleshooting/debug-05.png   | Bin 20848 -> 0 bytes
 .../docs/Tuning_and_Troubleshooting/debug-06.png   | Bin 25477 -> 0 bytes
 .../docs/Tuning_and_Troubleshooting/debug-07.png   | Bin 113342 -> 0 bytes
 .../docs/Tuning_and_Troubleshooting/debug-08.png   | Bin 12338 -> 0 bytes
 .../Tuning_and_Troubleshooting/hive-config-01.png  | Bin 27332 -> 0 bytes
 .../linkis-exception-01.png                        | Bin 457236 -> 0 bytes
 .../linkis-exception-02.png                        | Bin 524390 -> 0 bytes
 .../linkis-exception-03.png                        | Bin 264782 -> 0 bytes
 .../linkis-exception-04.png                        | Bin 1014902 -> 0 bytes
 .../linkis-exception-05.png                        | Bin 207746 -> 0 bytes
 .../linkis-exception-06.png                        | Bin 348016 -> 0 bytes
 .../linkis-exception-07.png                        | Bin 842448 -> 0 bytes
 .../linkis-exception-08.png                        | Bin 499442 -> 0 bytes
 .../linkis-exception-09.png                        | Bin 442648 -> 0 bytes
 .../linkis-exception-10.png                        | Bin 149801 -> 0 bytes
 .../Tuning_and_Troubleshooting/page-show-01.png    | Bin 39986 -> 0 bytes
 .../Tuning_and_Troubleshooting/page-show-02.png    | Bin 220102 -> 0 bytes
 .../Tuning_and_Troubleshooting/page-show-03.png    | Bin 230234 -> 0 bytes
 .../searching_keywords.png                         | Bin 53652 -> 0 bytes
 .../Tuning_and_Troubleshooting/shell-error-01.png  | Bin 30629 -> 0 bytes
 .../Tuning_and_Troubleshooting/shell-error-02.png  | Bin 117077 -> 0 bytes
 .../Tuning_and_Troubleshooting/shell-error-03.png  | Bin 516777 -> 0 bytes
 .../Tuning_and_Troubleshooting/shell-error-04.png  | Bin 318990 -> 0 bytes
 .../Tuning_and_Troubleshooting/shell-error-05.png  | Bin 60031 -> 0 bytes
 .../docs/Tunning_And_Troubleshooting/debug-01.png  | Bin 3258 -> 0 bytes
 .../docs/Tunning_And_Troubleshooting/debug-02.png  | Bin 25521 -> 0 bytes
 .../docs/Tunning_And_Troubleshooting/debug-03.png  | Bin 14953 -> 0 bytes
 .../docs/Tunning_And_Troubleshooting/debug-04.png  | Bin 34622 -> 0 bytes
 .../docs/Tunning_And_Troubleshooting/debug-05.png  | Bin 20848 -> 0 bytes
 .../docs/Tunning_And_Troubleshooting/debug-06.png  | Bin 25477 -> 0 bytes
 .../docs/Tunning_And_Troubleshooting/debug-07.png  | Bin 113342 -> 0 bytes
 .../docs/Tunning_And_Troubleshooting/debug-08.png  | Bin 12338 -> 0 bytes
 .../add_an_EngineConn_flow_chart.png               | Bin 59893 -> 0 bytes
 .../docs/architecture/EngineConn/engineconn-01.png | Bin 157753 -> 0 bytes
 .../Gateway/gateway_server_dispatcher.png          | Bin 47910 -> 0 bytes
 .../architecture/Gateway/gateway_server_global.png | Bin 36652 -> 0 bytes
 .../docs/architecture/Gateway/gatway_websocket.png | Bin 16292 -> 0 bytes
 .../LabelManager/label_manager_builder.png         | Bin 62978 -> 0 bytes
 .../LabelManager/label_manager_global.png          | Bin 14988 -> 0 bytes
 .../LabelManager/label_manager_scorer.png          | Bin 72977 -> 0 bytes
 .../docs/architecture/Linkis1.0_architecture.png   | Bin 72168 -> 0 bytes
 .../ContextService/linkis-contextservice-01.png    | Bin 9188 -> 0 bytes
 .../ContextService/linkis-contextservice-02.png    | Bin 4953 -> 0 bytes
 .../linkis-contextservice-cache-01.png             | Bin 5500 -> 0 bytes
 .../linkis-contextservice-cache-02.png             | Bin 11546 -> 0 bytes
 .../linkis-contextservice-cache-03.png             | Bin 53416 -> 0 bytes
 .../linkis-contextservice-cache-04.png             | Bin 15785 -> 0 bytes
 .../linkis-contextservice-cache-05.png             | Bin 1488 -> 0 bytes
 .../linkis-contextservice-client-01.png            | Bin 18839 -> 0 bytes
 .../linkis-contextservice-client-02.png            | Bin 30023 -> 0 bytes
 .../linkis-contextservice-client-03.png            | Bin 11690 -> 0 bytes
 .../ContextService/linkis-contextservice-ha-01.png | Bin 17605 -> 0 bytes
 .../ContextService/linkis-contextservice-ha-02.png | Bin 10781 -> 0 bytes
 .../ContextService/linkis-contextservice-ha-03.png | Bin 41714 -> 0 bytes
 .../ContextService/linkis-contextservice-ha-04.png | Bin 17550 -> 0 bytes
 .../linkis-contextservice-listener-01.png          | Bin 14209 -> 0 bytes
 .../linkis-contextservice-listener-02.png          | Bin 21055 -> 0 bytes
 .../linkis-contextservice-listener-03.png          | Bin 17902 -> 0 bytes
 .../linkis-contextservice-persistence-01.png       | Bin 107735 -> 0 bytes
 .../linkis-contextservice-search-01.png            | Bin 11874 -> 0 bytes
 .../linkis-contextservice-search-02.png            | Bin 8266 -> 0 bytes
 .../linkis-contextservice-search-03.png            | Bin 11321 -> 0 bytes
 .../linkis-contextservice-search-04.png            | Bin 9101 -> 0 bytes
 .../linkis-contextservice-search-05.png            | Bin 9133 -> 0 bytes
 .../linkis-contextservice-search-06.png            | Bin 11334 -> 0 bytes
 .../linkis-contextservice-search-07.png            | Bin 11391 -> 0 bytes
 .../linkis-contextservice-service-01.png           | Bin 27470 -> 0 bytes
 .../linkis-contextservice-service-02.png           | Bin 37730 -> 0 bytes
 .../linkis-contextservice-service-03.png           | Bin 12269 -> 0 bytes
 .../linkis-contextservice-service-04.png           | Bin 13462 -> 0 bytes
 src/assets/docs/architecture/bml_02.png            | Bin 55227 -> 0 bytes
 .../architecture/linkis_engineconnplugin_01.png    | Bin 8146 -> 0 bytes
 src/assets/docs/architecture/linkis_intro_01.png   | Bin 142195 -> 0 bytes
 src/assets/docs/architecture/linkis_intro_02.png   | Bin 102080 -> 0 bytes
 .../architecture/linkis_microservice_gov_01.png    | Bin 46380 -> 0 bytes
 .../architecture/linkis_microservice_gov_03.png    | Bin 30388 -> 0 bytes
 .../docs/architecture/linkis_publicservice_01.png  | Bin 25269 -> 0 bytes
 .../publicenhencement_architecture.png             | Bin 24844 -> 0 bytes
 .../docs/deploy/Linkis1.0_combined_eureka.png      | Bin 55811 -> 0 bytes
 src/assets/docs/wedatasphere_contact_01.png        | Bin 217762 -> 0 bytes
 src/assets/docs/wedatasphere_stack_Linkis.png      | Bin 203466 -> 0 bytes
 src/assets/fqa/Q&A.png                             | Bin 72259 -> 0 bytes
 src/assets/fqa/debug-01.png                        | Bin 3258 -> 0 bytes
 src/assets/fqa/debug-02.png                        | Bin 25521 -> 0 bytes
 src/assets/fqa/debug-03.png                        | Bin 14953 -> 0 bytes
 src/assets/fqa/debug-04.png                        | Bin 34622 -> 0 bytes
 src/assets/fqa/debug-05.png                        | Bin 20848 -> 0 bytes
 src/assets/fqa/debug-06.png                        | Bin 25477 -> 0 bytes
 src/assets/fqa/debug-07.png                        | Bin 113342 -> 0 bytes
 src/assets/fqa/debug-08.png                        | Bin 12338 -> 0 bytes
 src/assets/fqa/searching_keywords.png              | Bin 53652 -> 0 bytes
 src/assets/home/after_linkis_zh.png                | Bin 188079 -> 0 bytes
 src/assets/home/before_linkis_zh.png               | Bin 101665 -> 0 bytes
 src/assets/image/github_user.png                   | Bin 4677 -> 0 bytes
 "src/assets/user/T3\345\207\272\350\241\214.png"   | Bin 7258 -> 0 bytes
 src/assets/user/mobtech..png                       | Bin 1829 -> 0 bytes
 ...70\207\347\247\221\351\207\207\347\255\221.png" | Bin 2468 -> 0 bytes
 ...60\221\347\224\237\351\223\266\350\241\214.jpg" | Bin 16640 -> 0 bytes
 ...70\255\345\233\275\347\224\265\344\277\241.png" | Bin 6468 -> 0 bytes
 ...34\211\351\231\220\345\205\254\345\217\270.png" | Bin 10006 -> 0 bytes
 ...61\237\345\256\236\351\252\214\345\256\244.png" | Bin 13145 -> 0 bytes
 ...72\244\351\200\232\351\223\266\350\241\214.jpg" | Bin 8099 -> 0 bytes
 ...72\254\344\270\234\346\225\260\347\247\221.jpg" | Bin 7895 -> 0 bytes
 ...77\241\347\224\250\347\224\237\346\264\273.png" | Bin 3978 -> 0 bytes
 ...14\273\344\277\235\347\247\221\346\212\200.png" | Bin 2083 -> 0 bytes
 ...72\221\345\276\231\347\247\221\346\212\200.png" | Bin 15448 -> 0 bytes
 ...03\275\345\244\247\346\225\260\346\215\256.png" | Bin 13462 -> 0 bytes
 ...13\233\345\225\206\351\223\266\350\241\214.jpg" | Bin 10462 -> 0 bytes
 ...31\276\344\277\241\351\223\266\350\241\214.jpg" | Bin 6739 -> 0 bytes
 ...76\216\345\233\242\347\202\271\350\257\204.jpg" | Bin 10596 -> 0 bytes
 ...05\276\350\256\257\350\264\242\347\273\217.jpg" | Bin 14500 -> 0 bytes
 ...20\250\346\221\251\350\200\266\344\272\221.png" | Bin 10090 -> 0 bytes
 ...02\256\346\224\277\351\223\266\350\241\214.jpg" | Bin 14657 -> 0 bytes
 src/components/HelloWorld.vue                      |  40 --
 src/docs/architecture/AddEngineConn_en.md          | 105 ----
 src/docs/architecture/AddEngineConn_zh.md          | 111 ----
 .../architecture/DifferenceBetween1.0&0.x_en.md    |  50 --
 .../architecture/DifferenceBetween1.0&0.x_zh.md    |  98 ----
 src/docs/architecture/JobSubmission_en.md          | 138 -----
 src/docs/architecture/JobSubmission_zh.md          | 165 ------
 src/docs/deploy/distributed_en.md                  |  98 ----
 src/docs/deploy/distributed_zh.md                  | 100 ----
 src/docs/deploy/engins_en.md                       |  82 ---
 src/docs/deploy/engins_zh.md                       | 106 ----
 src/docs/deploy/linkis_en.md                       | 246 --------
 src/docs/deploy/linkis_zh.md                       | 256 --------
 src/docs/deploy/main_en.md                         |   1 -
 src/docs/deploy/main_zh.md                         |   1 -
 src/docs/deploy/structure_en.md                    | 198 -------
 src/docs/deploy/structure_zh.md                    | 186 ------
 src/docs/manual/CliManual_en.md                    | 193 ------
 src/docs/manual/CliManual_zh.md                    | 193 ------
 src/docs/manual/ConsoleUserManual_en.md            | 120 ----
 src/docs/manual/ConsoleUserManual_zh.md            | 120 ----
 src/docs/manual/HowToUse_en.md                     |  28 -
 src/docs/manual/HowToUse_zh.md                     |  20 -
 src/docs/manual/UserManual_en.md                   | 400 -------------
 src/docs/manual/UserManual_zh.md                   | 389 -------------
 src/i18n/en.json                                   |  64 --
 src/i18n/index.js                                  |  48 --
 src/i18n/zh.json                                   |  63 --
 src/js/config.js                                   |   9 -
 src/js/utils.js                                    |  10 -
 src/main.js                                        |  21 -
 src/pages/blog/AddEngineConn_en.md                 | 105 ----
 src/pages/blog/AddEngineConn_zh.md                 | 111 ----
 src/pages/blog/blogdata_en.js                      |  13 -
 src/pages/blog/blogdata_zh.js                      |  13 -
 src/pages/blog/event.vue                           |  38 --
 src/pages/blog/index.vue                           |  64 --
 src/pages/docs/architecture/AddEngineConn.vue      |  13 -
 .../docs/architecture/DifferenceBetween1.0&0.x.vue |  13 -
 src/pages/docs/architecture/JobSubmission.vue      |  13 -
 src/pages/docs/deploy/distributed.vue              |  13 -
 src/pages/docs/deploy/engins.vue                   |  13 -
 src/pages/docs/deploy/linkis.vue                   |  13 -
 src/pages/docs/deploy/main.vue                     |  13 -
 src/pages/docs/deploy/structure.vue                |  13 -
 src/pages/docs/docsdata_en.js                      |  62 --
 src/pages/docs/docsdata_zh.js                      |  62 --
 src/pages/docs/index.vue                           | 105 ----
 src/pages/docs/manual/CliManual.vue                |  13 -
 src/pages/docs/manual/ConsoleUserManual.vue        |  13 -
 src/pages/docs/manual/HowToUse.vue                 |  13 -
 src/pages/docs/manual/UserManual.vue               |  13 -
 src/pages/download.vue                             |  64 --
 src/pages/faq/faq_en.md                            | 255 --------
 src/pages/faq/faq_zh.md                            | 257 --------
 src/pages/faq/index.vue                            |  46 --
 src/pages/home/data.js                             | 585 -------------------
 src/pages/home/img.js                              |  50 --
 src/pages/home/index.vue                           | 232 --------
 src/pages/team/team.vue                            | 124 ----
 src/pages/team/teamdata_en.js                      | 130 -----
 src/pages/team/teamdata_zh.js                      | 130 -----
 src/router.js                                      |  91 ---
 src/style/base.less                                | 146 -----
 src/style/variable.less                            |   2 -
 vite.config.js                                     |  16 -
 804 files changed, 52 insertions(+), 19990 deletions(-)

diff --git a/.vscode/extensions.json b/.vscode/extensions.json
deleted file mode 100644
index 3dc5b08..0000000
--- a/.vscode/extensions.json
+++ /dev/null
@@ -1,3 +0,0 @@
-{
-  "recommendations": ["johnsoncodehk.volar"]
-}
diff --git a/Linkis-Doc-master/LANGS.md b/Linkis-Doc-master/LANGS.md
deleted file mode 100644
index 5f72105..0000000
--- a/Linkis-Doc-master/LANGS.md
+++ /dev/null
@@ -1,2 +0,0 @@
-* [English](en_US)
-* [中文](zh_CN)
\ No newline at end of file
diff --git a/Linkis-Doc-master/README.md b/Linkis-Doc-master/README.md
deleted file mode 100644
index bc802e0..0000000
--- a/Linkis-Doc-master/README.md
+++ /dev/null
@@ -1,114 +0,0 @@
-Linkis
-==========
-
-[![License](https://img.shields.io/badge/license-Apache%202-4EB1BA.svg)](https://www.apache.org/licenses/LICENSE-2.0.html)
-
-[English](README.md) | [中文](README_CN.md)
-
-# Introduction
-
- Linkis builds a layer of computation middleware between upper applications and underlying engines. By using standard interfaces such as REST/WS/JDBC provided by Linkis, upper applications can easily access underlying engines such as MySQL/Spark/Hive/Presto/Flink, and at the same time share user resources such as unified variables, scripts, UDFs, functions and resource files across those applications.
-
-As a computation middleware, Linkis provides powerful connectivity, reuse, orchestration, expansion, and governance capabilities. By decoupling the application layer and the engine layer, it simplifies the complex network call relationship, and thus reduces the overall complexity and saves the development and maintenance costs as well.
-
-Since the first release of Linkis in 2019, it has accumulated more than **700** trial companies and **1000+** sandbox trial users, involving diverse industries from finance, banking and telecommunications to manufacturing and internet companies. Many companies have already adopted Linkis as a unified entrance for the underlying computation and storage engines of their big data platforms.
-
-
-![linkis-intro-01](https://user-images.githubusercontent.com/11496700/84615498-c3030200-aefb-11ea-9b16-7e4058bf6026.png)
-
-![linkis-intro-03](https://user-images.githubusercontent.com/11496700/84615483-bb435d80-aefb-11ea-81b5-67f62b156628.png)
-
-# Features
-
-- **Support for diverse underlying computation storage engines**.  
-    Currently supported computation/storage engines: Spark, Hive, Python, Presto, ElasticSearch, MLSQL, TiSpark, JDBC, Shell, etc;      
-    Computation/storage engines to be supported: Flink, Impala, etc;      
-    Supported scripting languages: SparkSQL, HiveQL, Python, Shell, Pyspark, R, Scala and JDBC, etc.  
-  
-- **Powerful task/request governance capabilities**. With services such as Orchestrator, Label Manager and a customized Spring Cloud Gateway, Linkis provides multi-level-label-based, cross-cluster/cross-IDC fine-grained routing, load balancing, multi-tenancy, traffic control, resource control, and orchestration strategies like dual-active, active-standby, etc.  
-
-- **Full-stack computation/storage engine support**. As a computation middleware, Linkis receives, executes and manages tasks and requests for various computation/storage engines, including batch tasks, interactive query tasks, real-time streaming tasks and storage tasks;
-
-- **Resource management capabilities**.  ResourceManager is not only capable of managing resources for Yarn and Linkis EngineManager as in Linkis 0.X, but is also able to provide label-based multi-level resource allocation and recycling, giving it powerful resource management capabilities across multiple Yarn clusters and multiple computation resource types;
-
-- **Unified Context Service**. Generates a Context ID for each task/request, and associates and manages user and system resource files (JAR, ZIP, Properties, etc.), result sets, parameter variables, functions, etc. across users, systems and computing engines. Set once, referenced everywhere automatically;
-
-- **Unified materials**. System and user-level unified material management, which can be shared and transferred across users and systems.
-
-# Supported engine types
-
-| **Engine** | **Supported Version** | **Linkis 0.X version requirement**| **Linkis 1.X version requirement** | **Description** |
-|:---- |:---- |:---- |:---- |:---- |
-|Flink |1.11.0|\>=dev-0.12.0, PR #703 not merged yet.|ongoing|	Flink EngineConn. Supports FlinkSQL code, and also supports Flink Jar to Linkis Manager to start a new Yarn application.|
-|Impala|\>=3.2.0, CDH >=6.3.0|\>=dev-0.12.0, PR #703 not merged yet.|ongoing|Impala EngineConn. Supports Impala SQL.|
-|Presto|\>= 0.180|\>=0.11.0|ongoing|Presto EngineConn. Supports Presto SQL.|
-|ElasticSearch|\>=6.0|\>=0.11.0|ongoing|ElasticSearch EngineConn. Supports SQL and DSL code.|
-|Shell|Bash >=2.0|\>=0.9.3|\>=1.0.0_rc1|Shell EngineConn. Supports shell code.|
-|MLSQL|\>=1.1.0|\>=0.9.1|ongoing|MLSQL EngineConn. Supports MLSQL code.|
-|JDBC|MySQL >=5.0, Hive >=1.2.1|\>=0.9.0|\>=1.0.0_rc1|JDBC EngineConn. Supports MySQL and HiveQL code.|
-|Spark|Apache 2.0.0~2.4.7, CDH >=5.4.0|\>=0.5.0|\>=1.0.0_rc1|Spark EngineConn. Supports SQL, Scala, Pyspark and R code.|
-|Hive|Apache >=1.0.0, CDH >=5.4.0|\>=0.5.0|\>=1.0.0_rc1|Hive EngineConn. Supports HiveQL code.|
-|Hadoop|Apache >=2.6.0, CDH >=5.4.0|\>=0.5.0|ongoing|Hadoop EngineConn. Supports Hadoop MR/YARN application.|
-|Python|\>=2.6|\>=0.5.0|\>=1.0.0_rc1|Python EngineConn. Supports python code.|
-|TiSpark|1.1|\>=0.5.0|ongoing|TiSpark EngineConn. Support querying TiDB data by SparkSQL.|
-
-# Download
-
-Please go to the [Linkis releases page](https://github.com/WeBankFinTech/Linkis/wiki/Linkis-Releases) to download a compiled distribution or a source code package of Linkis.
-
-# Compile and deploy
-Please follow [Compile Guide](https://github.com/WeBankFinTech/Linkis-Doc/blob/master/zh_CN/Development_Documents/Linkis%E7%BC%96%E8%AF%91%E6%96%87%E6%A1%A3.md) to compile Linkis from source code.  
-Please refer to [Deployment_Documents](https://github.com/WeBankFinTech/Linkis-Doc/tree/master/zh_CN/Deployment_Documents) to do the deployment. 
-
-# Examples and Guidance
-You can find examples and guidance for how to use and manage Linkis in [User_Manual](https://github.com/WeBankFinTech/Linkis-Doc/tree/master/zh_CN/User_Manual), [Engine_Usage_Documents](https://github.com/WeBankFinTech/Linkis-Doc/tree/master/zh_CN/Engine_Usage_Documentations) and [API_Documents](https://github.com/WeBankFinTech/Linkis-Doc/tree/master/zh_CN/API_Documentations).
-
-# Documentation
-
-The documentation of Linkis is in [Linkis-Doc](https://github.com/WeBankFinTech/Linkis-Doc) or in the [wiki](https://github.com/WeBankFinTech/Linkis/wiki).
-
-# Architecture
-Linkis services can be divided into three categories: computation governance services, public enhancement services and microservice governance services.  
-- The computation governance services support the three major stages of processing a task/request: submission -> preparation -> execution;  
-- The public enhancement services, including the material library service, context service, and data source service;  
-- The microservice governance services, including Spring Cloud Gateway, Eureka and Open Feign.
-
-Below is the Linkis architecture diagram. You can find more detailed architecture docs in [Linkis-Doc/Architecture](https://github.com/WeBankFinTech/Linkis-Doc/tree/master/zh_CN/Architecture_Documents).
-![architecture](en_US/Images/Linkis_1.0_architecture.png)
-
-Based on Linkis the computation middleware, we've built a lot of applications and tools on top of it in the big data platform suite [WeDataSphere](https://github.com/WeBankFinTech/WeDataSphere). Below are the currently available open-source projects.
-
-![wedatasphere_stack_Linkis](en_US/Images/wedatasphere_stack_Linkis.png)
-
-- [**DataSphere Studio** - Data Application Integration& Development Framework](https://github.com/WeBankFinTech/DataSphereStudio)
-
-- [**Scriptis** - Data Development IDE Tool](https://github.com/WeBankFinTech/Scriptis)
-
-- [**Visualis** - Data Visualization Tool](https://github.com/WeBankFinTech/Visualis)
-
-- [**Schedulis** - Workflow Task Scheduling Tool](https://github.com/WeBankFinTech/Schedulis)
-
-- [**Qualitis** - Data Quality Tool](https://github.com/WeBankFinTech/Qualitis)
-
-- [**MLLabis** - Machine Learning Notebook IDE](https://github.com/WeBankFinTech/prophecis)
-
-More projects upcoming, please stay tuned.
-
-# Contributing
-
-Contributions are always welcome; we need more contributors to build Linkis together, whether through code, documentation, or other support that helps the community.  
-For code and documentation contributions, please follow the [contribution guide](https://github.com/WeBankFinTech/Linkis/blob/master/Contributing_CN.md).
-
-# Contact Us
-
-For any questions or suggestions, please kindly submit an issue.  
-You can scan the QR code below to join our WeChat and QQ groups for a quicker response.
-
-![introduction05](en_US/Images/wedatasphere_contact_01.png)
-
-Meetup videos on [Bilibili](https://space.bilibili.com/598542776?from=search&seid=14344213924133040656).
-
-# Who is Using Linkis
-
-We opened [an issue](https://github.com/WeBankFinTech/Linkis/issues/23) for users to give feedback and record who is using Linkis.  
-Since the first release of Linkis in 2019, it has accumulated more than **700** trial companies and **1000+** sandbox trial users, involving diverse industries from finance, banking and telecommunications to manufacturing and internet companies.
\ No newline at end of file
diff --git a/Linkis-Doc-master/README_CN.md b/Linkis-Doc-master/README_CN.md
deleted file mode 100644
index e926d6e..0000000
--- a/Linkis-Doc-master/README_CN.md
+++ /dev/null
@@ -1,105 +0,0 @@
-Linkis
-============
-
-[![License](https://img.shields.io/badge/license-Apache%202-4EB1BA.svg)](https://www.apache.org/licenses/LICENSE-2.0.html)
-
-[English](README.md) | [中文](README_CN.md)
-
-# Introduction
-
-Linkis builds a layer of computation middleware between upper-layer applications and underlying engines. Through standard interfaces such as REST/WebSocket/JDBC provided by Linkis, upper-layer applications can easily connect to and access underlying engines such as MySQL/Spark/Hive/Presto/Flink, while sharing user resources such as variables, scripts, functions and resource files across those applications.  
-As a computation middleware, Linkis provides powerful connectivity, reuse, orchestration, extension and governance capabilities. By decoupling the application layer from the engine layer, it simplifies complex network call relationships, reduces overall complexity, and saves development and maintenance costs.  
-Since its open-source release in 2019, Linkis has accumulated more than 700 trial companies and 1000+ sandbox trial users, covering industries such as finance, telecommunications, manufacturing and the internet. Many companies already use Linkis as the unified entrance to the underlying computation and storage engines of their big data platforms, and as a governance tool for computation requests/tasks.
-
-![Before Linkis](zh_CN/Images/before_linkis_cn.png)
-
-![After Linkis](zh_CN/Images/after_linkis_cn.png)
-
-# Core Features
-
-- **Rich support for underlying computation/storage engines**.  
-    **Currently supported computation/storage engines**: Spark, Hive, Python, Presto, ElasticSearch, MLSQL, TiSpark, JDBC, Shell, etc.  
-    **Computation/storage engines being supported**: Flink, Impala, etc.  
-    **Supported scripting languages**: SparkSQL, HiveQL, Python, Shell, Pyspark, R, Scala, JDBC, etc.  
-- **Powerful computation governance capabilities**. Based on services such as Orchestrator, Label Manager and a customized Spring Cloud Gateway, Linkis provides multi-level-label-based, cross-cluster/cross-IDC fine-grained routing, load balancing, multi-tenancy, traffic control, resource control, and orchestration strategies such as dual-active and active-standby.  
-- **Full-stack computation/storage engine support**. Able to receive, execute and manage tasks and requests for various computation/storage engines, including offline batch tasks, interactive query tasks, real-time streaming tasks and storage tasks;
-- **Resource management capabilities**. ResourceManager not only keeps the Linkis 0.X ability to manage resources for Yarn and Linkis EngineManager, but will also provide label-based multi-level resource allocation and recycling, giving it powerful resource management across clusters and computation resource types.
-- **Unified context service**. Generates a context ID for each computation task, and associates and manages user and system resource files (JAR, ZIP, Properties, etc.), result sets, parameter variables, functions, etc. across users, systems and computing engines: set once, referenced everywhere automatically;
-- **Unified materials**. System- and user-level material management that can be shared and transferred across users and systems.
-
-# Supported Engine Types
-
-| **Engine** | **Engine Version** | **Linkis 0.X Requirement** | **Linkis 1.X Requirement** | **Description** |
-|:---- |:---- |:---- |:---- |:---- |
-|Flink |1.11.0|\>=dev-0.12.0, PR #703 not merged yet|ongoing|Flink EngineConn. Supports FlinkSQL code, and also supports starting a new Yarn application as a Flink Jar.|
-|Impala|\>=3.2.0, CDH >=6.3.0|\>=dev-0.12.0, PR #703 not merged yet|ongoing|Impala EngineConn. Supports Impala SQL code.|
-|Presto|\>= 0.180|\>=0.11.0|ongoing|Presto EngineConn. Supports Presto SQL code.|
-|ElasticSearch|\>=6.0|\>=0.11.0|ongoing|ElasticSearch EngineConn. Supports SQL and DSL code.|
-|Shell|Bash >=2.0|\>=0.9.3|\>=1.0.0_rc1|Shell EngineConn. Supports Bash shell code.|
-|MLSQL|\>=1.1.0|\>=0.9.1|ongoing|MLSQL EngineConn. Supports MLSQL code.|
-|JDBC|MySQL >=5.0, Hive >=1.2.1|\>=0.9.0|\>=1.0.0_rc1|JDBC EngineConn. Supports MySQL and HiveQL, and can be quickly extended to other engines that provide a JDBC driver, such as Oracle.|
-|Spark|Apache 2.0.0~2.4.7, CDH >=5.4.0|\>=0.5.0|\>=1.0.0_rc1|Spark EngineConn. Supports SQL, Scala, Pyspark and R code.|
-|Hive|Apache >=1.0.0, CDH >=5.4.0|\>=0.5.0|\>=1.0.0_rc1|Hive EngineConn. Supports HiveQL code.|
-|Hadoop|Apache >=2.6.0, CDH >=5.4.0|\>=0.5.0|ongoing|Hadoop EngineConn. Supports Hadoop MR/YARN applications.|
-|Python|\>=2.6|\>=0.5.0|\>=1.0.0_rc1|Python EngineConn. Supports python code.|
-|TiSpark|1.1|\>=0.5.0|ongoing|TiSpark EngineConn. Supports querying TiDB with SparkSQL.|
-
-# Download
-
-Please go to the [Linkis releases page](https://github.com/WeBankFinTech/Linkis/wiki/Linkis-Releases) to download a compiled distribution or a source code package of Linkis.
-
-# Compile and Deploy
-Please follow the [Compile Guide](https://github.com/WeBankFinTech/Linkis-Doc/blob/master/zh_CN/Development_Documents/Linkis%E7%BC%96%E8%AF%91%E6%96%87%E6%A1%A3.md) to compile Linkis from source code.  
-Please refer to the [Deployment Documents](https://github.com/WeBankFinTech/Linkis-Doc/tree/master/zh_CN/Deployment_Documents) to deploy Linkis.
-
-# Examples and Guidance
-See the [User Manual](https://github.com/WeBankFinTech/Linkis-Doc/tree/master/zh_CN/User_Manual), the [Engine Usage Documents](https://github.com/WeBankFinTech/Linkis-Doc/tree/master/zh_CN/Engine_Usage_Documentations) and the [API Documents](https://github.com/WeBankFinTech/Linkis-Doc/tree/master/zh_CN/API_Documentations) for examples and guidance on how to use and manage Linkis.
-
-# Documentation
-
-The complete Linkis documentation is available in [Linkis-Doc](https://github.com/WeBankFinTech/Linkis-Doc) or in the [wiki](https://github.com/WeBankFinTech/Linkis/wiki).  
-
-# Architecture Overview
-Linkis is developed on a microservice architecture; its services can be divided into three categories: computation governance services, public enhancement services and microservice governance services.  
-- The computation governance services support the three major stages of processing a task/request: submission -> preparation -> execution;
-- The public enhancement services include the context service, the material management service and the data source service;
-- The microservice governance services include a customized Spring Cloud Gateway, Eureka and Open Feign.
-
-Below is the Linkis architecture diagram. More detailed architecture documents can be found in [Linkis-Doc/Architecture](https://github.com/WeBankFinTech/Linkis-Doc/tree/master/zh_CN/Architecture_Documents).
-![architecture](en_US/Images/Linkis_1.0_architecture.png)
-
-Based on Linkis, the computation middleware, we have built many applications and tool systems in the big data platform suite [WeDataSphere](https://github.com/WeBankFinTech/WeDataSphere). Below are the currently available open-source projects.
-
-![wedatasphere_stack_Linkis](en_US/Images/wedatasphere_stack_Linkis.png)
-
-- [**DataSphere Studio** - Data Application Integration & Development Framework](https://github.com/WeBankFinTech/DataSphereStudio)
-
-- [**Scriptis** - Data Development IDE Tool](https://github.com/WeBankFinTech/Scriptis)
-
-- [**Visualis** - Data Visualization Tool](https://github.com/WeBankFinTech/Visualis)
-
-- [**Schedulis** - Workflow Task Scheduling Tool](https://github.com/WeBankFinTech/Schedulis)
-
-- [**Qualitis** - Data Quality Tool](https://github.com/WeBankFinTech/Qualitis)
-
-- [**MLLabis** - Containerized Machine Learning Notebook Environment](https://github.com/WeBankFinTech/prophecis)
-
-More projects are being prepared for open source; stay tuned.
-
-# Contributing
-
-We welcome and look forward to more contributors joining us to build Linkis, whether through code, documentation, or other contributions that help the community.  
-Please follow the [contribution guide](https://github.com/WeBankFinTech/Linkis/blob/master/Contributing_CN.md) for code and documentation contributions.
-
-# Contact Us
-
-For any questions or suggestions about Linkis, please submit an issue so that they can be tracked, handled and shared as community experience.  
-You can also scan the QR code below to join our WeChat/QQ groups for a quicker response.
-![introduction05](en_US/Images/wedatasphere_contact_01.png)
-
-Meetup videos on [Bilibili](https://space.bilibili.com/598542776?from=search&seid=14344213924133040656).
-
-# Who is Using Linkis
-
-We created [an issue](https://github.com/WeBankFinTech/Linkis/issues/23) for users to give feedback and record who is using Linkis.  
-Since its open-source release in 2019, Linkis has accumulated more than 700 trial companies and 1000+ sandbox trial users, covering industries such as finance, telecommunications, manufacturing and the internet.
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/API_Documentations/JDBC_API_Document.md b/Linkis-Doc-master/en_US/API_Documentations/JDBC_API_Document.md
deleted file mode 100644
index 72b3f3a..0000000
--- a/Linkis-Doc-master/en_US/API_Documentations/JDBC_API_Document.md
+++ /dev/null
@@ -1,45 +0,0 @@
-# Task Submission and Execution: JDBC API Document
-### 1. Import the Dependency Module
-The first way is to depend on the JDBC module in your pom:  
-```xml
-<dependency>
-    <groupId>com.webank.wedatasphere.linkis</groupId>
-    <artifactId>linkis-ujes-jdbc</artifactId>
-    <version>${linkis.version}</version>
- </dependency>
-```  
-**Note:** The module has not been deployed to the central repository. You need to execute `mvn install -Dmaven.test.skip=true` in the ujes/jdbc directory to install it locally.
-
-**The second way is through packaging and compilation:**
-1. Enter the ujes/jdbc directory of the Linkis project and run `mvn assembly:assembly -Dmaven.test.skip=true` in the terminal to package the module.
-This command skips unit tests and test-code compilation, and packages the dependencies required by the JDBC module into the jar.  
-2. After packaging completes, two jars are generated in JDBC's target directory; the one with "dependencies" in its name is the jar we need.  
-### 2. Create a Test Class
-Create a Java test class LinkisClientImplTestJ; the meaning of each step is explained in the comments:  
-```java
- public static void main(String[] args) throws SQLException, ClassNotFoundException {
-
-        //1. Load driver class:com.webank.wedatasphere.linkis.ujes.jdbc.UJESSQLDriver
-        Class.forName("com.webank.wedatasphere.linkis.ujes.jdbc.UJESSQLDriver");
-
-        //2. Get connection:jdbc:linkis://gatewayIP:gatewayPort
-        //   the front-end account password
-        Connection connection =  DriverManager.getConnection("jdbc:linkis://127.0.0.1:9001","username","password");
-
-        //3. Create statement and execute query
-        Statement st= connection.createStatement();
-        ResultSet rs=st.executeQuery("show tables");
-        //4. Processing the returned results of the database (using the ResultSet class)
-        while (rs.next()) {
-            ResultSetMetaData metaData = rs.getMetaData();
-            for (int i = 1; i <= metaData.getColumnCount(); i++) {
-                System.out.print(metaData.getColumnName(i) + ":" +metaData.getColumnTypeName(i)+": "+ rs.getObject(i) + "    ");
-            }
-            System.out.println();
-        }
-        // close resources
-        rs.close();
-        st.close();
-        connection.close();
-    }
-```
\ No newline at end of file
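
For reference, a self-contained variant of the deleted example above, using try-with-resources so the connection, statement and result set are always closed. The driver class and connection string are the ones documented in the file; the host, user name and password remain placeholders:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.sql.Statement;

public class LinkisJdbcExample {
    public static void main(String[] args) throws SQLException, ClassNotFoundException {
        // Load the Linkis UJES JDBC driver documented above
        Class.forName("com.webank.wedatasphere.linkis.ujes.jdbc.UJESSQLDriver");

        // try-with-resources closes the connection, statement and result set automatically
        try (Connection connection = DriverManager.getConnection(
                     "jdbc:linkis://127.0.0.1:9001", "username", "password");
             Statement st = connection.createStatement();
             ResultSet rs = st.executeQuery("show tables")) {
            ResultSetMetaData metaData = rs.getMetaData();
            while (rs.next()) {
                for (int i = 1; i <= metaData.getColumnCount(); i++) {
                    System.out.print(metaData.getColumnName(i) + ": " + rs.getObject(i) + "    ");
                }
                System.out.println();
            }
        }
    }
}
```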
diff --git a/Linkis-Doc-master/en_US/API_Documentations/Linkis_task_submission_and_execution_RestAPI_document.md b/Linkis-Doc-master/en_US/API_Documentations/Linkis_task_submission_and_execution_RestAPI_document.md
deleted file mode 100644
index a7fb568..0000000
--- a/Linkis-Doc-master/en_US/API_Documentations/Linkis_task_submission_and_execution_RestAPI_document.md
+++ /dev/null
@@ -1,170 +0,0 @@
-# Linkis Task Submission and Execution REST API Document
-
-- The Linkis Restful interface returns results in the following standard format:
-
-```json
-{
- "method": "",
- "status": 0,
- "message": "",
- "data": {}
-}
-```
-
-**Convention**:
-
- - method: returns the requested Restful API URI; mainly used in WebSocket mode.
- - status: returns the status information, where -1 means not logged in, 0 means success, 1 means error, 2 means verification failed, and 3 means no access to the interface.
- - data: returns the specific data.
- - message: returns the prompt message for the request. If status is not 0, message is an error message, and data may contain a stack field with the specific stack information.
- 
-For more information about the Linkis Restful interface specification, please refer to: [Linkis Restful Interface Specification](https://github.com/WeBankFinTech/Linkis-Doc/blob/master/zh_CN/Development_Documents/Development_Specification/API.md)
-
-### 1). Submit for execution
-
-- Interface `/api/rest_j/v1/entrance/execute`
-
-- Submission method `POST`
-
-```json
-{
-    "executeApplicationName": "hive", //Engine type
-    "requestApplicationName": "dss", //Client service type
-    "executionCode": "show tables",
-    "params": {"variable": {}, "configuration": {}},
-    "runType": "hql", //The type of script to run
-    "source": {"scriptPath":"file:///tmp/hadoop/1.hql"}
-}
-```
-
-- Interface `/api/rest_j/v1/entrance/submit`
-
-- Submission method `POST`
-
-```json
-{
-    "executionContent": {"code": "show tables", "runType": "sql"},
-    "params": {"variable": {}, "configuration": {}},
-    "source": {"scriptPath": "file:///mnt/bdp/hadoop/1.hql"},
-    "labels": {
-        "engineType": "spark-2.4.3",
-        "userCreator": "hadoop-IDE"
-    }
-}
-```
-
-
-- Example response
-
-```json
-{
- "method": "/api/rest_j/v1/entrance/execute",
- "status": 0,
- "message": "Request executed successfully",
- "data": {
-   "execID": "030418IDEhivebdpdwc010004:10087IDE_hadoop_21",
-   "taskID": "123"
- }
-}
-```
-
-- execID is the unique execution ID generated for a task once it is submitted to Linkis. It is of type String and is only meaningful while the task is running, similar to a PID. The ExecID is designed as `(requestApplicationName length)(executeAppName length)(Instance length)${requestApplicationName}${executeApplicationName}${entranceInstance information ip+port}${requestApplicationName}_${umUser}_${index}`
-
-- taskID is the unique ID of the task submitted by the user. It is generated by database auto-increment and is of type Long
-
-
-### 2). Get status
-
-- Interface `/api/rest_j/v1/entrance/${execID}/status`
-
-- Submission method `GET`
-
-- Example response
-
-```json
-{
- "method": "/api/rest_j/v1/entrance/{execID}/status",
- "status": 0,
- "message": "Get status successful",
- "data": {
-   "execID": "${execID}",
-   "status": "Running"
- }
-}
-```
-
-### 3).Get logs
-
-- Interface `/api/rest_j/v1/entrance/${execID}/log?fromLine=${fromLine}&size=${size}`
-
-- Submission method `GET`
-
-- The request parameter fromLine is the line number to start reading from, and size is the number of log lines returned by this request
-
-- Example response; the returned fromLine should be passed as the fromLine parameter of the next request to this interface
-
-```json
-{
-  "method": "/api/rest_j/v1/entrance/${execID}/log",
-  "status": 0,
-  "message": "Return log information",
-  "data": {
-    "execID": "${execID}",
-  "log": ["error log","warn log","info log", "all log"],
-  "fromLine": 56
-  }
-}
-```
-
-### 4). Get progress
-
-- Interface `/api/rest_j/v1/entrance/${execID}/progress`
-
-- Submission method `GET`
-
-- Example response
-
-```json
-{
-  "method": "/api/rest_j/v1/entrance/{execID}/progress",
-  "status": 0,
-  "message": "Return progress information",
-  "data": {
-    "execID": "${execID}",
-    "progress": 0.2,
-    "progressInfo": [
-        {
-        "id": "job-1",
-        "succeedTasks": 2,
-        "failedTasks": 0,
-        "runningTasks": 5,
-        "totalTasks": 10
-        },
-        {
-        "id": "job-2",
-        "succeedTasks": 5,
-        "failedTasks": 0,
-        "runningTasks": 5,
-        "totalTasks": 10
-        }
-    ]
-  }
-}
-```
-
-### 5). Kill task
-
-- Interface `/api/rest_j/v1/entrance/${execID}/kill`
-
-- Submission method `POST`
-
-```json
-{
- "method": "/api/rest_j/v1/entrance/{execID}/kill",
- "status": 0,
- "message": "OK",
- "data": {
-   "execID":"${execID}"
-  }
-}
-```
\ No newline at end of file
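
For reference, a minimal Java sketch of the submit-then-poll flow described above, using only the standard library. The endpoint paths and payloads come from the deleted document; the gateway address, the session cookie value and the way execID is extracted are illustrative assumptions (a real client would parse the JSON "data" field and poll until a terminal status):

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;

public class LinkisEntranceExample {
    private static final String GATEWAY = "http://127.0.0.1:9001"; // assumed gateway address
    private static final String COOKIE = "...";                    // session cookie from the login API (placeholder)

    public static void main(String[] args) throws IOException {
        // 1. Submit: POST /api/rest_j/v1/entrance/submit with the documented payload
        String payload = "{\"executionContent\": {\"code\": \"show tables\", \"runType\": \"sql\"},"
                + "\"params\": {\"variable\": {}, \"configuration\": {}},"
                + "\"labels\": {\"engineType\": \"spark-2.4.3\", \"userCreator\": \"hadoop-IDE\"}}";
        System.out.println("submit -> " + call("POST", "/api/rest_j/v1/entrance/submit", payload));

        // 2. Poll: GET /api/rest_j/v1/entrance/${execID}/status; execID is left as a placeholder here
        String execID = "${execID}";
        System.out.println("status -> " + call("GET", "/api/rest_j/v1/entrance/" + execID + "/status", null));
    }

    private static String call(String method, String path, String body) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(GATEWAY + path).openConnection();
        conn.setRequestMethod(method);
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setRequestProperty("Cookie", COOKIE);
        if (body != null) {
            conn.setDoOutput(true);
            try (OutputStream os = conn.getOutputStream()) {
                os.write(body.getBytes(StandardCharsets.UTF_8));
            }
        }
        // Read the whole response body as a single string
        try (Scanner sc = new Scanner(conn.getInputStream(), StandardCharsets.UTF_8.name())) {
            return sc.useDelimiter("\\A").hasNext() ? sc.next() : "";
        }
    }
}
```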
diff --git a/Linkis-Doc-master/en_US/API_Documentations/Login_API.md b/Linkis-Doc-master/en_US/API_Documentations/Login_API.md
deleted file mode 100644
index be7e504..0000000
--- a/Linkis-Doc-master/en_US/API_Documentations/Login_API.md
+++ /dev/null
@@ -1,125 +0,0 @@
-# Login Document
-## 1. Integrating with LDAP Service
-
-Enter the /conf/linkis-spring-cloud-services/linkis-mg-gateway directory and execute the command:  
-```bash
-    vim linkis-server.properties
-```    
-
-Add LDAP related configuration:  
-```bash
-wds.linkis.ldap.proxy.url=ldap://127.0.0.1:389/ #LDAP service URL
-wds.linkis.ldap.proxy.baseDN=dc=webank,dc=com #Base DN of the LDAP service
-```    
-
-## 2. How to Enable Test Mode for Login-Free Access
-
-Enter the /conf/linkis-spring-cloud-services/linkis-mg-gateway directory and execute the command:
-```bash
-    vim linkis-server.properties
-```
-    
-    
-Turn on test mode with the following parameters:
-```bash
-    wds.linkis.test.mode=true   # Enable test mode
-    wds.linkis.test.user=hadoop  # The user that all requests are delegated to in test mode
-```
-
-## 3. Login Interface Summary
-We provide the following login-related interfaces:
- - [Log In](#1LoginIn)
-
- - [Log Out](#2LoginOut)
-
- - [Heartbeat](#3HeartBeat)
- 
-
-## 4. Interface details
-
-- The Linkis Restful interface returns results in the following standard format:
-
-```json
-{
- "method": "",
- "status": 0,
- "message": "",
- "data": {}
-}
-```
-
-**Protocol**:
-
-- method: returns the requested Restful API URI; mainly used in WebSocket mode.
-- status: returns the status information, where -1 means not logged in, 0 means success, 1 means error, 2 means verification failed, and 3 means no access to the interface.
-- data: returns the specific data.
-- message: returns the prompt message for the request. If status is not 0, message is an error message, and data may contain a stack field with the specific stack information.
- 
-For more information about the Linkis Restful interface specification, please refer to: [Linkis Restful Interface Specification](https://github.com/WeBankFinTech/Linkis-Doc/blob/master/en_US/Development_Documents/Development_Specification/API.md)
-
-### 1). Login In
-
-- Interface `/api/rest_j/v1/user/login`
-
-- Submission method `POST`
-
-```json
-      {
-        "userName": "",
-        "password": ""
-      }
-```
-
-- Response example
-
-```json
-    {
-        "method": null,
-        "status": 0,
-        "message": "login successful(登录成功)!",
-        "data": {
-            "isAdmin": false,
-            "userName": ""
-        }
-     }
-```
-
-Where:
-
-- isAdmin: Linkis has only admin and non-admin users. The only privilege of an admin user is to view the historical tasks of all users in the Linkis management console.
-
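-A minimal Java sketch of calling this login interface with `java.net.http` and capturing the session cookie for subsequent requests; the gateway address and credentials below are placeholders, not real defaults:
-
-```java
-import java.net.URI;
-import java.net.http.HttpClient;
-import java.net.http.HttpRequest;
-import java.net.http.HttpResponse;
-
-public class LoginSketch {
-    public static void main(String[] args) throws Exception {
-        HttpClient client = HttpClient.newHttpClient();
-        // Placeholder credentials; substitute your own user
-        String body = "{\"userName\": \"hadoop\", \"password\": \"<password>\"}";
-        HttpRequest request = HttpRequest.newBuilder()
-                .uri(URI.create("http://127.0.0.1:9001/api/rest_j/v1/user/login"))
-                .header("Content-Type", "application/json")
-                .POST(HttpRequest.BodyPublishers.ofString(body))
-                .build();
-        HttpResponse<String> response =
-                client.send(request, HttpResponse.BodyHandlers.ofString());
-        // status == 0 in the response body means the login succeeded
-        System.out.println(response.body());
-        // The session cookie in Set-Cookie must be attached to subsequent requests
-        response.headers().allValues("Set-Cookie").forEach(System.out::println);
-    }
-}
-```
-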
-### 2). Login Out
-
-- Interface `/api/rest_j/v1/user/logout`
-
-- Submission method `POST`
-
-  No parameters
-
-- Response example
-
-```json
-    {
-        "method": "/api/rest_j/v1/user/logout",
-        "status": 0,
-        "message": "退出登录成功!"
-    }
-```
-
-### 3). Heart Beat
-
-- Interface `/api/rest_j/v1/user/heartbeat`
-
-- Submission method `POST`
-
-  No parameters
-
-- Response example
-
-```json
-    {
-         "method": "/api/rest_j/v1/user/heartbeat",
-         "status": 0,
-         "message": "维系心跳成功!"
-    }
-```
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/API_Documentations/README.md b/Linkis-Doc-master/en_US/API_Documentations/README.md
deleted file mode 100644
index 387b794..0000000
--- a/Linkis-Doc-master/en_US/API_Documentations/README.md
+++ /dev/null
@@ -1,8 +0,0 @@
-## 1. Document description
-Linkis1.0 has been refactored and optimized on the basis of Linkis0.x, and it remains compatible with the 0.x interface. However, to avoid compatibility problems when using version 1.0, please read the following documents carefully:
-
-1. When using Linkis1.0 for customized development, you need to use Linkis's authentication and authorization interface. Please read the [Login API Document](Login_API.md) carefully.
-
-2. Linkis1.0 provides a JDBC interface. If you need to access Linkis via JDBC, please read the [Task Submit and Execute JDBC API Document](JDBC_API.md).
-
-3. Linkis1.0 provides a Rest interface. If you need to develop upper-level applications on the basis of Linkis, please read the [Task Submit and Execute Rest API Document](Linkis_task_submission_and_execution_RestAPI_document.md).
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/EngineConn/README.md b/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/EngineConn/README.md
deleted file mode 100644
index d600a5f..0000000
--- a/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/EngineConn/README.md
+++ /dev/null
@@ -1,99 +0,0 @@
-EngineConn architecture design
-==================
-
-EngineConn: Engine connector, the module that connects Linkis to the underlying computing and storage engines. It creates and holds the session between Linkis and a specific engine cluster, and acts as the client through which tasks are submitted to the engine.
-
-EngineConn architecture diagram
-
-![EngineConn](../../../Images/Architecture/EngineConn/engineconn-01.png)
-
-Introduction to the second-level module:
-==============
-
-linkis-computation-engineconn interactive engine connector
----------------------------------------------
-
-Provides the ability to execute interactive computing tasks.
-
-| Core class               | Core function                                                   |
-|----------------------|------------------------------------------------------------|
-| EngineConnTask       | Defines the interactive computing tasks submitted to EngineConn                     |
-| ComputationExecutor  | Defines the interactive Executor, with capabilities such as status query and task kill. |
-| TaskExecutionService | Provides management functions for interactive computing tasks                             |
-
-linkis-engineconn-common engine connector common module
---------------------------------------------
-
-Defines the most basic entity classes and interfaces of the engine connector. EngineConn creates a connection session for the underlying computing and storage engine; it holds the session information between the engine and the specific cluster, and acts as the client that communicates with the specific engine.
-
-| Core Service           | Core function                                                             |
-|-----------------------|----------------------------------------------------------------------|
-| EngineCreationContext | Contains the context information of EngineConn during startup                               |
-| EngineConn            | Contains the specific information of EngineConn, such as type, specific connection information with layer computing storage engine, etc. |
-| EngineExecution       | Provides Executor creation logic                                               |
-| EngineConnHook        | Defines the operations before and after each phase of engine startup                                       |
-
-The core logic of linkis-engineconn-core engine connector
-------------------------------------------
-
-Defines the interfaces involved in the core logic of EngineConn.
-
-| Core class            | Core function                           |
-|-------------------|------------------------------------|
-| EngineConnManager | Provide related interfaces for creating and obtaining EngineConn |
-| ExecutorManager   | Provide related interfaces for creating and obtaining Executor   |
-| ShutdownHook      | Define the operation of the engine shutdown phase             |
-
-linkis-engineconn-launch engine connector startup module
-------------------------------------------
-
-Defines the logic of how to start EngineConn.
-
-| Core class           | core function                 |
-|------------------|--------------------------|
-| EngineConnServer | EngineConn microservice startup class |
-
-The core logic of the linkis-executor-core executor
-------------------------------------
-
->   Defines the core classes related to the executor. The executor is the actual execution unit in a computing scenario, responsible for executing the user code submitted to EngineConn.
-
-| Core class                 | Core function                                                   |
-|----------------------------|------------------------------------------------------------|
-| Executor | It is the actual computational logic execution unit and provides a top-level abstraction of the various capabilities of the engine. |
-| EngineConnAsyncEvent | Defines EngineConn-related asynchronous events |
-| EngineConnSyncEvent | Defines EngineConn-related synchronization events |
-| EngineConnAsyncListener | Defines EngineConn related asynchronous event listener |
-| EngineConnSyncListener | Defines EngineConn related synchronization event listener |
-| EngineConnAsyncListenerBus | Defines the listener bus for EngineConn asynchronous events |
-| EngineConnSyncListenerBus | Defines the listener bus for EngineConn synchronization events |
-| ExecutorListenerBusContext | Defines the context of the EngineConn event listener |
-| LabelService | Provide label reporting function |
-| ManagerService | Provides the function of information transfer with LinkisManager |
-
-linkis-callback-service callback logic
--------------------------------
-
-| Core Class         | Core Function |
-|--------------------|--------------------------|
-| EngineConnCallback | Define EngineConn's callback logic |
-
-linkis-accessible-executor accessible executor module
---------------------------------------------
-
-An Executor that can be accessed externally. Other services can interact with it through RPC requests to obtain its status, load, concurrency, and other basic metrics.
-
-
-| Core Class               | Core Function                                   |
-|--------------------------|-------------------------------------------------|
-| LogCache | Provide log cache function |
-| AccessibleExecutor | The Executor that can be accessed can interact with it through RPC requests. |
-| NodeHealthyInfoManager | Manage Executor's Health Information |
-| NodeHeartbeatMsgManager | Manage the heartbeat information of Executor |
-| NodeOverLoadInfoManager | Manage Executor load information |
-| Listener | Provides events related to Executor and the corresponding listener definition |
-| EngineConnTimedLock | Define Executor level lock |
-| AccessibleService | Provides the start-stop and status acquisition functions of Executor |
-| ExecutorHeartbeatService | Provides heartbeat related functions of Executor |
-| LockService | Provide lock management function |
-| LogService | Provide log management functions |
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/EngineConnManager/Images/ECM-01.png b/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/EngineConnManager/Images/ECM-01.png
deleted file mode 100644
index cc83842..0000000
Binary files a/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/EngineConnManager/Images/ECM-01.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/EngineConnManager/Images/ECM-02.png b/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/EngineConnManager/Images/ECM-02.png
deleted file mode 100644
index 303f37a..0000000
Binary files a/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/EngineConnManager/Images/ECM-02.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/EngineConnManager/README.md b/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/EngineConnManager/README.md
deleted file mode 100644
index 45ded41..0000000
--- a/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/EngineConnManager/README.md
+++ /dev/null
@@ -1,45 +0,0 @@
-EngineConnManager architecture design
--------------------------
-
-EngineConnManager (ECM): EngineConn's manager, provides engine lifecycle management, and reports load information and its own health status to RM.
-###  ECM architecture
-
-![](Images/ECM-01.png)
-
-###  Introduction to the second-level module
-
-**Linkis-engineconn-linux-launch**
-
-The engine launcher, whose core class is LinuxProcessEngineConnLaunch, provides the command instructions for launching an engine process.
-
-**Linkis-engineconn-manager-core**
-
-The core module of ECM. It includes the top-level interfaces for the ECM health report and EngineConn health report functions, defines the relevant metrics of the ECM service, and provides the core methods for constructing an EngineConn process.
-
-| Core top-level interface/class     | Core function                                                            |
-|------------------------------------|--------------------------------------------------------------------------|
-| EngineConn                         | Defines the properties of EngineConn, including methods and parameters   |
-| EngineConnLaunch                   | Define the start method and stop method of EngineConn                    |
-| ECMEvent                           | ECM related events are defined                                           |
-| ECMEventListener                   | Defined ECM related event listeners                                      |
-| ECMEventListenerBus                | Defines the listener bus of ECM                                          |
-| ECMMetrics                         | Defines the indicator information of ECM                                 |
-| ECMHealthReport                    | Defines the health report information of ECM                             |
-| NodeHealthReport                   | Defines the health report information of the node                        |
-
-**Linkis-engineconn-manager-server**
-
-The server side of ECM. It defines top-level interfaces and implementation classes such as the ECM health information processing service, ECM metrics processing service, ECM registration service, EngineConn start service, EngineConn stop service, and EngineConn callback service, which are mainly used for the life cycle management of ECM itself and EngineConn, health information reporting, heartbeat sending, etc.
-The core services and features are as follows:
-
-| Core service                    | Core function                                        |
-|---------------------------------|-------------------------------------------------|
-| EngineConnLaunchService         | Contains core methods for generating EngineConn and starting the process          |
-| BmlResourceLocallizationService | Used to download BML engine related resources and generate localized file directory |
-| ECMHealthService                | Regularly reports its own health heartbeat to AM                      |
-| ECMMetricsService               | Regularly reports its own metrics to AM                      |
-| EngineConnKillSerivce           | Provides functions for stopping engines                          |
-| EngineConnListService           | Provides engine caching and management functions                    |
-| EngineConnCallBackService       | Provides the engine callback function                              |
-
-
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/EngineConnPlugin/README.md b/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/EngineConnPlugin/README.md
deleted file mode 100644
index dc82f80..0000000
--- a/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/EngineConnPlugin/README.md
+++ /dev/null
@@ -1,68 +0,0 @@
-EngineConnPlugin (ECP) architecture design
-===============================
-
-The engine connector plug-in is an implementation that can dynamically load engine connectors and reduce the occurrence of version conflicts. It has the characteristics of convenient extension, fast refresh, and selective loading. In order to allow developers to freely extend Linkis's engines and dynamically load engine dependencies to avoid version conflicts, the EngineConnPlugin was designed and developed, allowing new engines to be introduced into the execution life cycle of [...]
-The plug-in interface disassembles the definition of the engine, including parameter initialization, allocation of engine resources, construction of engine connections, and setting of engine default tags.
-
-1. ECP architecture diagram
-
-![](../../../Images/Architecture/linkis-engineConnPlugin-01.png)
-
-Introduction to the second-level module:
-==============
-
-EngineConn-Plugin-Server
-------------------------
-
-The engine connector plug-in server is an entrance service that provides plug-in registration, plug-in management, and plug-in resource construction to the outside. An engine plug-in that is successfully registered and loaded contains the logic of resource allocation and startup parameter configuration. During engine initialization, other services such as EngineConnManager call the logic of the corresponding plug-in in the Plugin Server through RPC requests.
-
-| Core Class                           | Core Function                              |
-|----------------------------------|---------------------------------------|
-| EngineConnLaunchService          | Responsible for building the engine connector launch request            |
-| EngineConnResourceFactoryService | Responsible for generating engine resources                      |
-| EngineConnResourceService        | Responsible for downloading the resource files used by the engine connector from BML |
-
-
-EngineConn-Plugin-Loader Engine Connector Plugin Loader
----------------------------------------
-
-The engine connector plug-in loader is a loader used to dynamically load engine connector plug-ins according to request parameters, with caching. The specific loading process consists of two parts: 1) plug-in resources such as the main program package and dependency packages are loaded locally (not open); 2) plug-in resources are dynamically loaded from the local environment into the service process, for example, loaded into the JVM virtual [...]
-
-| Core Class                          | Core Function                                     |
-|---------------------------------|----------------------------------------------|
-| EngineConnPluginsResourceLoader | Load engine connector plug-in resources                       |
-| EngineConnPluginsLoader         | Load the engine connector plug-in instance, or load an existing one from the cache |
-| EngineConnPluginClassLoader     | Dynamically instantiate engine connector instance from jar              |
-
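-The self-contained Java sketch below illustrates the general technique behind such a loader: loading a class from a jar at runtime with a dedicated `URLClassLoader` and instantiating it. The jar path and class name are hypothetical, and the real EngineConnPluginClassLoader adds isolation and caching on top of this idea:
-
-```java
-import java.net.URL;
-import java.net.URLClassLoader;
-import java.nio.file.Paths;
-
-public class PluginClassLoaderSketch {
-    public static void main(String[] args) throws Exception {
-        // Hypothetical plug-in jar and entry class name
-        URL pluginJar = Paths.get("/tmp/engineplugin-demo.jar").toUri().toURL();
-        // A dedicated class loader isolates plug-in dependencies from the host service,
-        // which is how version conflicts are reduced
-        try (URLClassLoader loader = new URLClassLoader(
-                new URL[]{pluginJar}, PluginClassLoaderSketch.class.getClassLoader())) {
-            Class<?> pluginClass = loader.loadClass("com.example.DemoEngineConnPlugin");
-            Object plugin = pluginClass.getDeclaredConstructor().newInstance();
-            System.out.println("Loaded plug-in instance: " + plugin);
-        }
-    }
-}
-```
-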
-EngineConn-Plugin-Cache engine plug-in cache module
-----------------------------------------
-
-The engine connector plug-in cache is a cache service specially used to cache loaded engine connectors, supporting read, update, and remove operations. A plug-in that has been loaded into the service process is cached together with its class loader to prevent repeated loading from affecting efficiency. At the same time, the cache module periodically notifies the loader to update the plug-in resources; if changes are found, the plug-in is reloaded and the cache is refreshed automatically.
-
-| Core Class                      | Core Function                     |
-|-----------------------------|------------------------------|
-| EngineConnPluginCache       | Cache loaded engine connector instance |
-| RefreshPluginCacheContainer | Engine connector that refreshes the cache regularly     |
-
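-As an illustration of this caching pattern (not the actual Linkis implementation), the sketch below caches loaded plug-in instances in a `ConcurrentHashMap` and refreshes them periodically with a `ScheduledExecutorService`; the refresh interval is an arbitrary assumption:
-
-```java
-import java.util.Map;
-import java.util.concurrent.ConcurrentHashMap;
-import java.util.concurrent.Executors;
-import java.util.concurrent.ScheduledExecutorService;
-import java.util.concurrent.TimeUnit;
-import java.util.function.Function;
-
-public class PluginCacheSketch {
-    private final Map<String, Object> cache = new ConcurrentHashMap<>();
-    private final Function<String, Object> loader; // reloads a plug-in by name
-
-    public PluginCacheSketch(Function<String, Object> loader) {
-        this.loader = loader;
-        // Periodically ask the loader to refresh every cached plug-in
-        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
-        scheduler.scheduleAtFixedRate(
-                () -> cache.replaceAll((name, old) -> this.loader.apply(name)),
-                10, 10, TimeUnit.MINUTES);
-    }
-
-    public Object get(String name) {
-        // Load on first access, then serve from the cache
-        return cache.computeIfAbsent(name, loader);
-    }
-}
-```
-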
-EngineConn-Plugin-Core: Engine connector plug-in core module
----------------------------------------------
-
-The engine connector plug-in core module is the core module of the engine connector plug-in. It contains the implementation of the basic functions of the engine plug-in, such as the construction of the engine connector start command, the construction of the engine resource factory, and the implementation of the core interfaces of the engine connector plug-in.
-
-| Core Class                  | Core Function                                                 |
-|-------------------------|----------------------------------------------------------|
-| EngineConnLaunchBuilder | Build Engine Connector Launch Request                                   |
-| EngineConnFactory       | Create Engine Connector                                           |
-| EngineConnPlugin        | The engine connector plug-in implements the interface, including resources, commands, and instance construction methods. |
-| EngineResourceFactory   | Engine Resource Creation Factory                                       |
-
-EngineConn-Plugins: Engine connection plugin collection
------------------------------------
-
-The engine connector plug-in collection holds the default engine connector plug-ins implemented against the plug-in interface defined above. It provides default engine connector implementations such as jdbc, spark, python, and shell. Users can refer to these implemented cases and implement more engine connectors based on their own needs.
-
-| Core Class              | Core Function         |
-|---------------------|------------------|
-| engineplugin-jdbc   | jdbc engine connector   |
-| engineplugin-shell  | Shell engine connector  |
-| engineplugin-spark  | spark engine connector  |
-| engineplugin-python | python engine connector |
-
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/LinkisManager/AppManager.md b/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/LinkisManager/AppManager.md
deleted file mode 100644
index dd69274..0000000
--- a/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/LinkisManager/AppManager.md
+++ /dev/null
@@ -1,33 +0,0 @@
-## 1. Background
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;The Entrance module of the old version of Linkis carried too many responsibilities: its engine management capability was weak and hard to extend. The AppManager module was therefore newly extracted to take over the following responsibilities:  
-1. Add the AM module to move the engine management function previously done by Entrance to the AM module.
-2. AM needs to support operating Engine, including: adding, multiplexing, recycling, preheating, switching and other functions.
-3. Need to connect to the Manager module to provide Engine management functions: including Engine status maintenance, engine list maintenance, engine information, etc.
-4. AM needs to manage EM services, complete EM registration and forward the resource registration to RM.
-5. AM needs to be connected to the Label module, including the addition and deletion of EM/Engine, the label manager needs to be notified to update the label.
-6. AM also needs to connect to the Label module for label analysis, and obtain a list of serverInstances with scores through a series of labels (how to distinguish between EM and Engine? Their labels are completely different).
-7. Need to provide external basic interface: including the addition, deletion and modification of engine and engine manager, metric query, etc.  
-## 2. Architecture diagram
-![AppManager03](./../../../../zh_CN/Images/Architecture/AppManager-03.png)  
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;As shown in the figure above: AM belongs to the AppManager module in LinkisMaster and provides services.  
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;New engine application flow chart:  
-![AppManager02](./../../../../zh_CN/Images/Architecture/AppManager-02.png)  
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;From the above engine life cycle flow chart, it can be seen that Entrance is no longer doing the management of the Engine, and the startup and management of the engine are controlled by AM.  
-## 3. Architecture description
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;AppManager mainly includes engine service and EM service:
-Engine service includes all operations related to EngineConn, such as engine creation, engine reuse, engine switching, engine recycling, engine stopping, engine destruction, etc.
-EM service is responsible for the information management of all EngineConnManagers and can manage online ECM services, including label modification, suspending ECM service, obtaining ECM instance information, obtaining the engines an ECM is running, and killing ECM operations. It can also query all EngineNodes according to EM Node information, supports searching by user, and saves EM Node load information, node health information, resource usage information, etc.
-The new EngineConnManager and EngineConn both support tag management, and the types of engines have also added offline, streaming, and interactive support.  
-
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Engine creation: specifically responsible for the new engine function of the LinkisManager service. The engine startup module is fully responsible for the creation of a new engine, including obtaining ECM tag collections, resource requests, obtaining engine startup commands, notifying ECM to create new engines, updating engine lists, etc.
-CreateEngineRequest -> RPC/Rest -> MasterEventHandler -> CreateEngineService ->
--> LabelContext/EnginePlugin/RMResourceService -> (RecycleEngineService) EngineNodeManager -> EMNodeManager -> sender.ask(EngineLaunchRequest) -> EngineManager service -> EngineNodeManager -> EngineLocker -> Engine -> EngineNodeManager -> EngineFactory => EngineService => ServerInstance
-When creating an engine, in the part that interacts with RM, EnginePlugin should return the specific resource type through labels, and then AM sends the resource request to RM.
-
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Engine reuse: To reduce the time and resources consumed by engine startup, existing engines are used preferentially. Reuse generally refers to reusing engines that the user has already created. The engine reuse module is responsible for providing the collection of reusable engines, electing and locking one of them to start using it, or returning that no engine can be reused.
-ReuseEngineRequest -> RPC/Rest -> MasterEventHandler -> ReuseEngineService ->
--> LabelContext -> EngineNodeManager -> EngineSelector -> EngineLocker -> Engine -> EngineNodeManager -> EngineReuser -> EngineService => ServerInstance
-
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Engine switching: This mainly refers to label switching of existing engines. For example, an engine created by Creator1 can be changed to Creator2 by engine switching, allowing the current engine to receive tasks with the tag Creator2.
-SwitchEngineRequest -> RPC/Rest -> MasterEventHandler -> SwitchEngineService -> LabelContext/EnginePlugin/RMResourceService -> EngineNodeManager -> EngineLocker -> Engine -> EngineNodeManager -> EngineReuser -> EngineService => ServerInstance.  
-Engine manager: responsible for managing the basic information and metadata of all engines.
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/LinkisManager/LabelManager.md b/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/LinkisManager/LabelManager.md
deleted file mode 100644
index d8fa39c..0000000
--- a/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/LinkisManager/LabelManager.md
+++ /dev/null
@@ -1,38 +0,0 @@
-## LabelManager architecture design
-
-#### Brief description
-LabelManager is a functional module in Linkis that provides label services to upper-level applications. It uses label technology to manage cluster resource allocation, service node election, user permission matching, and gateway routing and forwarding. It includes generalized parsing and processing tools that support various custom labels, and a universal label matching scorer.
-### Overall architecture schematic
-
-![Overall architecture diagram](../../../Images/Architecture/LabelManager/label_manager_global.png)  
-
-#### Architecture description
-- LabelBuilder: Responsible for label parsing. It can parse the input label type, keyword, or character value to obtain a specific label entity. A default generic implementation class is provided, and custom extensions are supported.
-- LabelEntities: Refers to a collection of label entities, including cluster labels, configuration labels, engine labels, node labels, routing labels, search labels, etc.
-- NodeLabelService: The associated service interface class of instance/node and label, which defines the interface method of adding, deleting, modifying and checking the relationship between the two and matching the instance/node according to the label.
-- UserLabelService: Declare the associated operation between the user and the label.
-- ResourceLabelService: Declare the associated operations of cluster resources and labels, involving resource management of combined labels, cleaning or setting the resource value associated with the label.
-- NodeLabelScorer: Node label scorer, corresponding to the implementation of different label matching algorithms, using scores to indicate node label matching.
-
-### 1. LabelBuilder parsing process
-Take the generic label analysis class GenericLabelBuilder as an example to clarify the overall process:
-The process of label parsing/construction includes several steps:
-1. According to the input, select the appropriate label class to be parsed.
-2. According to the definition information of the tag class, recursively analyze the generic structure to obtain the specific tag value type.
-3. Convert the input value object to the label value type, using implicit conversion or the forward/reverse parsing framework.
-4. Based on the results of steps 1 to 3, instantiate the label, and perform post-processing operations according to the specific label class.
-
-### 2. NodeLabelScorer scoring process
-In order to select a suitable engine node based on the label list attached to a Linkis user's execution request, the matching engine list needs to be ranked; this is quantified as the label matching degree of each engine node, that is, its score.
-In the label definition, each label has a feature, namely CORE, SUITABLE, PRIORITIZED, or OPTIONAL, and each feature has a boost value, which acts as a weight or incentive value.
-At the same time, some features such as CORE and SUITABLE must be unique, that is, strong filtering is applied during matching, and a node can only be associated with one CORE/SUITABLE label.
-According to the relationship between existing tags, nodes, and request attached tags, the following schematic diagram can be drawn:
-![Label scoring](../../../Images/Architecture/LabelManager/label_manager_scorer.png)  
-
-The built-in default scoring logic generally includes the following steps (a simplified sketch follows this list):
-1. The input of the method is two relationship lists, namely `Label -> Node` and `Node -> Label`, where each Node in the `Node -> Label` relationship must have all the CORE and SUITABLE feature labels; these nodes are called candidate nodes.
-2. The first step traverses the `Node -> Label` relationship list and scores each label associated with each node. If the label is not attached to the request, its score is 0.
-Otherwise, its score is (base score / number of occurrences of the label's feature in the request) * the boost value of that feature, where the base score defaults to 1. The initial score of the node is the sum of its associated label scores. Since a CORE/SUITABLE label must be unique, its number of occurrences is always 1.
-3. After obtaining the initial score of the node, the second step traverses the `Label -> Node` relationship. Since the first step ignores the effect of labels not attached to the request, the proportion of such irrelevant labels should still affect the score. These labels are uniformly given the UNKNOWN feature, which also has a corresponding boost value.
-The higher the proportion of a candidate node's associated nodes that come from irrelevant labels, the more significant the impact on the score, which is further accumulated onto the initial score obtained in the first step.
-4. Normalize the scores of the candidate nodes by their standard deviation and sort them.
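-
-The simplified, self-contained Java sketch below illustrates the first scoring pass (step 2) under assumed boost values; it is not the actual NodeLabelScorer code:
-
-```java
-import java.util.List;
-import java.util.Map;
-
-public class LabelScoreSketch {
-    // Assumed boost values per feature; real values are implementation-defined
-    static final Map<String, Double> BOOST =
-            Map.of("CORE", 2.0, "SUITABLE", 1.5, "PRIORITIZED", 1.2, "OPTIONAL", 1.0);
-
-    /**
-     * Initial node score: for each node label that is also attached to the request,
-     * add (baseScore / occurrences of the label's feature in the request) * boost.
-     */
-    static double initialScore(Map<String, String> nodeLabels,        // label -> feature
-                               Map<String, Long> requestFeatureCount, // feature -> occurrences
-                               List<String> requestLabels) {
-        double baseScore = 1.0;
-        double score = 0.0;
-        for (Map.Entry<String, String> e : nodeLabels.entrySet()) {
-            if (!requestLabels.contains(e.getKey())) continue; // unattached label scores 0
-            String feature = e.getValue();
-            long occurrences = requestFeatureCount.getOrDefault(feature, 1L); // CORE/SUITABLE: always 1
-            score += (baseScore / occurrences) * BOOST.getOrDefault(feature, 1.0);
-        }
-        return score;
-    }
-
-    public static void main(String[] args) {
-        double s = initialScore(
-                Map.of("engineType=spark", "CORE", "creator=IDE", "SUITABLE"),
-                Map.of("CORE", 1L, "SUITABLE", 1L),
-                List.of("engineType=spark", "creator=IDE"));
-        System.out.println("initial node score = " + s); // 2.0 + 1.5 = 3.5
-    }
-}
-```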
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/LinkisManager/README.md b/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/LinkisManager/README.md
deleted file mode 100644
index d13e6b1..0000000
--- a/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/LinkisManager/README.md
+++ /dev/null
@@ -1,41 +0,0 @@
-LinkisManager Architecture Design
-====================
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;As an independent microservice of Linkis, LinkisManager provides AppManager (application management), ResourceManager (resource management), and LabelManager (label management) capabilities. It can support multi-active deployment and has the characteristics of high availability and easy expansion.  
-## 1. Architecture Diagram
-![Architecture Diagram](./../../../../zh_CN/Images/Architecture/LinkisManager/LinkisManager-01.png)  
-### Noun explanation
-- EngineConnManager (ECM): Engine Manager, used to start and manage engines.
-- EngineConn (EC): Engine connector, used to connect the underlying computing engine.
-- ResourceManager (RM): Resource Manager, used to manage node resources.
-## 2. Introduction to the second-level module
-### 2.1. Application management module linkis-application-manager
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;AppManager is used for unified scheduling and management of engines:  
-
-| Core Interface/Class | Main Function |
-|------------|--------|
-|EMInfoService | Defines EngineConnManager information query and modification functions |
-|EMRegisterService| Defines EngineConnManager registration function |
-|EMEngineService | Defines EngineConnManager's creation, query, and closing functions of EngineConn |
-|EngineAskEngineService | Defines the function of querying EngineConn |
-|EngineConnStatusCallbackService | Defines the function of processing EngineConn status callbacks |
-|EngineCreateService | Defines the function of creating EngineConn |
-|EngineInfoService | Defines EngineConn query function |
-|EngineKillService | Defines the stop function of EngineConn |
-|EngineRecycleService | Defines the recycling function of EngineConn |
-|EngineReuseService | Defines the reuse function of EngineConn |
-|EngineStopService | Defines the self-destruct function of EngineConn |
-|EngineSwitchService | Defines the engine switching function |
-|AMHeartbeatService | Provides EngineConnManager and EngineConn node heartbeat processing functions |
-
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;The process of applying for an engine through AppManager is as follows:  
-![AppManager](./../../../../zh_CN/Images/Architecture/LinkisManager/AppManager-01.png)  
-### 2.2. Label management module linkis-label-manager
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;LabelManager provides label management and analysis capabilities.  
-
-| Core Interface/Class | Main Function |
-|------------|--------|
-|LabelService | Provides the function of adding, deleting, modifying and checking labels |
-|ResourceLabelService | Provides resource label management functions |
-|UserLabelService | Provides user label management functions |  
-
-The LabelManager architecture diagram is as follows:  
-![ResourceManager](./../../../../zh_CN/Images/Architecture/LinkisManager/ResourceManager-01.png)  
-### 2.4. Monitoring module linkis-manager-monitor
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Monitor provides the function of node status monitoring.
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/LinkisManager/ResourceManager.md b/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/LinkisManager/ResourceManager.md
deleted file mode 100644
index cf1b2c9..0000000
--- a/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/LinkisManager/ResourceManager.md
+++ /dev/null
@@ -1,132 +0,0 @@
-## 1. Background
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;ResourceManager (RM for short) is the computing resource management module of Linkis. All EngineConn (EC for short), EngineConnManager (ECM for short), and even external resources including Yarn are managed by RM. RM can manage resources based on users, ECM, or other granularities defined by complex tags.  
-## 2. The role of RM in Linkis
-![01](./../../../../zh_CN/Images/Architecture/rm-01.png)  
-![02](./../../../../zh_CN/Images/Architecture/rm-02.png)  
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;As a part of Linkis Manager, RM mainly functions as follows: it maintains the available resource information reported by ECM, processes the resource applications submitted by ECM, records the actual resource usage reported by EC in real time throughout its life cycle after a successful application, and provides interfaces for querying the current resource usage.  
-In Linkis, other services that interact with RM mainly include:  
-1. EngineConnManager, ECM for short: the microservice that processes requests to start engine connectors. As a resource provider, ECM is responsible for registering and unregistering resources with RM. At the same time, as the manager of engines, ECM is responsible for applying to RM for resources on behalf of the new engine connector that is about to start. For each ECM instance, there is a corresponding resource record in the RM, which contains information such as the total resources a [...]
-![03](./../../../../zh_CN/Images/Architecture/rm-03.png)  
-2. The engine connector, referred to as EC, is the actual execution unit of user operations. At the same time, as the actual user of the resource, the EC is responsible for reporting the actual use of the resource to the RM. Each EC has a corresponding resource record in the RM: during the startup process, it is reflected as a locked resource; during the running process, it is reflected as a used resource; after being terminated, the resource record is subsequently deleted.  
-![04](./../../../../zh_CN/Images/Architecture/rm-04.png)  
-## 3. Resource type and format
-![05](./../../../../zh_CN/Images/Architecture/rm-05.png)  
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;As shown in the figure above, all resource classes implement a top-level Resource interface, which defines the calculation and comparison methods that all resource classes need to support, and overloads the corresponding mathematical operators so that resources can be directly calculated and compared like numbers.  
-| Operator | Correspondence Method | Operator | Correspondence Method |
-|--------|-------------|--------|-------------|
-| \+ | add | \> | moreThan |
-| \- | minus | \< | lessThan |
-| \* | multiply | = | equals |
-| / | divide | \>= | notLessThan |
-| \<= | notMoreThan | | |  
-
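-Since Java has no native operator overloading, each operator above maps to a named method on the Resource interface. The following is a minimal, illustrative sketch of what such a resource type might look like, using a simplified memory-plus-CPU resource; it is not the actual Linkis class:
-
-```java
-public class LoadResourceSketch {
-    final long memory; // bytes
-    final int cores;
-
-    LoadResourceSketch(long memory, int cores) {
-        this.memory = memory;
-        this.cores = cores;
-    }
-
-    // '+' maps to add, '-' to minus, '>=' to notLessThan, and so on
-    LoadResourceSketch add(LoadResourceSketch other) {
-        return new LoadResourceSketch(memory + other.memory, cores + other.cores);
-    }
-
-    LoadResourceSketch minus(LoadResourceSketch other) {
-        return new LoadResourceSketch(memory - other.memory, cores - other.cores);
-    }
-
-    boolean notLessThan(LoadResourceSketch other) {
-        return memory >= other.memory && cores >= other.cores;
-    }
-
-    public static void main(String[] args) {
-        LoadResourceSketch total = new LoadResourceSketch(8L << 30, 8); // 8 GB, 8 cores
-        LoadResourceSketch used  = new LoadResourceSketch(2L << 30, 2); // 2 GB, 2 cores
-        System.out.println(total.minus(used).notLessThan(used)); // true: 6 GB/6 cores >= 2 GB/2 cores
-    }
-}
-```
-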
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;The currently supported resource types are shown in the following table. All resources have corresponding json serialization and deserialization methods, which can be stored in json format and transmitted across the network:  
-
-| Resource Type | Description |
-|-----------------------|--------------------------------------------------------|
-| MemoryResource | Memory Resource |
-| CPUResource | CPU Resource |
-| LoadResource | Both memory and CPU resources |
-| YarnResource | Yarn queue resources (queue, queue memory, queue CPU, number of queue instances) |
-| LoadInstanceResource | Server resources (memory, CPU, number of instances) |
-| DriverAndYarnResource | Driver and executor resources (with server resources and Yarn queue resources at the same time) |
-| SpecialResource | Other custom resources |  
-
-## 4. Available resource management
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;The available resources in the RM mainly come from two sources: the available resources reported by the ECM, and the resource limits configured according to tags in the Configuration module.  
-**ECM resource report**:  
-1. When the ECM is started, it will broadcast the ECM registration message. After receiving the message, the RM will register the resource according to the content contained in the message. The resource-related content includes:
-
-     1. Total resources: the total number of resources that the ECM can provide.
-
-     2. Protect resources: When the remaining resources are less than this resource, no further resources are allowed to be allocated.
-
-     3. Resource type: such as LoadResource, DriverAndYarnResource and other type names.
-
-     4. Instance information: machine name plus port name.
-
-2. After RM receives the resource registration request, it adds a record to the resource table whose content is consistent with the interface parameters, finds the label representing the ECM through the instance information, and adds an association record to the resource-label association table.
-
-3. When the ECM is closed, it broadcasts a message that the ECM is closed. After receiving the message, RM takes the ECM offline according to the instance information in the message, that is, it deletes the resource and association records corresponding to the ECM instance label.  
-
-**Label resource configuration in the Configuration module**:  
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;In the Configuration module, users can configure the number of resources based on different tag combinations, such as limiting the maximum available resources of the User/Creator/EngineType combination.
-
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;The RM queries the Configuration module for resource information through the RPC message, using the combined tag as the query condition, and converts it into a Resource object to participate in subsequent comparison and recording.  
-
-## 5. Resource Usage Management  
-**Receive user's resource application:**  
-1. When LinkisManager receives a request to start EngineConn, it will call RM's resource application interface to apply for resources. The resource application interface accepts an optional time parameter. When the waiting time for applying for a resource exceeds the limit of the time parameter, the resource application will be automatically processed as a failure.  
-**Judging whether there are enough resources:**  
-That is, determine whether the remaining available resources are greater than or equal to the requested resources: if so, the resources are sufficient; otherwise, they are insufficient.
-
-1. RM preprocesses the label information attached to the resource application, filtering, combining, and converting the original labels according to rules (such as combining the User/Creator label and EngineType label), which makes the granularity of the subsequent resource judgment more flexible.
-
-2. Lock each converted label one by one, so that their corresponding resource records remain unchanged during the processing of resource applications.
-
-3. According to each label:
-
-    1. Query the corresponding resource record from the database through the Persistence module. If the record contains the remaining available resources, it is directly used for comparison.
-
-    2. If there is no direct record of remaining available resources, it is calculated by the formula [Remaining Available Resources = Maximum Available Resources - Used Resources - Locked Resources - Protected Resources] (see the sketch after this list).
-
-    3. If there is no maximum available resource record, request the Configuration module to see if there is configured resource information, if so, use the formula for calculation, if not, skip the resource judgment for this tag.
-
-    4. If there is no resource record, skip the resource judgment for this tag.
-
-4. As long as one tag is judged to be insufficient in resources, the resource application will fail, and each tag will be unlocked one by one.
-
-5. Only when all labels are judged to have sufficient resources can the resource application pass and proceed to the next step.  
-
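-A compact, self-contained sketch of the remaining-resource formula and sufficiency check from step 3 above, with purely illustrative numbers:
-
-```java
-public class ResourceCheckSketch {
-    public static void main(String[] args) {
-        // Illustrative amounts for a single label (e.g. memory in GB)
-        long max = 100, used = 40, locked = 20, protectedRes = 10;
-        long requested = 25;
-
-        // Remaining Available = Maximum Available - Used - Locked - Protected
-        long remaining = max - used - locked - protectedRes;
-
-        boolean sufficient = remaining >= requested;
-        System.out.println("remaining=" + remaining + ", sufficient=" + sufficient);
-        // prints: remaining=30, sufficient=true
-    }
-}
-```
-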
-**Lock the resources that have been applied for:**
-
-1. Record the amount of requested resources by generating a new record in the resource table, and associate it with each label.
-
-2. If a label has a corresponding remaining-available-resource record, deduct the corresponding amount from it.
-
-3. Generate a timed task that checks whether these locked resources are actually used after a certain time; if they are still unused after the timeout, they are forcibly recycled.
-
-4. Unlock each label.
-
-**Report the actual resource usage:**
-
-1. After EngineConn starts, it broadcasts a resource usage message. After receiving the message, RM checks whether the label corresponding to the EngineConn has a locked resource record; if not, it reports an error.
-
-2. If there is a locked resource, lock all labels associated with the EngineConn.
-
-3. For each label, convert the locked amount in the corresponding resource record into used resources.
-
-4. Unlock all labels.
-
-**Release the actually used resources:**
-
-1. After the EngineConn life cycle ends, it broadcasts a resource recycling message. After receiving the message, RM checks whether the label corresponding to the EngineConn has used-resource records.
-
-2. If so, lock all labels associated with the EngineConn.
-
-3. Deduct the used amount from the corresponding resource record of each label.
-
-4. If a label has a corresponding remaining-available-resource record, increase it by the corresponding amount.
-
-5. Unlock each label.
-
-## 6. External resource management
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;In RM, in order to classify resources more flexibly and extensibly, support multi-cluster resource management and control, and make it easier to access new external resources, the following considerations were made in the design:
-
-1. Unified management of resources through tags. After the resource is registered, it is associated with the tag, so that the attributes of the resource can be expanded infinitely. At the same time, resource applications are also tagged to achieve flexible matching.
-
-2. Abstract the cluster into one or more tags, and maintain the environmental information corresponding to each cluster tag in the external resource management module to achieve dynamic docking.
-
-3. Abstract a general external resource management module. If you need to access new external resource types, you can convert different types of resource information into Resource entities in the RM as long as you implement a fixed interface to achieve unified management.  
-![06](./../../../../zh_CN/Images/Architecture/rm-06.png)  
-
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Other modules of RM obtain external resource information through the interface provided by ExternalResourceService.
-
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;The ExternalResourceService obtains information about external resources through resource types and tags:
-
-1. The type, label, configuration and other attributes of all external resources (such as cluster name, Yarn web url, Hadoop version and other information) are maintained in the linkis\_external\_resource\_provider table.
-
-2. For each resource type, there is an implementation of the ExternalResourceProviderParser interface, which parses the attributes of external resources, converts the information that can be matched to a Label into the corresponding Label, and converts the information that can be used as a parameter for requesting the resource interface into params. Finally, an ExternalResourceProvider instance that can be used as a basis for querying external resource information is constructed.
-
-3. According to the resource type and label information in the parameters of the ExternalResourceService method, find the matching ExternalResourceProvider, generate an ExternalResourceRequest based on the information in it, and formally call the API provided by the external resource to initiate a resource information request.
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/README.md b/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/README.md
deleted file mode 100644
index 343b7b2..0000000
--- a/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/README.md
+++ /dev/null
@@ -1,40 +0,0 @@
-## Background
-**The architecture of Linkis0.X mainly has the following problems**  
-1. The boundary between the core processing flow and the hierarchical module is blurred:  
-- Entrance and EngineManager function boundaries are blurred.
-
-- The main process of task submission and execution is not clear enough.
-
-- It is troublesome to extend the new engine, and it needs to implement the code of multiple modules.
-
-- Only computing request scenarios are supported; storage request scenarios and the resident service mode (Cluster) are difficult to support.  
-2. Demands for richer and more powerful computing governance functions:  
-- Insufficient support for computing task management strategies.
-
-- The labeling capability is not strong enough, which restricts computing strategies and resource management.  
-
-The new architecture of Linkis1.0 computing governance service can solve these problems well.  
-## Architecture Diagram  
-![linkis Computation Gov](./../../../zh_CN/Images/Architecture/linkis-computation-gov-01.png)  
-**Execution process optimization:** Linkis1.0 optimizes the overall execution process of a job across the three stages of submission -> preparation -> execution, fully upgrading Linkis's Job execution architecture, as shown in the following figure:  
-![](./../../../zh_CN/Images/Architecture/linkis-computation-gov-02.png)  
-## Architecture Description
-### 1. Entrance
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Entrance, as the submission portal for computing tasks, provides task reception, scheduling and job information forwarding capabilities. It is a native capability split from Linkis0.X's Entrance.  
-[Entrance Architecture Design](./Entrance/Entrance.md)  
-### 2. Orchestrator
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Orchestrator, as the entrance to the preparation phase, inherits from Linkis0.X's Entrance the capabilities of parsing Jobs, applying for Engines, and submitting execution. At the same time, Orchestrator provides powerful orchestration and computing strategy capabilities to meet the needs of application scenarios such as multi-active, active-standby, transactions, replay, rate limiting, and heterogeneous and mixed computing.  
-[Enter Orchestrator Architecture Design](../Orchestrator/README.md)  
-### 3. LinkisManager
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;As the management brain of Linkis, LinkisManager is mainly composed of AppManager, ResourceManager, LabelManager and EngineConnPlugin.  
-1. ResourceManager not only has Linkis0.X's resource management capabilities for Yarn and Linkis EngineManager, but also provides tag-based multi-level resource allocation and recycling capabilities, allowing ResourceManager to have full resource management capabilities across clusters and across computing resource types;
-2. AppManager will coordinate and manage all EngineConnManager and EngineConn. The life cycle of EngineConn application, reuse, creation, switching, and destruction will be handed over to AppManager for management; and LabelManager will provide cross-IDC and cross-cluster based on multi-level combined tags EngineConn and EngineConnManager routing and management capabilities;
-3. EngineConnPlugin is mainly used to reduce the access cost of new computing storage, so that users can access a new computing storage engine only by implementing one class.  
- [Enter LinkisManager Architecture Design](./LinkisManager/README.md)  
-### 4. Engine Manager
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;EngineConnManager (ECM) is a simplified and upgraded version of the Linkis0.X EngineManager. The ECM in Linkis1.0 removes the engine application capability, and the whole microservice is completely stateless. It focuses on supporting the startup and destruction of all kinds of EngineConn.  
-[Enter EngineConnManager Architecture Design](./EngineConnManager/README.md)  
- ### 5. EngineConn
- &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;EngineConn is an optimized and upgraded version of the Linkis0.X Engine. It provides two modules, EngineConn and Executor. EngineConn connects the underlying computing and storage engine and provides a session for it; based on this session, Executor provides full-stack computing support for interactive computing, streaming computing, offline computing, and data storage.  
- [Enter EngineConn Architecture Design](./EngineConn/README.md)
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/DifferenceBetween1.0&0.x.md b/Linkis-Doc-master/en_US/Architecture_Documents/DifferenceBetween1.0&0.x.md
deleted file mode 100644
index 0965b0c..0000000
--- a/Linkis-Doc-master/en_US/Architecture_Documents/DifferenceBetween1.0&0.x.md
+++ /dev/null
@@ -1,50 +0,0 @@
-## 1. Brief Description
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;First of all, the Entrance and EngineConnManager (formerly EngineManager) services under the Linkis1.0 architecture are completely unrelated to any specific engine. That is, under the Linkis1.0 architecture, each engine no longer needs to implement and start its own Entrance and EngineConnManager; each Entrance and EngineConnManager in Linkis1.0 can be shared by all engines.  
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Secondly, Linkis1.0 added the Linkis-Manager service to provide external AppManager (application management), ResourceManager (resource management, the original ResourceManager service) and LabelManager (label management) capabilities.  
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Then, in order to reduce the difficulty of implementing and deploying a new engine, Linkis1.0 re-architects a module called EngineConnPlugin. Each new engine only needs to implement the EngineConnPlugin interface. Linkis EngineConnPluginServer supports dynamically loading EngineConnPlugins (new engines) in the form of plug-ins. Once an EngineConnPlugin is successfully loaded, EngineConnManager can quickly start an instance of the engine fo [...]
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Finally, all the microservices of Linkis are summarized and classified, which are generally divided into three major levels: public enhancement services, computing governance services and microservice governance services, from the code hierarchy, microservice naming and installation directory structure, etc. To standardize the microservice system of Linkis1.0.  
-##  2. Main Feature
-1. **Strengthen computing governance**, Linkis 1.0 mainly strengthens the comprehensive management and control capabilities of computing governance from engine management, label management, ECM management, and resource management. It is based on the powerful management and control design concept of labeling. This makes Linkis 1.0 a solid step towards multi-IDC, multi-cluster, and multi-container.  
-2. **Simplified implementation of new engines**: EngineConnPlugin integrates the related interfaces and classes that must be implemented for a new engine, collapsing the former Entrance-EngineManager-Engine three-tier module system into a single interface, simplifying the process and code required to implement a new engine, so that a new engine can be connected by implementing just one class.  
-3. **Full-stack computing storage engine support**, to achieve full coverage support for computing request scenarios (such as Spark), storage request scenarios (such as HBase), and resident cluster services (such as SparkStreaming).  
-4. **Improved advanced computing strategy capability**, add Orchestrator to implement rich computing task management strategies, and support tag-based analysis and orchestration.  
-## 3. Service Comparison
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Please refer to the following two pictures:  
-![Linkis0.X Service List](./../Images/Architecture/Linkis0.X-services-list.png)  
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;The list of Linkis1.0 microservices is as follows:  
-![Linkis1.0 Service List](./../Images/Architecture/Linkis1.0-services-list.png)  
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;As can be seen from the above two figures, Linkis1.0 divides services into three types: Computing Governance (CG)/Microservice Governance (MG)/Public Enhanced Service (PS). Among them:  
-1. A major change in computing governance is that Entrance and EngineConnManager services are no longer related to engines. To implement a new engine, only the EngineConnPlugin plug-in needs to be implemented. EngineConnPluginServer will dynamically load the EngineConnPlugin plug-in to achieve engine hot-plug update;
-2. Another major change in computing governance is that LinkisManager, as the management brain of Linkis, abstracts and defines AppManager (application management), ResourceManager (resource management) and LabelManager (label management);
-3. Microservice management service, merged and unified the Eureka and Gateway services in the 0.X part, and enhanced the functions of the Gateway service to support routing and forwarding according to Label;
-4. Public enhancement services, mainly to optimize and unify the BML services/context services/data source services/public services of the 0.X part, which is convenient for everyone to manage and view.  
-## 4. Introduction To Linkis Manager
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;As the management brain of Linkis, Linkis Manager is mainly composed of AppManager, ResourceManager and LabelManager.  
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;ResourceManager not only retains Linkis 0.X's resource management capabilities for Yarn and the Linkis EngineManager, but also provides label-based multi-level resource allocation and recovery, giving ResourceManager full resource management capabilities across clusters and across computing resource types.  
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;AppManager will coordinate and manage all EngineConnManager and EngineConn, and the life cycle of EngineConn application, reuse, creation, switching, and destruction will be handed over to AppManager for management.  
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;The LabelManager will provide cross-IDC and cross-cluster EngineConn and EngineConnManager routing and management capabilities based on multi-level combined tags.  
-## 5. Introduction To Linkis EngineConnPlugin
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;EngineConnPlugin is mainly used to reduce the cost of integrating and deploying new computing storage engines. It truly enables users to "implement just one class to connect a new computing storage engine; execute just one script to quickly deploy a new engine".  
-### 5.1 New Engine Implementation Comparison
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;The following are the interfaces and classes a user needs to implement in Linkis 0.X to add a new engine:  
-![Linkis0.X How to implement a brand new engine](./../Images/Architecture/Linkis0.X-NewEngine-architecture.png)  
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;In Linkis 1.0.0, these are the interfaces and classes a user needs to implement for a new engine:  
-![Linkis1.0 How to implement a brand new engine](./../Images/Architecture/Linkis1.0-NewEngine-architecture.png)  
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Among them, EngineConnResourceFactory and EngineConnLaunchBuilder are optional interfaces; only EngineConnFactory must be implemented.  
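-
-To make the contrast concrete, here is a minimal Java sketch of the "implement one class" model. All names here (`EngineConnFactory`, `EngineCreationContext`, `DemoEngineConnFactory`) are simplified stand-ins invented for illustration; the real Linkis interfaces carry more methods and richer types.
-
-```java
-import java.util.Map;
-
-// Hypothetical, simplified stand-ins for the Linkis 1.0 plugin interfaces.
-interface EngineConn { Object getSession(); }
-
-class EngineCreationContext {
-    final Map<String, String> options;
-    EngineCreationContext(Map<String, String> options) { this.options = options; }
-}
-
-interface EngineConnFactory {
-    // The single method a new engine must provide: build a connection
-    // (session) to the underlying engine from the creation context.
-    EngineConn createEngineConn(EngineCreationContext context);
-}
-
-// A complete "new engine" under this model: one class.
-class DemoEngineConnFactory implements EngineConnFactory {
-    @Override
-    public EngineConn createEngineConn(EngineCreationContext context) {
-        String url = context.options.getOrDefault("demo.url", "local");
-        Object session = "session-to-" + url;  // stands in for e.g. a SparkSession
-        return () -> session;
-    }
-}
-```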
-### 5.2 New engine startup process
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;EngineConnPlugin provides a Server service to start and load all engine plug-ins. The following shows the whole process by which a new engine startup goes through EngineConnPlugin-Server:  
-![Linkis Engine start process](./../Images/Architecture/Linkis1.0-newEngine-initialization.png)  
-## 6. Introduction To Linkis EngineConn
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;EngineConn, the original Engine module, is the actual unit for Linkis to connect and interact with the underlying computing storage engine, and is the basis for Linkis to provide computing and storage capabilities.  
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;The EngineConn of Linkis 1.0 is mainly composed of EngineConn and Executor. Among them:  
-
-1. EngineConn is the connector, which contains the session information between the engine and the specific cluster. It only acts as a connection, a client, and does not actually perform calculations.  
-
-2. Executor is the executor. As a real computing scene executor, it is the actual computing logic execution unit, and it also abstracts various specific capabilities of the engine, such as providing various services such as locking, access status, and log acquisition.
-
-3. Executor is created by the session information in EngineConn. An engine type can support multiple different types of computing tasks, each corresponding to the implementation of an Executor, and the computing task will be submitted to the corresponding Executor for execution.  In this way, the same engine can provide different services according to different computing scenarios. For example, the permanent engine does not need to be locked after it is started, and the one-time engine d [...]
-
-4. The advantage of separating Executor from EngineConn is that it keeps the Receiver free of business logic, retaining only the RPC communication function. Services are distributed across multiple Executor modules and abstracted into the several categories of engines that may be needed, such as interactive computing engines, streaming engines and one-off engines, building a unified engine framework for later expansion.
-In this way, different types of engines can each load only the capabilities they need, which greatly reduces redundancy in engine implementations.  
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;As shown below:  
-![Linkis EngineConn Architecture diagram](./../Images/Architecture/Linkis1.0-EngineConn-architecture.png)
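-
-To illustrate the separation, the following Java sketch (with invented names) shows an EngineConn that only holds the session, while Executors built from it carry the per-scenario execution logic. It is a conceptual model, not the actual Linkis classes.
-
-```java
-// Hypothetical sketch of the EngineConn/Executor split.
-interface Executor { String execute(String code); }
-
-class DemoEngineConn {
-    private final Object session;                 // stands in for e.g. a SparkSession
-    DemoEngineConn(Object session) { this.session = session; }
-
-    // One engine type can expose several Executor kinds, each built from
-    // the same session but carrying different execution logic.
-    Executor newInteractiveExecutor() {
-        return code -> "ran interactively on " + session + ": " + code;
-    }
-    Executor newOnceExecutor() {
-        return code -> "ran once on " + session + ": " + code;
-    }
-}
-```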
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/How_to_add_an_EngineConn.md b/Linkis-Doc-master/en_US/Architecture_Documents/How_to_add_an_EngineConn.md
deleted file mode 100644
index c28635b..0000000
--- a/Linkis-Doc-master/en_US/Architecture_Documents/How_to_add_an_EngineConn.md
+++ /dev/null
@@ -1,105 +0,0 @@
-# How to add an EngineConn
-
-Adding an EngineConn is one of the core processes of the computing task preparation phase of Linkis computing governance. It mainly includes the following steps: first, the client side (Entrance or a user client) initiates a request for a new EngineConn to LinkisManager; then LinkisManager asks EngineConnManager to start the EngineConn based on demands and label rules; finally, LinkisManager returns the usable EngineConn to the client side.
-
-Based on the figure below, let's explain the whole process in detail:
-
-![Process of adding a EngineConn](../Images/Architecture/Add_an_EngineConn/add_an_EngineConn_flow_chart.png)
-
-## 1. LinkisManager receives the request from the client side
-
-**Glossary:**
-
-- LinkisManager: The management center of Linkis computing governance capabilities. Its main responsibilities are:
-  1. Based on multi-level combined tags, provide users with available EngineConn after complex routing, resource management and load balancing.
-
-  2. Provide EC and ECM full life cycle management capabilities.
-
-  3. Provide users with multi-Yarn cluster resource management functions based on multi-level combined tags. It is mainly divided into three modules: AppManager, ResourceManager and LabelManager, which support multi-active deployment and offer high availability and easy expansion.
-
-After the AM module receives the client's new-EngineConn request, it first validates the request parameters. Second, it selects the most suitable EngineConnManager (ECM) through complex rules for the subsequent EngineConn startup. Next, it applies to RM for the resources needed to start the EngineConn. Finally, it requests the ECM to create the EngineConn.
-
-The four steps will be described in detail below.
-
-### 1. Request parameter verification
-
-After the AM module receives the engine creation request, it checks the parameters. First, it checks the permissions of the requesting user and the creating user, and then checks the Labels attached to the request. Since in AM's subsequent creation process, Labels are used to find the ECM and to record resource information, etc., you need to ensure that the necessary Labels are present. At this stage, you must bring the Label with UserCreatorLabel (For example: hadoop-IDE) a [...]
-
-### 2. Select an EngineConnManager (ECM)
-
-ECM selection uses the labels passed by the client to pick a suitable ECM service to start the EngineConn. In this step, the LabelManager first searches the registered ECMs using the labels passed by the client and returns them ordered by label match degree. After obtaining the registered ECM list, selection rules are applied to these ECMs. At this stage, rules such as availability check, resource surplus, and machine load have been imple [...]
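-
-Schematically, the selection amounts to filter rules applied over the label-matched candidates, followed by a ranking. The sketch below is purely illustrative: the candidate fields (`available`, `freeMemoryMb`, `load`, `labelMatchScore`) are invented, not the actual Linkis rule classes.
-
-```java
-import java.util.Comparator;
-import java.util.List;
-import java.util.stream.Collectors;
-
-class EcmCandidate {
-    boolean available;       // passes the availability check
-    long freeMemoryMb;       // resource surplus
-    double load;             // machine load
-    int labelMatchScore;     // how well its labels match the request
-}
-
-class EcmSelector {
-    // Keep healthy candidates with enough spare resources, then prefer
-    // better label matches and, among equals, lower machine load.
-    static EcmCandidate select(List<EcmCandidate> byLabelMatch, long requiredMb) {
-        List<EcmCandidate> usable = byLabelMatch.stream()
-                .filter(e -> e.available && e.freeMemoryMb >= requiredMb)
-                .collect(Collectors.toList());
-        return usable.stream()
-                .min(Comparator.<EcmCandidate>comparingInt(e -> -e.labelMatchScore)
-                        .thenComparingDouble(e -> e.load))
-                .orElseThrow(() -> new IllegalStateException("no usable ECM"));
-    }
-}
-```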
-
-### 3. Apply resources required for EngineConn
-
-1. After obtaining the assigned ECM, AM then asks how many resources the client's engine-creation request will use by calling the EngineConnPluginServer service. Here, the resource request is encapsulated, mainly including the Label, the EngineConn startup parameters passed by the client, and the user configuration parameters obtained from the Configuration module. The resource information is obtained by calling the ECP service through RPC.
-
-2. After the EngineConnPluginServer service receives the resource request, it first finds the corresponding engine label from the labels passed in, and selects the EngineConnPlugin of the corresponding engine through that engine label. Then it uses the EngineConnPlugin's resource generator to process the engine startup parameters passed in by the client, calculates the resources required to apply for a new EngineConn this time, and returns them to LinkisManager.
-
-   **Glossary:**
-
-- EngineConnPlugin: the interface that must be implemented when connecting a new computing storage engine to Linkis. This interface mainly covers several capabilities that the EngineConn must provide during startup, including the EngineConn resource generator, the EngineConn startup command generator, and the EngineConn connector. Please refer to the Spark engine implementation class for a concrete implementation: [SparkEngineConnPlugin](https://github.com/WeBankFinTech/Link [...]
-- EngineConnPluginServer: It is a microservice that loads all the EngineConnPlugins and provides externally the required resource generation capabilities of EngineConn and EngineConn's startup command generation capabilities.
-- EngineConnResourceFactory: Calculate the total resources needed when EngineConn starts this time through the parameters passed in.
-- EngineConnLaunchBuilder: Through the incoming parameters, a startup command of the EngineConn is generated to provide the ECM to start the engine.
-3. After AM obtains the engine resources, it then calls the RM service to apply for resources. The RM service uses the incoming Label, the ECM, and the resources applied for this time to make a resource judgment. First, it judges whether the resources of the client corresponding to the Label are sufficient, and then whether the resources of the ECM service are sufficient. If the resources are sufficient, the resource application is approved this time, and the resources of th [...]
-
-### 4. Request ECM for engine creation
-
-1. After completing the resource application for the engine, AM will encapsulate the engine startup request, send it to the corresponding ECM via RPC for service startup, and obtain the instance object of EngineConn.
-2. AM will then determine whether EngineConn is successfully started and become available through the reported information of EngineConn. If it is, the result will be returned, and the process of adding an engine this time will end.
-
-## 2. ECM initiates EngineConn
-
-**Glossary:**
-
-- EngineConnManager: EngineConn's manager. Provides engine life-cycle management, and at the same time reports load information and its own health status to RM.
-- EngineConnBuildRequest: The start engine command passed by LinkisManager to ECM, which encapsulates all tag information, required resources and some parameter configuration information of the engine.
-- EngineConnLaunchRequest: Contains the BML materials, environment variables, ECM required local environment variables, startup commands and other information required to start an EngineConn, so that ECM can build a complete EngineConn startup script based on this.
-
-After ECM receives the EngineConnBuildRequest command passed by LinkisManager, starting EngineConn is mainly divided into three steps: 
-
-1. Request EngineConnPluginServer to obtain the EngineConnLaunchRequest it encapsulates. 
-2. Parse the EngineConnLaunchRequest and encapsulate it into an EngineConn startup script.
-3. Execute the startup script to start EngineConn.
-
-### 2.1 EngineConnPluginServer encapsulates EngineConnLaunchRequest
-
-Through the label information in the EngineConnBuildRequest, obtain the EngineConn type and version that actually need to be started, fetch that EngineConn type's EngineConnPlugin from the memory of EngineConnPluginServer, and convert the EngineConnBuildRequest into an EngineConnLaunchRequest through the EngineConnPlugin's EngineConnLaunchBuilder.
-
-### 2.2 Encapsulate EngineConn startup script
-
-After the ECM obtains the EngineConnLaunchRequest, it downloads the BML materials in the EngineConnLaunchRequest to the local machine and checks whether the necessary local environment variables required by the EngineConnLaunchRequest exist. After this verification passes, the EngineConnLaunchRequest is encapsulated into an EngineConn startup script.
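-
-As a rough illustration of "encapsulating a launch request into a startup script", the sketch below renders a toy launch request into a bash script. The fields `environment` and `command` are invented stand-ins for the richer EngineConnLaunchRequest contents.
-
-```java
-import java.util.List;
-import java.util.Map;
-
-class LaunchRequest {
-    Map<String, String> environment;   // required environment variables
-    List<String> command;              // startup command and its arguments
-}
-
-class LaunchScriptBuilder {
-    // Render the request as a bash script: exports first, then the command.
-    static String build(LaunchRequest req) {
-        StringBuilder sh = new StringBuilder("#!/bin/bash\nset -e\n");
-        req.environment.forEach((k, v) ->
-                sh.append("export ").append(k).append("=\"").append(v).append("\"\n"));
-        sh.append(String.join(" ", req.command)).append('\n');
-        return sh.toString();
-    }
-}
-```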
-
-### 2.3 Execute startup script
-
-Currently, ECM only supports Bash commands for Unix-like systems; that is, only Linux systems can execute the startup script.
-
-Before startup, the sudo command is used to switch to the corresponding requesting user to execute the script, ensuring that the startup user (i.e., the JVM user) is the requesting user on the client side.
-
-After the startup script is executed, ECM monitors the script's execution status and execution log in real time. Once the exit status is non-zero, it immediately reports EngineConn startup failure to LinkisManager and the entire process ends; otherwise, it keeps monitoring the log and status of the startup script until the script execution completes.
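-
-The following sketch mirrors that behaviour with `ProcessBuilder`: run the script as the requesting user via sudo, stream its log in real time, and treat a non-zero exit status as a startup failure. It is a simplified model, not the actual ECM implementation.
-
-```java
-import java.io.BufferedReader;
-import java.io.InputStreamReader;
-
-class ScriptRunner {
-    static void run(String scriptPath, String requestUser) throws Exception {
-        Process p = new ProcessBuilder("sudo", "-u", requestUser, "bash", scriptPath)
-                .redirectErrorStream(true)
-                .start();
-        try (BufferedReader log = new BufferedReader(
-                new InputStreamReader(p.getInputStream()))) {
-            String line;
-            while ((line = log.readLine()) != null) {
-                System.out.println("[engine-log] " + line);  // monitor the log in real time
-            }
-        }
-        if (p.waitFor() != 0) {  // non-zero exit status: report startup failure
-            throw new IllegalStateException("EngineConn startup failed");
-        }
-    }
-}
-```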
-
-## 3. EngineConn initialization
-
-After ECM executes EngineConn's startup script, the EngineConn microservice is officially launched.
-
-**Glossary:**
-
-- EngineConn microservice: Refers to the actual microservices that include an EngineConn and one or more Executors to provide computing power for computing tasks. When we talk about adding an EngineConn, we actually mean adding an EngineConn microservice.
-- EngineConn: The engine connector is the actual connection unit with the underlying computing storage engine, and contains the session information with the actual engine. The difference between it and Executor is that EngineConn only acts as a connection and a client, and does not actually perform calculations. For example, SparkEngineConn, its session information is SparkSession.
-- Executor: As a real computing storage scenario executor, it is the actual computing storage logic execution unit. It abstracts the various capabilities of EngineConn and provides multiple different architectural capabilities such as interactive execution, subscription execution, and responsive execution.
-
-The initialization of EngineConn microservices is generally divided into three stages:
-
-1. Initialize the EngineConn of the specific engine. First, the command-line parameters of the Java main method are used to encapsulate an EngineCreationContext containing the relevant label information, startup information and parameter information, and EngineConn is initialized through the EngineCreationContext to establish the connection between EngineConn and the underlying engine. For example, SparkEngineConn will at this stage initialize a SparkSession, which is used to establish a co [...]
-2. Initialize the Executor. After the EngineConn is initialized, the corresponding Executor will be initialized according to the actual usage scenario to provide service capabilities for subsequent users. For example, the SparkEngineConn in the interactive computing scenario will initialize a series of Executors that can be used to submit and execute SQL, PySpark, and Scala code capabilities, and support the Client to submit and execute SQL, PySpark, Scala and other codes to the SparkEng [...]
-3. Report the heartbeat to LinkisManager regularly, and wait for EngineConn to exit. When the underlying engine corresponding to EngineConn is abnormal, or the maximum idle time is exceeded, or the Executor finishes execution, or the user manually kills it, the EngineConn automatically ends and exits.
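-
-A minimal sketch of the heartbeat-plus-idle-timeout loop from step 3; the names, the interval and the printed "RPC" stand-in are assumptions for illustration only.
-
-```java
-import java.util.concurrent.Executors;
-import java.util.concurrent.ScheduledExecutorService;
-import java.util.concurrent.TimeUnit;
-
-class HeartbeatReporter {
-    private final ScheduledExecutorService timer =
-            Executors.newSingleThreadScheduledExecutor();
-    private volatile long lastUsedMs = System.currentTimeMillis();
-    private final long maxIdleMs;
-
-    HeartbeatReporter(long maxIdleMs) { this.maxIdleMs = maxIdleMs; }
-
-    void start() {
-        timer.scheduleAtFixedRate(() -> {
-            System.out.println("heartbeat: alive");   // stands in for the RPC heartbeat
-            if (System.currentTimeMillis() - lastUsedMs > maxIdleMs) {
-                System.out.println("max idle time exceeded, exiting");
-                timer.shutdown();                     // EngineConn ends and exits
-            }
-        }, 0, 3, TimeUnit.SECONDS);
-    }
-
-    void touch() { lastUsedMs = System.currentTimeMillis(); }  // called on each task
-}
-```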
-
-----
-
-At this point, the process of adding a new EngineConn is basically complete. Finally, let's summarize:
-
-- The client initiates a request for adding EngineConn to LinkisManager.
-- LinkisManager checks the legitimacy of the parameters, first selects the appropriate ECM according to the label, then confirms the resources required for this new EngineConn according to the user's request, applies for resources from the RM module of LinkisManager, and requires ECM to start a new EngineConn as required after the application is passed.
-- ECM first requests EngineConnPluginServer to obtain an EngineConnLaunchRequest containing BML materials, environment variables, ECM required local environment variables, startup commands and other information needed to start an EngineConn, and then encapsulates the startup script of EngineConn, and finally executes the startup script to start the EngineConn.
-- EngineConn initializes the EngineConn of a specific engine, and then initializes the corresponding Executor according to the actual usage scenario, and provides service capabilities for subsequent users. Finally, report the heartbeat to LinkisManager regularly, and wait for the normal end or termination by the user.
-
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Job_submission_preparation_and_execution_process.md b/Linkis-Doc-master/en_US/Architecture_Documents/Job_submission_preparation_and_execution_process.md
deleted file mode 100644
index adb2628..0000000
--- a/Linkis-Doc-master/en_US/Architecture_Documents/Job_submission_preparation_and_execution_process.md
+++ /dev/null
@@ -1,138 +0,0 @@
-# Job submission, preparation and execution process
-
-The submission and execution of computing tasks (Jobs) is the core capability provided by Linkis. It involves almost all modules in the Linkis computing governance architecture and occupies a core position in Linkis.
-
-The whole process, starting with the submission of a user's computing task from the client and ending with the return of the final result, is divided into three stages: submission -> preparation -> execution. The details are shown in the following figure.
-
-![The overall flow chart of computing tasks](../Images/Architecture/Job_submission_preparation_and_execution_process/overall.png)
-
-Among them:
-
-- Entrance, as the entrance to the submission stage, provides task reception, scheduling and job information forwarding capabilities. It is the unified entrance for all computing tasks. It will forward computing tasks to Orchestrator for scheduling and execution.
-- Orchestrator, as the entrance to the preparation phase, mainly provides job analysis, orchestration and execution capabilities.
-- Linkis Manager: The management center of computing governance capabilities. Its main responsibilities are as follows:
-
-  1. ResourceManager: not only has the resource management capabilities of Yarn and Linkis EngineConnManager, but also provides label-based multi-level resource allocation and recovery, giving ResourceManager full resource management capabilities across clusters and across computing resource types;
-  2. AppManager: coordinates and manages all EngineConnManagers and EngineConns; the whole life cycle of EngineConn application, reuse, creation, switching and destruction is handed over to AppManager for management;
-  3. LabelManager: Based on multi-level combined labels, it will provide label support for the routing and management capabilities of EngineConn and EngineConnManager across IDC and across clusters;
-  4. EngineConnPluginServer: Externally provides the resource generation capabilities required to start an EngineConn and EngineConn startup command generation capabilities.
-- EngineConnManager: It is the manager of EngineConn, which provides engine life-cycle management, and at the same time reports load information and its own health status to RM.
-- EngineConn: It is the actual connector between Linkis and the underlying computing storage engines. All user computing and storage tasks will eventually be submitted to the underlying computing storage engine by EngineConn. According to different user scenarios, EngineConn provides full-stack computing capability framework support for interactive computing, streaming computing, off-line computing, and data storage tasks.
-
-## 1. Submission Stage
-
-The submission phase is mainly the interaction of Client -> Linkis Gateway -> Entrance, and the process is as follows:
-
-![Flow chart of submission phase](../Images/Architecture/Job_submission_preparation_and_execution_process/submission.png)
-
-1. First, the Client (such as the front end or the client) initiates a Job request, and the job request information is simplified as follows (for the specific usage of Linkis, please refer to [How to use Linkis](https://github.com/WeBankFinTech/Linkis-Doc/blob/master/en_US/User_Manual/How_To_Use_Linkis.md)):
-```
-POST /api/rest_j/v1/entrance/submit
-```
-
-```json
-{
-    "executionContent": {"code": "show tables", "runType": "sql"},
-    "params": {"variable": {}, "configuration": {}},  //非必须
-    "source": {"scriptPath": "file:///1.hql"}, //非必须,仅用于记录代码来源
-    "labels": {
-        "engineType": "spark-2.4.3",  //指定引擎
-        "userCreator": "johnnwnag-IDE"  // 指定提交用户和提交系统
-    }
-}
-```
-
-2. After Linkis-Gateway receives the request, it determines the target microservice from the serviceName in the URI ``/api/rest_j/v1/${serviceName}/.+``. Here Linkis-Gateway parses out the name as entrance, so the Job is forwarded to the Entrance microservice. It should be noted that if the user specifies a routing label, the Entrance microservice instance with the corresponding label will be selected for forwarding according to the routing label ins [...]
-3. After Entrance receives the Job request, it first simply verifies the legitimacy of the request, then uses RPC to call JobHistory to persist the job information, encapsulates the Job request as a computing task, puts it in the scheduling queue, and waits for it to be consumed by a consumer thread.
-4. The scheduling queue will open up a consumption queue and a consumption thread for each group. The consumption queue is used to store the user computing tasks that have been preliminarily encapsulated. The consumption thread will continue to take computing tasks from the consumption queue for consumption in a FIFO manner. The current default grouping method is Creator + User (that is, submission system + user). Therefore, even if it is the same user, as long as it is a computing task  [...]
-5. After the consuming thread takes out the calculation task, it will submit the calculation task to Orchestrator, which officially enters the preparation phase.
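-
-The grouping and FIFO consumption described in step 4 can be pictured as one queue plus one consumer thread per Creator + User group. This is an illustrative model, not Entrance's actual scheduler classes.
-
-```java
-import java.util.Map;
-import java.util.concurrent.BlockingQueue;
-import java.util.concurrent.ConcurrentHashMap;
-import java.util.concurrent.LinkedBlockingQueue;
-
-class GroupedScheduler {
-    private final Map<String, BlockingQueue<Runnable>> queues = new ConcurrentHashMap<>();
-
-    // One consumption queue and one consumer thread per Creator+User group.
-    void submit(String creator, String user, Runnable task) {
-        String group = creator + "_" + user;
-        queues.computeIfAbsent(group, g -> {
-            BlockingQueue<Runnable> q = new LinkedBlockingQueue<>();
-            Thread consumer = new Thread(() -> {
-                while (true) {
-                    try {
-                        q.take().run();        // FIFO consumption
-                    } catch (InterruptedException e) {
-                        return;
-                    }
-                }
-            }, "consumer-" + g);
-            consumer.setDaemon(true);
-            consumer.start();
-            return q;
-        }).offer(task);
-    }
-}
-```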
-
-## 2. Preparation Stage
-
-There are two main processes in the preparation phase. One is to apply to LinkisManager for an available EngineConn, used to submit and execute the computing task that follows. The other is for Orchestrator to orchestrate the computing tasks submitted by Entrance, converting a user's computing request into a physical execution tree that is handed over to the execution phase, where the computing task is actually executed. 
-
-#### 2.1 Apply to LinkisManager for available EngineConn
-
-If the user has a reusable EngineConn in LinkisManager, the EngineConn is directly locked and returned to Orchestrator, and the entire application process ends.
-
-What counts as a reusable EngineConn? One that matches all the label requirements of the computing task and whose health status is Healthy (the load is low and the actual status is Idle). All EngineConns that meet these conditions are then sorted and selected according to the rules, and the best one is finally locked.
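-
-A compact sketch of this matching rule, with invented fields (`labels`, `healthy`, `load`): keep the EngineConns that contain every required label and are Healthy, pick the best one (lowest load here), and lock it.
-
-```java
-import java.util.Comparator;
-import java.util.List;
-import java.util.Optional;
-import java.util.Set;
-
-class EngineConnInfo {
-    Set<String> labels;
-    boolean healthy;     // low load and actual status Idle
-    double load;
-    boolean locked;
-}
-
-class ReuseSelector {
-    static Optional<EngineConnInfo> pick(List<EngineConnInfo> all, Set<String> required) {
-        return all.stream()
-                .filter(ec -> !ec.locked && ec.healthy && ec.labels.containsAll(required))
-                .min(Comparator.comparingDouble(ec -> ec.load))
-                .map(ec -> { ec.locked = true; return ec; });  // lock the best match
-    }
-}
-```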
-
-If the user does not have a reusable EngineConn, a process to request a new EngineConn will be triggered at this time. Regarding the process, please refer to: [How to add an EngineConn](How_to_add_an_EngineConn.md).
-
-#### 2.2 Orchestrate a computing task
-
-Orchestrator is mainly responsible for arranging a computing task (JobReq) into a physical execution tree (PhysicalTree) that can be actually executed, and providing the execution capabilities of the Physical tree.
-
-Here we first focus on Orchestrator's computing task scheduling capabilities. A flow chart is shown below:
-
-![Orchestration flow chart](../Images/Architecture/Job_submission_preparation_and_execution_process/orchestrate.png)
-
-The main process is as follows:
-
-- Converter: Complete the conversion of the JobReq (task request) submitted by the user to Orchestrator's ASTJob. This step will perform parameter check and information supplementation on the calculation task submitted by the user, such as variable replacement, etc.
-- Parser: Complete the analysis of ASTJob. Split ASTJob into an AST tree composed of ASTJob and ASTStage.
-- Validator: Complete the inspection and information supplement of ASTJob and ASTStage, such as code inspection, necessary Label information supplement, etc.
-- Planner: Convert an AST tree into a Logical tree. The Logical tree at this time has been composed of LogicalTask, which contains all the execution logic of the entire computing task.
-- Optimizer: Convert the Logical tree into a Physical tree and optimize the Physical tree.
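-
-The five phases compose naturally as a function pipeline. The sketch below uses toy stand-in types (`JobReq`, `AstJob`, `LogicalTree`, `PhysicalTree`) rather than Orchestrator's real classes, and elides the parsing/validation details.
-
-```java
-import java.util.function.Function;
-
-class Pipeline {
-    record JobReq(String code) {}
-    record AstJob(String code) {}
-    record LogicalTree(String plan) {}
-    record PhysicalTree(String plan) {}
-
-    static PhysicalTree orchestrate(JobReq req) {
-        Function<JobReq, AstJob> converter = r -> new AstJob(r.code());
-        Function<AstJob, AstJob> parseAndValidate = a -> a;  // split into stages, check, supplement
-        Function<AstJob, LogicalTree> planner = a -> new LogicalTree("logical:" + a.code());
-        Function<LogicalTree, PhysicalTree> optimizer =
-                l -> new PhysicalTree(l.plan().replace("logical", "physical"));
-        return converter.andThen(parseAndValidate)
-                        .andThen(planner)
-                        .andThen(optimizer)
-                        .apply(req);
-    }
-}
-```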
-
-In a physical tree, the majority of nodes are computing strategy logic. Only the middle ExecTask truly encapsulates the execution logic which will be further submitted to and executed at EngineConn. As shown below:
-
-![Physical Tree](../Images/Architecture/Job_submission_preparation_and_execution_process/physical_tree.png)
-
-The execution logic encapsulated by JobExecTask and StageExecTask in the Physical tree depends on the specific type of computing strategy.
-
-For example, under the multi-active computing strategy, for a computing task submitted by a user, the execution logic submitted to EngineConn of different clusters for execution is encapsulated in two ExecTasks, and the related strategy logic is reflected in the parent node (StageExecTask(End)) of the two ExecTasks.
-
-Here, we take the multi-reading scenario under the multi-active computing strategy as an example.
-
-In the multi-reading scenario, only one ExecTask needs to return a result. Once that result is returned, the Physical tree can be marked as successful. However, the Physical tree only has the ability to execute sequentially according to dependencies, and cannot terminate the execution of individual nodes. Once a node is canceled or fails to execute, the entire Physical tree would be marked as failed. At this time, StageExecTask (End) is needed to ensure that the Physical tree can not only ca [...]
-
-The orchestration process of Linkis Orchestrator is similar to that of many SQL parsing engines (such as the SQL parsers of Spark and Hive). But in fact, the orchestration capability of Linkis Orchestrator is built for the computing governance field, to serve users' differing computing governance needs, whereas a SQL parsing engine is a parsing and orchestration engine oriented to the SQL language. Here is a simple distinction:
-
-1. What Linkis Orchestrator mainly solves is the orchestration requirements that different computing tasks impose on computing strategies. For example, to achieve multi-active execution, Orchestrator will, for a computing task submitted by a user and based on the "multi-active" computing strategy requirements, compile a Physical tree that submits this computing task to multiple clusters. In the process of constructing the entire Physical tree, various possible abnormal sc [...]
-2. The orchestration ability of Linkis Orchestrator has nothing to do with the programming language. In theory, as long as an engine has been adapted to Linkis, all the programming languages it supports can be orchestrated, while a SQL parsing engine only cares about the analysis and execution of SQL, and is only responsible for parsing a piece of SQL into an executable Physical tree and finally computing the result.
-3. Linkis Orchestrator also has the ability to parse SQL, but SQL parsing is just one of Orchestrator Parser's parsing implementations for the SQL programming language. The Parser of Linkis Orchestrator also considers introducing Apache Calcite to parse SQL, supporting the splitting of a user SQL that spans multiple computing engines (each must be a computing engine already integrated with Linkis) into multiple sub-SQLs, which are submitted to each corresponding engine during the execution phase. Finally, [...]
-
-Please refer to [Orchestrator Architecture Design](https://github.com/WeBankFinTech/Linkis-Doc/blob/master/en_US/Architecture_Documents/Orchestrator/Orchestrator_architecture_doc.md) for more details. 
-
-After the analysis and orchestration by Linkis Orchestrator, the computing task has been transformed into an executable Physical tree. Orchestrator submits this Physical tree to its Execution module and enters the final execution stage.
-
-## 3. Execution Stage
-
-The execution stage is mainly divided into the following two steps, which are the last two phases of the capabilities provided by Linkis Orchestrator:
-
-![Flow chart of the execution stage](../Images/Architecture/Job_submission_preparation_and_execution_process/execution.png)
-
-The main process is as follows:
-
-- Execution: Analyze the dependencies of the Physical tree, and execute them sequentially from the leaf nodes according to the dependencies.
-- Reheater: Once the execution of a node in the Physical tree is completed, it triggers a reheat. Reheating allows the Physical tree to be dynamically adjusted according to real-time execution. For example: if a leaf node is detected to have failed and it supports retry (the failure was caused by throwing ReTryExecption), the Physical tree is automatically adjusted and a retry parent node with exactly the same content is added above that leaf node.
-
-Let us go back to the Execution stage, where we focus on the execution logic of the ExecTask node that encapsulates the user computing task submitted to EngineConn.
-
-1. As mentioned earlier, the first step in the preparation phase is to obtain a usable EngineConn from LinkisManager. After ExecTask gets this EngineConn, it will submit the user's computing task to EngineConn through an RPC request.
-2. After EngineConn receives the computing task, it will asynchronously submit it to the underlying computing storage engine through the thread pool, and then immediately return an execution ID.
-3. After ExecTask gets this execution ID, it can then use this ID to asynchronously pull the execution information of the computing task (such as status, progress, log, result set, etc.).
-4. At the same time, EngineConn will monitor the execution of the underlying computing storage engine in real time through multiple registered Listeners. If the computing storage engine does not support registering Listeners, EngineConn will start a daemon thread for the computing task and periodically pull the execution status from the computing storage engine.
-5. EngineConn pushes the execution information back in real time, through RPC requests, to the microservice where Orchestrator is located.
-6. After the Receiver of the microservice receives the execution status, it will broadcast it through the ListenerBus, and the Orchestrator Execution will consume the event and dynamically update the execution status of the Physical tree.
-7. The result set generated by the computing task is written to storage media such as HDFS on the EngineConn side. EngineConn returns only the result set path through RPC; Execution consumes the event and broadcasts the obtained result set path through ListenerBus, so that the Listener registered by Entrance with Orchestrator can consume the result set path and persist it to JobHistory.
-8. After the execution of the computing task on the EngineConn side is completed, through the same logic, the Execution will be triggered to update the state of the ExecTask node of the Physical tree, so that the Physical tree will continue to execute until the entire tree is completely executed. At this time, Execution will broadcast the completion status of the calculation task through ListenerBus.
-9. After the Listener that Entrance registered with the Orchestrator consumes the state event, it updates the job state in JobHistory, and the entire task execution is complete.
-
-----
-
-Finally, let's take a look at how the client side knows the state of the calculation task and obtains the calculation result in time, as shown in the following figure:
-
-![Results acquisition process](../Images/Architecture/Job_submission_preparation_and_execution_process/result_acquisition.png)
-
-The specific process is as follows:
-
-1. The client periodically polls to request Entrance to obtain the status of the computing task.
-2. Once the status is flipped to success, it sends a request for job information to JobHistory, and gets all the result set paths.
-3. Initiate a query file content request to PublicService through the result set path, and obtain the content of the result set.
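-
-Put together, the client-side loop looks roughly like the sketch below. The three client interfaces and the "Succeed" status string are assumptions for illustration, not the exact Linkis SDK API.
-
-```java
-import java.util.List;
-
-class ResultPoller {
-    interface EntranceClient { String status(String taskId); }
-    interface JobHistoryClient { List<String> resultSetPaths(String taskId); }
-    interface PublicServiceClient { String readFile(String path); }
-
-    // Poll Entrance until the job succeeds, then fetch the result set paths
-    // from JobHistory and read each result set through PublicService.
-    static void waitAndFetch(String taskId, EntranceClient entrance,
-                             JobHistoryClient history, PublicServiceClient ps)
-            throws InterruptedException {
-        while (!"Succeed".equals(entrance.status(taskId))) {
-            Thread.sleep(1000);   // periodic polling
-        }
-        for (String path : history.resultSetPaths(taskId)) {
-            System.out.println(ps.readFile(path));
-        }
-    }
-}
-```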
-
-With that, the entire process of job submission -> preparation -> execution is complete.
-
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Microservice_Governance_Services/Gateway.md b/Linkis-Doc-master/en_US/Architecture_Documents/Microservice_Governance_Services/Gateway.md
deleted file mode 100644
index 02c1db2..0000000
--- a/Linkis-Doc-master/en_US/Architecture_Documents/Microservice_Governance_Services/Gateway.md
+++ /dev/null
@@ -1,34 +0,0 @@
-## Gateway Architecture Design
-
-#### Brief
-The Gateway is the primary entry point for Linkis to accept client and external requests, such as receiving job execution requests, and then forwarding the execution requests to specific eligible Entrance services.
-The bottom layer of the entire architecture is implemented based on "SpringCloudGateway". The upper layer adds module designs for HTTP request parsing, session permissions, label routing and WebSocket multiplexed forwarding. The overall architecture is shown below.
-### Architecture Diagram
-
-![Gateway diagram of overall architecture](../../Images/Architecture/Gateway/gateway_server_global.png)
-
-#### Architecture Introduction
-- gateway-core: Gateway's core interface definition module, mainly defines the "GatewayParser" and "GatewayRouter" interfaces, corresponding to request parsing and routing according to the request; at the same time, it also provides the permission verification tool class named "SecurityFilter".
-- spring-cloud-gateway: This module integrates all dependencies related to "SpringCloudGateway", process and forward requests of the HTTP and WebSocket protocol types respectively.
-- gateway-server-support: The driver module of Gateway, relies on the spring-cloud-gateway module to implement "GatewayParser" and "GatewayRouter" respectively, among which "DefaultLabelGatewayRouter" provides the function of label routing.
-- gateway-httpclient-support: Provides a generic HTTP client class for accessing Gateway services, on which more client implementations can be built.
-- instance-label: External instance label module, providing a service interface named "InsLabelService" which is used to create routing labels and associate them with application instances.
-
-The detailed design involved is as follows:
-
-#### 1、Request Routing And Forwarding (With Label Information)
-First, after passing through the dispatcher of "SpringCloudGateway", the request enters the gateway's filter list and then the two main logic branches, "GatewayAuthorizationFilter" and "SpringCloudGatewayWebsocketFilter". 
-The filters integrate the "DefaultGatewayParser" and "DefaultGatewayRouter" classes; from Parser to Router, the corresponding parse and route methods are executed. 
-"DefaultGatewayParser" and "DefaultGatewayRouter" also contain custom Parsers and Routers, which are executed in order of priority.
-Finally, the service instance selected by the "DefaultGatewayRouter" is handed over to the upper layer for forwarding.
-Now, we take the job execution request forwarding with label information as an example, and draw the following flowchart:  
-![Gateway Request Routing](../../Images/Architecture/Gateway/gateway_server_dispatcher.png)
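-
-The parse-then-route chain with priority ordering can be modelled as below. The `priority()` method and the router list are illustrative assumptions, not the actual gateway plugin API.
-
-```java
-import java.util.Comparator;
-import java.util.List;
-import java.util.Optional;
-
-class LabelRoutingDemo {
-    record Request(String uri, List<String> labels) {}
-
-    interface GatewayRouter {
-        int priority();
-        Optional<String> route(Request req);   // returns a service instance id
-    }
-
-    // Routers run in priority order; the first one that finds an instance wins.
-    static String dispatch(Request req, List<GatewayRouter> routers) {
-        return routers.stream()
-                .sorted(Comparator.comparingInt(GatewayRouter::priority))
-                .map(r -> r.route(req))
-                .filter(Optional::isPresent)
-                .map(Optional::get)
-                .findFirst()
-                .orElseThrow(() -> new IllegalStateException("no route for " + req.uri()));
-    }
-}
-```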
-
-
-#### 2、WebSocket Connection Forwarding Management
-By default, "Spring Cloud Gateway" only routes and forwards a WebSocket request once, and cannot perform dynamic switching. 
-But under the Linkis gateway architecture, each information interaction is accompanied by a corresponding uri address that guides routing to different backend services.
-In addition to the "WebSocketService" which is responsible for connecting with the front-end and the client, 
-and the "WebSocketClient" which is responsible for connecting with the backend service, a series of "GatewayWebSocketSessionConnection" lists are cached in the middle.
-A "GatewayWebSocketSessionConnection" represents the connection between a session and multiple backend service instances.  
-![Gateway WebSocket Forwarding](../../Images/Architecture/Gateway/gatway_websocket.png)
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Microservice_Governance_Services/README.md b/Linkis-Doc-master/en_US/Architecture_Documents/Microservice_Governance_Services/README.md
deleted file mode 100644
index 9dc4f83..0000000
--- a/Linkis-Doc-master/en_US/Architecture_Documents/Microservice_Governance_Services/README.md
+++ /dev/null
@@ -1,32 +0,0 @@
-## **Background**
-
-Microservice governance includes three main microservices: Gateway, Eureka and Open Feign.
-It is used to solve Linkis's service discovery and registration, unified gateway, request forwarding, inter-service communication, load balancing and other issues. 
-At the same time, Linkis 1.0 will also provide support for Nacos. The entire Linkis is a complete microservice architecture, and each business process requires multiple microservices to complete.
-
-## **Architecture diagram**
-
-![](../../Images/Architecture/linkis-microservice-gov-01.png)
-
-## **Architecture Introduction**
-
-1. Linkis Gateway  
-As the gateway entrance of Linkis, Linkis Gateway is mainly responsible for request forwarding, user access authentication and WebSocket communication. 
-The Gateway of Linkis 1.0 also added Label-based routing and forwarding capabilities. 
-A WebSocket router and forwarder is implemented on top of Spring Cloud Gateway in Linkis; it is used to establish WebSocket connections with clients.
-After the connection is established, it automatically analyzes the client's WebSocket requests, determines through the rules which backend microservice each request should be forwarded to, 
-and forwards the request to the corresponding backend microservice instance.  
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[Linkis Gateway](Gateway.md)
-
-2. Linkis Eureka  
-Mainly responsible for service registration and discovery. Eureka consists of multiple instances (service instances), which can be divided into two types: Eureka Server and Eureka Client. 
-For ease of understanding, we divide Eureka Client into Service Providers and Service Consumers. Eureka Server provides service registration and discovery. 
-A Service Provider registers its own service with Eureka so that service consumers can find it.
-A Service Consumer obtains a list of registered services from Eureka so that it can consume those services.
-
-3. Linkis has implemented its own underlying RPC communication scheme based on Feign. As the underlying communication solution, Linkis RPC integrates the SDK into the microservices that need it. 
-A microservice can be both the request caller and the request receiver.
-As the request caller, a microservice requests the Receiver of the target microservice through its Sender.
-As the request receiver, a microservice provides a Receiver to process the requests sent by Senders and complete synchronous or asynchronous responses.
-   
-   ![](../../Images/Architecture/linkis-microservice-gov-03.png)
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/BML.md b/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/BML.md
deleted file mode 100644
index 69e671d..0000000
--- a/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/BML.md
+++ /dev/null
@@ -1,93 +0,0 @@
-## Background
-
-BML (Material Library Service) is the material management system of Linkis. It is mainly used to store various file data of users, including user scripts, resource files and third-party JAR packages, and it can also store the class libraries that an engine needs at runtime.
-
-It has the following functions:
-
-1) Supports various types of files, both text and binary. Users in the big data field can store their script files and material compression packages in the system.
-
-2) The service is stateless and supports multi-instance deployment for high service availability. When the system is deployed, multiple instances can be deployed, each providing services independently without interfering with the others; all information is stored in the database for sharing.
-
-3) Multiple usage modes. Provides both a REST interface and an SDK; users can choose according to their needs.
-
-4) Files are appended, to avoid too many small HDFS files. Many small HDFS files would degrade overall HDFS performance; BML adopts a file-append scheme that combines multiple versions of a resource file into one large file, effectively reducing the number of files in HDFS.
-
-5) Precise permission control and safe storage of user resource file content. Resource files often contain important content that users want only themselves to read.
-
-6) Provides life cycle management of file upload, update, download and other operational tasks.
-
-## Architecture diagram
-
-![BML Architecture Diagram](../../Images/Architecture/bml-02.png)
-
-## Architecture description
-
-1. The Service layer includes resource management, resource upload, resource download, resource sharing, and project resource management.
-
-Resource management is responsible for basic operations such as adding, deleting, modifying and querying resources, controlling access rights, and checking whether files are expired.
-
-2. File version control
-   Each BML resource file has version information. Each update operation on the same resource generates a new version. Of course, historical version query and download operations are also supported. BML uses the version information table to record the offset and size of each version of the resource file in HDFS storage, so multiple versions of data can be stored in one HDFS file.
-
-3. Resource file storage
-   HDFS files are mainly used as actual data storage. HDFS files can effectively ensure that the material library files are not lost. The files are appended to avoid too many small HDFS files.
-
-### Core Process
-
-**upload files:**
-
-1. Determine the operation type of the file uploaded by the user: a first upload or an update upload. For a first upload, a new resource information record needs to be added; the system generates a globally unique resource_id and a resource_location for this resource. The first version A1 of resource A is stored at the resource_location in the HDFS file system, and after it is stored, the first version is marked as V00001. If it is a [...]
-
-2. Upload the file stream to the specified HDFS file. If it is an update, the new content is appended to the end of the previous content.
-
-3. Add a new version record, each upload will generate a new version record. In addition to recording the metadata information of this version, the most important thing is to record the storage location of the version of the file, including the file path, start location, and end location.
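-
-The essence of the append scheme is recording the byte range of each version inside one big file. A local-filesystem sketch follows (0-based offsets for simplicity, whereas the download description below implies 1-based start_byte; the real BML writes to HDFS):
-
-```java
-import java.io.IOException;
-import java.nio.file.Files;
-import java.nio.file.Path;
-import java.nio.file.StandardOpenOption;
-
-class AppendUpload {
-    record VersionRecord(String version, long startByte, long endByte) {}
-
-    // Append the new version's bytes to the one big file and record the
-    // byte range; this is what makes ranged downloads possible later.
-    static VersionRecord upload(Path bigFile, byte[] content, int versionNo) throws IOException {
-        long start = Files.exists(bigFile) ? Files.size(bigFile) : 0;
-        Files.write(bigFile, content, StandardOpenOption.CREATE, StandardOpenOption.APPEND);
-        long end = Files.size(bigFile) - 1;
-        return new VersionRecord(String.format("V%05d", versionNo), start, end);
-    }
-}
-```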
-
-**download file:**
-
-1. When users download resources, they need to specify two parameters: one is resource_id and the other is version. If version is not specified, the latest version will be downloaded by default.
-
-2. After the user passes the two parameters resource_id and version to the system, the system queries the resource_version table, finds the corresponding resource_location, start_byte and end_byte, and uses the skipByte method of stream processing to skip the first (start_byte - 1) bytes, then reads up to end_byte. After the read succeeds, the stream is returned to the user.
-
-3. Insert a successful download record into resource_download_history.
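-
-The byte-range read described in step 2 can be sketched with plain Java streams in place of the HDFS client; `startByte` is treated as 1-based, matching the skip of (start_byte - 1) above.
-
-```java
-import java.io.IOException;
-import java.io.InputStream;
-import java.io.OutputStream;
-
-class RangedDownload {
-    static void download(InputStream in, OutputStream out,
-                         long startByte, long endByte) throws IOException {
-        long toSkip = startByte - 1;              // skip the earlier versions
-        while (toSkip > 0) {
-            long skipped = in.skip(toSkip);
-            if (skipped <= 0) throw new IOException("unexpected end of stream");
-            toSkip -= skipped;
-        }
-        long remaining = endByte - startByte + 1; // copy exactly this version's bytes
-        byte[] buf = new byte[8192];
-        while (remaining > 0) {
-            int n = in.read(buf, 0, (int) Math.min(buf.length, remaining));
-            if (n < 0) throw new IOException("unexpected end of stream");
-            out.write(buf, 0, n);
-            remaining -= n;
-        }
-    }
-}
-```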
-
-## Database Design
-
-1. Resource information table (resource)
-
-| Field name | Function | Remarks |
-|-------------------|------------------------------|----------------------------------|
-| resource_id | A string that uniquely identifies a resource globally | UUID can be used for identification |
-| resource_location | The location where resources are stored | For example, hdfs:///tmp/bdp/\${USERNAME}/ |
-| owner | The owner of the resource | e.g. zhangsan |
-| create_time | Record creation time | |
-| is_share | Whether to share | 0 means not to share, 1 means to share |
-| update_time | Last update time of the resource | |
-| is_expire | Whether the resource expires | |
-| expire_time | Record resource expiration time | |
-
-2. Resource version information table (resource_version)
-
-| Field name | Function | Remarks |
-|-------------------|--------------------|----------|
-| resource_id | Uniquely identifies the resource | Joint primary key |
-| version | The version of the resource file | |
-| start_byte | Start byte count of resource file | |
-| end_byte | End byte of the resource file | |
-| size | Resource file size | |
-| resource_location | Resource file placement location | |
-| start_time | Record upload start time | |
-| end_time | Record upload end time | |
-| updater | Record update user | |
-
-3. Resource download history table (resource_download_history)
-
-| Field | Function | Remarks |
-|-------------|---------------------------|--------------------------------|
-| resource_id | Record the resource_id of the downloaded resource | |
-| version | Record the version of the downloaded resource | |
-| downloader | Record downloaded users | |
-| start_time | Record download start time | |
-| end_time | Record download end time | |
-| status | Whether the download succeeded | 0 means success, 1 means failure |
-| err_msg | Failure reason | null means success, otherwise records the failure reason |
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Cache.md b/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Cache.md
deleted file mode 100644
index 71d83d3..0000000
--- a/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Cache.md
+++ /dev/null
@@ -1,95 +0,0 @@
-## **CSCache Architecture**
-### **Issues that need resolving**
-
-### 1.1 Memory structure problems to be solved:
-
-1. Support splitting by ContextType: speed up storage and query performance
-
-2. Support splitting by different ContextIDs: needed to complete ContextID-level metadata isolation
-
-3. Support LRU: recycle entries according to a specific algorithm
-
-4. Support searching by keywords: Support indexing by keywords
-
-5. Support indexing: support indexing directly through ContextKey
-
-6. Support traversal: need to support traversal according to ContextID and ContextType
-
-### 1.2 Loading and parsing problems to be solved:
-
-1. Support parsing ContextValue into memory data structure: It is necessary to complete the parsing of ContextKey and value to find the corresponding keywords.
-
-2. Need to interface with the Persistence module to complete the loading and analysis of the ContextID content
-
-### 1.3 Metrics and cleaning mechanism problems to be solved:
-
-1. When JVM memory is insufficient, the cache can be cleaned based on memory usage and frequency of use
-
-2. Support statistics on the memory usage of each ContextID
-
-3. Support statistics on the frequency of use of each ContextID
-
-## **ContextCache Architecture**
-
-The architecture of ContextCache is shown in the following figure:
-
-![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-01.png)
-
-1. ContextService: complete the provision of external interfaces, including additions, deletions, and changes;
-
-2. Cache: complete the storage of context information, map storage through ContextKey and ContextValue
-
-3. Index: The established keyword index, which stores the mapping between the keywords of the context information and the ContextKey;
-
-4. Parser: complete the keyword analysis of the context information;
-
-5. LoadModule: completes the loading of information from the persistence layer when ContextCache does not have the corresponding ContextID information;
-
-6. AutoClear: when JVM memory is insufficient, completes the on-demand cleaning of ContextCache;
-
-7. Listener: collects metric information of ContextCache, such as memory usage and access counts.
-
-## **ContextCache storage structure design**
-
-![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-02.png)
-
-The storage structure of ContextCache is divided into three layers:
-
-**ContextCache:** stores the mapping between ContextID and ContextIDValue, and can recycle ContextIDs according to the LRU algorithm;
-
-**ContextIDValue:** the CSKeyValueContext that stores all the context information and indexes of a ContextID, and counts the memory and usage records of that ContextID.
-
-**CSKeyValueContext:** contains the CSInvertedIndexSet, which stores and supports keyword indexes by type, and the CSKeyValueMapSet, which stores the ContextKeys and ContextValues.
-
-CSInvertedIndexSet: categorize and store keyword indexes through CSType
-
-CSKeyValueMapSet: categorize and store context information through CSType
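-
-A toy sketch of this three-layer structure, using an access-ordered `LinkedHashMap` for the LRU recycling at the ContextID layer. It only illustrates the shape; the real CSKeyValueContext and CSInvertedIndexSet classes are richer and also maintain the keyword index.
-
-```java
-import java.util.LinkedHashMap;
-import java.util.Map;
-import java.util.concurrent.ConcurrentHashMap;
-
-class ContextCacheSketch {
-    private final int maxContextIds;
-    // ContextID -> ContextType -> (ContextKey -> ContextValue), access-ordered for LRU.
-    private final LinkedHashMap<String, Map<String, Map<String, Object>>> cache =
-            new LinkedHashMap<String, Map<String, Map<String, Object>>>(16, 0.75f, true) {
-                @Override
-                protected boolean removeEldestEntry(
-                        Map.Entry<String, Map<String, Map<String, Object>>> eldest) {
-                    return size() > maxContextIds;   // LRU recycling of ContextIDs
-                }
-            };
-
-    ContextCacheSketch(int maxContextIds) { this.maxContextIds = maxContextIds; }
-
-    synchronized void put(String contextId, String type, String key, Object value) {
-        cache.computeIfAbsent(contextId, id -> new ConcurrentHashMap<>())
-             .computeIfAbsent(type, t -> new ConcurrentHashMap<>())
-             .put(key, value);
-    }
-
-    synchronized Object get(String contextId, String type, String key) {
-        Map<String, Map<String, Object>> byType = cache.get(contextId);
-        if (byType == null) return null;   // the real design would invoke the LoadModule here
-        Map<String, Object> byKey = byType.get(type);
-        return byKey == null ? null : byKey.get(key);
-    }
-}
-```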
-
-## **ContextCache UML Class Diagram Design**
-
-![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-03.png)
-
-## **ContextCache Timing Diagram**
-
-The following figure shows the overall process of using ContextID, KeyWord and ContextType to look up the corresponding ContextKeyValue in ContextCache.
-![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-04.png)
-
-Note: The ContextIDValueGenerator goes to the persistence layer to pull the Array[ContextKeyValue] of the ContextID, and parses the ContextKeyValues through ContextKeyValueParser to store their indexes and content.
-
-The other interface processes provided by ContextCacheService are similar and are not repeated here.
-
-## **KeyWord parsing logic**
-
-The concrete entity bean of a ContextValue needs to use the annotation \@keywordMethod on each get method whose value can serve as a keyword. For example, the getTableName method of Table must be annotated with \@keywordMethod.
-
-![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-05.png)
-
-When ContextKeyValueParser parses a ContextKeyValue, it scans all methods of the passed-in object annotated with KeywordMethod, calls each get method, obtains the returned object's toString value, parses it through user-selectable rules, and stores the pieces in the keyword collection. The rules support separators and regular expressions.
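-
-A minimal sketch of this reflection-based scan. The annotation and class names are illustrative approximations; splitting is shown with `String.split`, whose argument is itself a regular expression, covering both separator and regex rules.
-
-```java
-import java.lang.annotation.ElementType;
-import java.lang.annotation.Retention;
-import java.lang.annotation.RetentionPolicy;
-import java.lang.annotation.Target;
-import java.lang.reflect.Method;
-import java.util.Arrays;
-import java.util.HashSet;
-import java.util.Set;
-
-@Retention(RetentionPolicy.RUNTIME)
-@Target(ElementType.METHOD)
-@interface KeywordMethod {}
-
-class Table {
-    private final String tableName;
-    Table(String tableName) { this.tableName = tableName; }
-
-    @KeywordMethod                       // marks getTableName as a keyword source
-    public String getTableName() { return tableName; }
-}
-
-class KeywordParser {
-    // Scan annotated no-arg get methods, call them, split the toString()
-    // result with the rule, and collect the pieces as keywords.
-    static Set<String> parse(Object bean, String splitRule) throws Exception {
-        Set<String> keywords = new HashSet<>();
-        for (Method m : bean.getClass().getMethods()) {
-            if (m.isAnnotationPresent(KeywordMethod.class) && m.getParameterCount() == 0) {
-                Object value = m.invoke(bean);
-                if (value != null) {
-                    keywords.addAll(Arrays.asList(value.toString().split(splitRule)));
-                }
-            }
-        }
-        return keywords;
-    }
-}
-```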
-
-Precautions:
-
-1. The annotation will be defined in the core module of cs
-
-2. The annotated get method cannot take parameters
-
-3. The toString method of the object returned by the get method must return the keyword
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Client.md b/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Client.md
deleted file mode 100644
index 058f9ba..0000000
--- a/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Client.md
+++ /dev/null
@@ -1,61 +0,0 @@
-## **CSClient design ideas and implementation**
-
-
-CSClient is the client through which each microservice interacts with the CSServer group. CSClient needs to provide the following capabilities.
-
-1. The ability of microservices to apply for a context object from cs-server
-
-2. The ability of microservices to register context information with cs-server
-
-3. The ability of microservices to update context information to cs-server
-
-4. The ability of microservices to obtain context information from cs-server
-
-5. Certain special microservices can sniff operations that have modified context information in cs-server
-
-6. CSClient can give a clear indication when the cs-server cluster fails
-
-7. CSClient needs to provide the ability to copy all the context information of csid1 into a new csid2 for scheduled execution
-
-> The overall approach is to send HTTP requests through the linkis-httpclient that comes with Linkis, sending requests and receiving responses by implementing various Action and Result entity classes.
-
-### 1. The ability to apply for context objects
-
-To apply for a context object: for example, when a user creates a new workflow on the front end, dss-server needs to apply to cs-server for a context object. When applying, the identification information of the workflow (project name, workflow name) is sent through CSClient to the CSServer (at this point the gateway should route the request to a random instance, because no csid information is carried yet); once the application context returns the correct [...]
-
-### 2. Ability to register contextual information
-
-> The ability to register context: for example, a user uploads a resource file on the front-end page and the file content is uploaded to dss-server; dss-server stores the content in bml, and then needs to register the resourceid and version obtained from bml with cs-server. In this case, the registration capability of csclient is needed. Registering means passing in a csid and a cskey
-> together with a csvalue (the resourceid and version).
-
-### 3. Ability to update registered context
-
-> The ability to update registered context information. For example, if a user uploads a resource file test.jar, csserver already holds its registered information; if the user updates the resource file while editing the workflow, cs-server needs to update this content. At this time, the update interface of csclient needs to be called
-
-### 4. The ability to get context
-
-The context information registered with csserver needs to be read during variable replacement, resource file download, and when downstream nodes use information generated by upstream nodes. For example, when the engine side executes code and needs to download bml resources, it interacts with csserver through csclient to obtain the resourceid and version of the file in bml and then downloads it.
-
-### 5. Certain special microservices can sniff operations that have modified context information in cs-server
-
-This capability is illustrated by the following example. A widget node has a strong linkage with its upstream sql node: the user writes a sql in the sql node, and the metadata of the sql result set is fields a, b and c. The widget node behind it is bound to this sql, and the user can edit these three fields on the page. Then the user changes the sql statement and the metadata becomes the four fields a, b, c and d; at this point the user has to refresh manually. We hope that if the script is changed, [...]
-
-### 6. CSClient needs to provide a copy of all context information of csid1 as a new csid2 for scheduling execution
-
-Once a user publishes a project, he hopes to tag all the information of the project, similar to git. The resource files and custom variables here will not change any more, but some dynamic information, such as the generated result sets, will still be updated under the csid. So csclient needs to provide an interface for copying all the context information of csid1, for microservices to call
-
-## **Implementation of ClientListener Module**
-
-For a client, sometimes it wants to know as soon as possible that a certain csid and cskey have changed in the cs-server. For example, the csclient of visualis needs to know that the upstream sql node has changed, and therefore needs to be notified. The server has a listener module, and the client needs one as well. For example, if a client wants to monitor the changes of a certain cskey of a certain csid, it needs to register the cskey with the callbackEngine [...]
-
-![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-01.png)
-
-![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-02.png)
-
-## **Implementation of GatewayRouter**
-
-
-The Gateway plug-in implements Context forwarding. The forwarding logic of the Gateway plug-in is carried out through the GatewayRouter, and two cases need to be distinguished. The first is applying for a context object: at this time, the information carried by the CSClient does not contain a csid, so the judgment logic should go through the registration information of Eureka, and the first request sent will randomly enter a microservice instance.
-The second case is that a ContextID is carried. We need to parse the csid: the parsing method is to obtain the information of each instance by string cutting, and then use Eureka to determine, through the instance information, whether this microservice instance still exists. If it exists, the request is sent to this microservice instance.
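-
-A minimal sketch of the two routing cases (the csid layout and helper names here are simplified assumptions, not the real implementation):
-
-```java
-import java.util.List;
-import java.util.concurrent.ThreadLocalRandom;
-
-// Sketch of the GatewayRouter decision described above.
-class ContextRouterSketch {
-    static String route(String csidOrNull, List<String> eurekaInstances) {
-        if (csidOrNull == null) {
-            // Case 1: no csid carried yet (applying for a context object):
-            // route to a random instance from the Eureka registration list.
-            return eurekaInstances.get(ThreadLocalRandom.current().nextInt(eurekaInstances.size()));
-        }
-        // Case 2: a ContextID is carried: cut the instance info out of the csid
-        // (an "instance|realId" layout is assumed purely for illustration) and
-        // check via Eureka whether that microservice instance still exists.
-        String instance = csidOrNull.split("\\|")[0];
-        if (eurekaInstances.contains(instance)) return instance;
-        throw new IllegalStateException("CS instance no longer registered: " + instance);
-    }
-}
-```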
-
-![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-03.png)
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_HighAvailable.md b/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_HighAvailable.md
deleted file mode 100644
index 76c85c3..0000000
--- a/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_HighAvailable.md
+++ /dev/null
@@ -1,86 +0,0 @@
-## **CS HA Architecture Design**
-
-### 1. CS HA architecture summary
-
-#### (1) CS HA architecture diagram
-
-![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-01.png)
-
-#### (2) Problems to be solved
-
-- HA of the Context instance object
-
-- The client generates a CSID request when creating a workflow
-
-- The alias list of CS Servers
-
-- Unified CSID generation and parsing rules
-
-#### (3) Main design ideas
-
-① Load balancing
-
-When the client creates a new workflow, it randomly requests the HA module of one of the servers, with equal probability, to generate a new HAID. The HAID information includes the main server information (hereinafter referred to as the main instance), a candidate instance (the instance with the lowest load among the remaining servers), and a corresponding ContextID. The generated HAID is bound to the workflow and persisted to the database, and then all [...]
-
-② High availability
-
-In subsequent operations, when the client or gateway determines that the main instance is unavailable, the operation request is forwarded to the standby instance for processing, thereby achieving high service availability. The HA module of the standby instance will first verify the validity of the request based on the HAID information.
-
-③ Alias mechanism
-
-An alias mechanism is adopted for the machines: the Instance information contained in the HAID uses a custom alias, and an alias mapping queue is maintained in the background. The client uses the HAID when interacting with the backend, while backend components use the ContextID when interacting with each other; when a specific operation is implemented, a dynamic proxy mechanism converts the HAID to the ContextID for processing.
-
-### 2. Module design
-
-#### (1) Module diagram
-
-![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-02.png)
-
-#### (2) Specific modules
-
-① ContextHAManager module
-
-Provides interfaces for the CS Server to call to generate the CSID and HAID, and provides an alias conversion interface based on dynamic proxies;
-
-Calls the persistence module interface to persist CSID information;
-
-② AbstractContextHAManager module
-
-The abstraction of ContextHAManager, supporting multiple ContextHAManager implementations;
-
-③ InstanceAliasManager module
-
-Based on the RPC module, provides Instance and alias conversion interfaces, maintains the alias mapping queue, provides alias and CS Server instance queries, and provides an interface to verify whether a host is valid;
-
-④ HAContextIDGenerator module
-
-Generates a new HAID, encapsulates it in the format agreed with the client, and returns it to the client. The HAID structure is as follows:
-
-\${length of first instance}\${length of second instance}{instance alias 1}{instance alias 2}{actual ID}, where the actual ID is set to the ContextID Key;
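-
-A small sketch of encoding and parsing this layout, assuming (purely for illustration) that the two length fields are two-digit, zero-padded numbers:
-
-```java
-// Sketch of the HAID layout: {len1}{len2}{alias1}{alias2}{actual ID}.
-class HAIDSketch {
-    static String encode(String alias1, String alias2, String contextIdKey) {
-        return String.format("%02d%02d%s%s%s",
-                alias1.length(), alias2.length(), alias1, alias2, contextIdKey);
-    }
-
-    static String[] decode(String haid) {
-        int len1 = Integer.parseInt(haid.substring(0, 2));
-        int len2 = Integer.parseInt(haid.substring(2, 4));
-        String alias1 = haid.substring(4, 4 + len1);               // main instance alias
-        String alias2 = haid.substring(4 + len1, 4 + len1 + len2); // backup instance alias
-        String actualId = haid.substring(4 + len1 + len2);         // the ContextID Key
-        return new String[]{alias1, alias2, actualId};
-    }
-}
-```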
-
-⑤ ContextHAChecker module
-
-Provides the HAID verification interface. Each received request is checked: whether the ID format is valid, and whether the current host is the primary or backup instance. If it is the primary instance, the verification passes; if it is the backup instance, the verification passes only if the primary instance is invalid.
-
-⑥ BackupInstanceGenerator module
-
-Generates a backup instance and attaches it to the CSID information;
-
-⑦ MultiTenantBackupInstanceGenerator interface
-
-(Reserved interface, not implemented yet)
-
-### 3. UML Class Diagram
-
-![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-03.png)
-
-### 4. HA module operation sequence diagram
-
-![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-04.png)
-
-CSID generated for the first time:
-The client sends a request, and the Gateway forwards it to any server. The HA module generates the HAID, including the main instance, the backup instance and the CSID, and completes the binding of the workflow and the HAID.
-
-When the client sends a change request and the Gateway determines that the main Instance is unavailable, it forwards the request to the backup Instance for processing. After the backup Instance verifies that the HAID is valid, it loads the context and processes the request.
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Listener.md b/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Listener.md
deleted file mode 100644
index 933d384..0000000
--- a/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Listener.md
+++ /dev/null
@@ -1,33 +0,0 @@
-## **Listener Architecture**
-
-In DSS, when a node changes its metadata information, the context information of the entire workflow changes. We expect all nodes to perceive the change and automatically update their metadata. This is implemented in a listener pattern, using a heartbeat mechanism to poll and maintain the metadata consistency of the context information.
-
-### **Client registration itself, CSKey registration and CSKey update process**
-
-![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-01.png)
-
-The main process is as follows:
-
-1. Registration operation: The clients client1, client2, client3, and client4 register themselves and the CSKeys they want to monitor with the csserver through HTTP requests. The Service obtains the callback engine instance through the external interface and registers the clients and their corresponding CSKeys.
-
-2. Update operation: If the ClientX node updates the content of a CSKey, the Service updates the CSKey cached in the ContextCache, and the ContextCache delivers the update operation to the ListenerBus. The ListenerBus notifies the specific listener to consume it (that is, the ContextKeyCallbackEngine updates the CSKeys corresponding to the client). Consumed events are automatically removed.
-
-3. Heartbeat mechanism:
-
-All clients use heartbeat information to detect whether the values of the CSKeys in the ContextKeyCallbackEngine have changed.
-
-The ContextKeyCallbackEngine returns the updated CSKey values to all registered clients through the heartbeat mechanism. If a client's heartbeat times out, the client is removed.
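-
-The server-side bookkeeping this implies can be sketched as follows (the names and timeout value are assumptions, not the real ContextKeyCallbackEngine):
-
-```java
-import java.util.Map;
-import java.util.concurrent.ConcurrentHashMap;
-
-// Sketch: track each client's last heartbeat and evict clients that time out.
-class HeartbeatRegistrySketch {
-    private static final long TIMEOUT_MS = 30_000; // assumed timeout
-    private final Map<String, Long> lastHeartbeat = new ConcurrentHashMap<>();
-
-    void onHeartbeat(String clientId) {
-        lastHeartbeat.put(clientId, System.currentTimeMillis());
-    }
-
-    // Called periodically: remove every client whose heartbeat has timed out.
-    void evictTimedOutClients() {
-        long now = System.currentTimeMillis();
-        lastHeartbeat.entrySet().removeIf(e -> now - e.getValue() > TIMEOUT_MS);
-    }
-}
-```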
-
-### **Listener UML class diagram**
-
-![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-02.png)
-
-Interface: ListenerManager
-
-External: provides a ListenerBus for event delivery.
-
-Internal: provides a callback engine for specific event registration, access, update, and heartbeat processing logic.
-
-## **Listener CallbackEngine sequence diagram**
-
-![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-03.png)
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Persistence.md b/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Persistence.md
deleted file mode 100644
index b57c8c7..0000000
--- a/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Persistence.md
+++ /dev/null
@@ -1,8 +0,0 @@
-## **CSPersistence Architecture**
-
-### Persistence UML diagram
-
-![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-persistence-01.png)
-
-
-The Persistence module mainly defines ContextService persistence related operations. The entities mainly include CSID, ContextKeyValue, CSResource, and CSTable.
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Search.md b/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Search.md
deleted file mode 100644
index 8dea6f2..0000000
--- a/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Search.md
+++ /dev/null
@@ -1,127 +0,0 @@
-## **CSSearch Architecture**
-### **Overall architecture**
-
-As shown below:
-
-![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-01.png)
-
-1. ContextSearch: the query entry. It accepts query conditions defined in Map form and returns the corresponding results according to the conditions.
-
-2. Building module: each condition type corresponds to a Parser, which is responsible for converting a condition in Map form into a Condition object; this is implemented by calling the logic of ConditionBuilder. Conditions with complex logical relationships are optimized by ConditionOptimizer, which produces query plans based on cost-based algorithms.
-
-3. Execution module: filters the results that match the conditions out of the Cache. According to the query target, there are three execution modes: Ruler, Fetcher, and Matcher. The specific logic is described later.
-
-4. Evaluation module: responsible for calculating the execution cost of conditions and keeping statistics on historical execution status.
-
-### **Query Condition Definition (ContextSearchCondition)**
-
-A query condition specifies how to filter out the part of a ContextKeyValue collection that meets the condition. Query conditions can be combined through logical operations to form more complex query conditions.
-
-1. Supports ContextType, ContextScope, and KeyWord matching
-
-    1. Each corresponds to a Condition type
-
-    2. In the Cache, these should have corresponding indexes
-
-2. Supports contains/regex matching modes for the key
-
-    1. ContainsContextSearchCondition: the key contains a string
-
-    2. RegexContextSearchCondition: the key matches a regular expression
-
-3. Supports the logical operations or, and, and not
-
-    1. Unary operation UnaryContextSearchCondition:
-
-> Supports logical operations on a single parameter, such as NotContextSearchCondition
-
-    2. Binary operation BinaryContextSearchCondition:
-
-> Supports logical operations on two parameters, defined as LeftCondition and RightCondition, such as OrContextSearchCondition and AndContextSearchCondition
-
-4. Each logical operation corresponds to an implementation class of the above subclasses
-
-5. The UML class diagram of this part is as follows:
-
-![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-02.png)
-
-### **Construction of query conditions**
-
-1. Supports construction through ContextSearchConditionBuilder: when constructing, if ContextType, ContextScope, KeyWord, and contains/regex matchers are declared at the same time, they are automatically combined with the And logical operation
-
-2. Supports logical operations between Conditions, returning new Conditions: And, Or, and Not (considering the condition1.or(condition2) form, the top-level Condition interface needs to define the logical operation methods; see the sketch after this list)
-
-3. Supports building from a Map through the ContextSearchParser corresponding to each underlying implementation class
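-
-As a rough illustration of how such conditions compose (the types and methods here are assumed shapes, not the actual ContextSearch API):
-
-```java
-// Sketch: conditions compose through and/or into more complex conditions.
-interface Condition {
-    default Condition and(Condition other) { return new And(this, other); }
-    default Condition or(Condition other) { return new Or(this, other); }
-}
-record Contains(String str) implements Condition {}    // key contains a string
-record Regex(String pattern) implements Condition {}   // key matches a regex
-record And(Condition left, Condition right) implements Condition {}
-record Or(Condition left, Condition right) implements Condition {}
-
-class ConditionDemo {
-    public static void main(String[] args) {
-        // (contains "table" OR matches "^tmp_.*") AND contains "db"
-        Condition c = new Contains("table").or(new Regex("^tmp_.*")).and(new Contains("db"));
-        System.out.println(c);
-    }
-}
-```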
-
-### **Execution of query conditions**
-
-1. The three functional modes of query conditions:
-
-    1. Ruler: filters an eligible sub-array of ContextKeyValue out of an Array
-
-    2. Matcher: determines whether a single ContextKeyValue meets the condition
-
-    3. Fetcher: filters an Array of eligible ContextKeyValue out of the ContextCache
-
-2. Each bottom-level Condition has a corresponding Execution, responsible for maintaining the corresponding Ruler, Matcher, and Fetcher, as sketched below.
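-
-The three modes can be pictured as three small interfaces (illustrative shapes inferred from the description above, not the real signatures):
-
-```java
-import java.util.List;
-
-interface ContextKeyValue {}  // stand-in for the real ContextKeyValue
-interface ContextCache {}     // stand-in for the real ContextCache
-
-interface Ruler {    // filter an eligible sub-array out of an Array
-    List<ContextKeyValue> rule(List<ContextKeyValue> input);
-}
-interface Matcher {  // decide whether a single value meets the condition
-    boolean matches(ContextKeyValue value);
-}
-interface Fetcher {  // pull an eligible Array straight out of the ContextCache
-    List<ContextKeyValue> fetch(ContextCache cache);
-}
-```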
-
-### **Query entry ContextSearch**
-
-Provides a search interface that receives a Map as a parameter and filters the corresponding data out of the Cache.
-
-1. Use a Parser to convert the condition in Map form into a Condition object
-
-2. Obtain cost information through the Optimizer, and determine the query order according to the cost information
-
-3. Execute the corresponding Ruler/Fetcher/Matcher logic through the corresponding Execution to obtain the search result
-
-![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-03.png)
-
-### **Query Optimization**
-
-1. OptimizedContextSearchCondition maintains the Cost and Statistics information of a condition:
-
-    1. Cost information: the CostCalculator is responsible for judging whether a certain Condition's cost can be calculated; if it can, it returns the corresponding Cost object
-
-    2. Statistics information: start/end/execution time, number of input lines, number of output lines
-
-2. Implement a CostContextSearchOptimizer, whose optimize method optimizes a Condition based on its cost and converts it into an OptimizedContextSearchCondition object. The specific logic is described as follows:
-
-    1. Disassemble a complex Condition into a tree structure based on the combination of logical operations. Each leaf node is a basic simple Condition; each non-leaf node is a logical operation.
-
-> Tree A as shown in the figure below is a complex condition composed of five simple conditions of ABCDE through various logical operations.
-
-![](./../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-04.png)
-<center>(Tree A)</center>
-
-2. The execution of these Conditions is actually depth-first, traversing the tree from left to right. Moreover, thanks to the commutativity of the logical operations, the left-right order of a node's children in the Condition tree can be exchanged, so all possible trees in all possible execution orders can be enumerated.
-
-> Tree B as shown in the figure below is another possible sequence of tree A above, which is exactly the same as the execution result of tree A, except that the execution order of each part has been adjusted.
-
-![](./../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-05.png)
-<center>(Tree B)</center>
-
-3. For each tree, the cost is calculated from the leaf nodes and aggregated up to the root node, which gives the final cost of the tree; finally, the tree with the smallest cost is taken as the optimal execution order.
-
-> The rules for calculating node cost are as follows:
-
-1. For leaf nodes, each node has two attributes: Cost and Weight. Cost is the cost calculated by the CostCalculator. Weight is assigned according to the execution order of the nodes; the current default is 1 on the left and 0.5 on the right, with adjustment discussed later (the reason for assigning weights is that in some cases the condition on the left can already directly determine whether the entire combinatorial logic matches, so the condition on the right does not [...]
-
-2. For non-leaf nodes, Cost = the sum of Cost × Weight over all child nodes; the weight assignment logic is consistent with that of leaf nodes (see the sketch below).
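-
-The recursion these two rules describe is small; a sketch, using the default left weight 1.0 and right weight 0.5 stated above, with example costs taken from the A=10, B=100 values used below:
-
-```java
-// Sketch of the cost rules: a leaf's Cost comes from the CostCalculator,
-// a non-leaf's Cost is the sum of child Cost x Weight (left 1.0, right 0.5).
-class TreeCostSketch {
-    static class Node {
-        double leafCost;   // only meaningful for leaves
-        Node left, right;  // both null for leaves
-        Node(double c) { leafCost = c; }
-        Node(Node l, Node r) { left = l; right = r; }
-    }
-
-    static double cost(Node n) {
-        if (n.left == null) return n.leafCost;            // leaf
-        return cost(n.left) * 1.0 + cost(n.right) * 0.5;  // weighted child sum
-    }
-
-    public static void main(String[] args) {
-        // A and B with costs 10 and 100: combined cost = 10*1.0 + 100*0.5 = 60.
-        System.out.println(cost(new Node(new Node(10), new Node(100))));
-    }
-}
-```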
-
-> Taking tree A and tree B as examples, calculate the costs of the two trees respectively, as shown in the figure below; the number in each node is Cost\|Weight. Assuming that the costs of the five simple conditions A, B, C, D, and E are 10, 100, 50, 10, and 100 respectively, it can be concluded that the cost of tree B is less than that of tree A, so tree B is the better plan.
-
-
-<center class="half">
-    <img src="./../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-06.png" width="300"> <img src="./../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-07.png" width="300">
-</center>
-
-4. Use the CostCalculator to measure the cost of simple conditions:
-
-    1. Conditions acting on an index: the cost is determined by the distribution of the index values. For example, if the length of the Array obtained from the Cache by condition A is 100, and by condition B is 200, then the cost of condition A is less than that of B.
-
-    2. Conditions that need to be traversed:
-
-        1. An initial Cost is assigned according to the matching mode of the condition itself: for example, 100 for Regex, 10 for Contains, etc. (the specific values will be adjusted as appropriate during implementation)
-
-        2. Based on historical query efficiency, such as throughput per unit time, the real-time Cost is obtained by continuous adjustment on top of the initial Cost.
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Service.md b/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Service.md
deleted file mode 100644
index 05c6168..0000000
--- a/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Service.md
+++ /dev/null
@@ -1,53 +0,0 @@
-## **ContextService Architecture**
-
-### **Horizontal Division**
-
-Horizontally divided into three modules: Restful, Scheduler, Service
-
-#### Restful Responsibilities:
-
-    Encapsulate the request as httpjob and submit it to the Scheduler
-
-#### Scheduler Responsibilities:
-
-    Find the corresponding service through the ServiceName of the httpjob protocol to execute the job
-
-#### Service Responsibilities:
-
-    The module that actually executes the request logic, encapsulates the ResponseProtocol, and wakes up the wait thread in Restful
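-
-The hand-off between the three modules can be sketched as a submit-and-wait pattern (the names here are illustrative; the real scheduler and protocol classes differ):
-
-```java
-import java.util.concurrent.*;
-
-// Sketch: Restful wraps the request as a job, the Scheduler dispatches it,
-// and the Service completes the future, waking up the waiting Restful thread.
-class SubmitAndWaitSketch {
-    static final ExecutorService scheduler = Executors.newFixedThreadPool(4);
-
-    static String handleRequest(String request) throws Exception {
-        CompletableFuture<String> response = new CompletableFuture<>();
-        scheduler.submit(() -> response.complete(executeService(request)));
-        return response.get(30, TimeUnit.SECONDS); // the Restful "wait thread"
-    }
-
-    static String executeService(String request) {
-        return "response for " + request; // the Service's actual request logic
-    }
-}
-```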
-
-### **Vertical Division**
-Vertically divided into four modules: Listener, History, ContextId, and Context:
-
-#### Listener responsibilities:
-
-1. Responsible for the registration and binding of the client side (write to the database and register in the CallbackEngine)
-
-2. Heartbeat interface, return Array[ListenerCallback] through CallbackEngine
-
-#### History Responsibilities:
-Create and remove history, operate Persistence for DB persistence
-
-#### ContextId Responsibilities:
-Mainly docking with Persistence for ContextId creation, update and removal, etc.
-
-#### Context responsibilities:
-
-1. For removal, reset and other methods, first operate Persistence for DB persistence, and update ContextCache
-
-2. Encapsulate the query condition and go to the ContextSearch module to obtain the corresponding ContextKeyValue data
-
-The steps for requesting access are as follows:
-![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-01.png)
-
-## **UML Class Diagram**
-![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-02.png)
-
-## **Scheduler thread model**
-
-The design needs to ensure that Restful's thread pool does not fill up
-
-![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-03.png)
-
-The sequence diagram is as follows:
-![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-04.png)
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/README.md b/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/README.md
deleted file mode 100644
index c6af94c..0000000
--- a/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/README.md
+++ /dev/null
@@ -1,123 +0,0 @@
-## **Background**
-
-### **What is Context**
-
-Context is all the information necessary to keep a certain operation going. For example: when reading three books at the same time, the page you have reached in each book is the context for continuing to read that book.
-
-### **Why do you need CS (Context Service)?**
-
-CS is used to solve the problem of data and information sharing across multiple systems in a data application development process.
-
-For example, system B needs to use a piece of data generated by system A. The usual practice is as follows:
-
-1. System B calls the data access interface developed by system A;
-
-2. System B reads the data that system A has written into a shared storage.
-
-With CS, systems A and B only need to interact with the CS: they write the data and information that need to be shared into the CS, and read the data and information they need from the CS, without any extra external system development and adaptation. This greatly reduces the complexity and coupling of information sharing between systems, and makes the boundaries of each system clearer.
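-
-Conceptually, the sharing pattern is just a context-scoped key-value exchange; a minimal stand-in sketch (the real CS API is far richer than this):
-
-```java
-import java.util.Map;
-import java.util.concurrent.ConcurrentHashMap;
-
-// Sketch: system A writes a shared value under a context key, system B reads it.
-class ContextShareSketch {
-    static final Map<String, Map<String, String>> cs = new ConcurrentHashMap<>();
-
-    static void put(String csid, String key, String value) {
-        cs.computeIfAbsent(csid, id -> new ConcurrentHashMap<>()).put(key, value);
-    }
-    static String get(String csid, String key) {
-        return cs.getOrDefault(csid, Map.of()).get(key);
-    }
-
-    public static void main(String[] args) {
-        put("csid-1", "resource.path", "hdfs:///shared/output"); // system A writes
-        System.out.println(get("csid-1", "resource.path"));      // system B reads
-    }
-}
-```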
-
-## **Product Range**
-
-![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-01.png)
-
-
-### Metadata context
-
-The metadata context defines the metadata specification.
-
-Metadata context relies on data middleware, and its main functions are as follows:
-
-1. Connects to the data middleware and obtains all user metadata information (including Hive table metadata, online database table metadata, and other NoSQL metadata such as HBase, Kafka, etc.)
-
-2. All nodes that need to access metadata, whether existing metadata or metadata in the application template, must go through the metadata context. The metadata context records all metadata information used by the application template.
-
-3. New metadata generated by each node must be registered with the metadata context.
-
-4. When the application template is extracted, the metadata context is abstracted for the application template (mainly, the library tables used are turned into the \${db}.table form to avoid data permission problems) and all dependent metadata information is packaged.
-
-The metadata context is the basis of interactive workflows and of application templates. Imagine: when a Widget is defined, how can it know the dimensions of each indicator defined by DataWrangler? How can Qualitis verify the chart report generated by the Widget?
-
-### Data context
-
-The data context defines the data specification.
-
-The data context depends on data middleware and Linkis computing middleware. The main functions are as follows:
-
-1. Connects to the data middleware and obtains all user data information.
-
-2. Connects to the computing middleware and obtains the data storage information of all nodes.
-
-3. When any node needs to write temporary results, it must go through the data context, which allocates the storage uniformly.
-
-4. When any node needs to access data, it must go through the data context.
-
-5. The data context distinguishes between dependent data and generated data. When the application template is extracted, all dependent data is abstracted and packaged for the application template.
-
-### Resource context
-
-The resource context defines the resource specification.
-
-The resource context mainly interacts with the Linkis computing middleware. It mainly manages the following:
-
-1. User resource files (such as Jar, Zip files, properties files, etc.)
-
-2. User UDF
-
-3. User algorithm package
-
-4. User script
-
-### Environmental context
-
-The environmental context defines the environmental specification.
-
-The main functions are as follows:
-
-1. Operating System
-
-2. Software, such as Hadoop, Spark, etc.
-
-3. Package dependencies, such as Mysql-JDBC.
-
-### Object context
-
-The runtime context is all the context information retained when the application template (workflow) is defined and executed.
-
-It is used to assist in defining the workflow/application template, and to prompt for and complete all necessary information when the workflow/application template is executed.
-
-The runtime context is mainly used by Linkis.
-
-
-## **CS Architecture Diagram**
-
-![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-02.png)
-
-## **Architecture Description:**
-
-### 1. Client
-The entrance for external access to CS; the Client module provides the HA function;
-[Enter Client architecture design](ContextService_Client.md)
-
-### 2. Service Module
-Provides a Restful interface to encapsulate and process the CS requests submitted by the client;
-[Enter Service architecture design](ContextService_Service.md)
-
-### 3. ContextSearch
-The context query module provides rich and powerful query capabilities for the client to find the context's key-value pairs;
-[Enter ContextSearch architecture design](ContextService_Search.md)
-
-### 4. Listener
-The CS listener module provides synchronous and asynchronous event consumption capabilities, and can notify the Client in real time once a Zookeeper-like Key-Value is updated;
-[Enter Listener architecture design](ContextService_Listener.md)
-
-### 5. ContextCache
-The context memory cache module provides the ability to quickly retrieve the context, and to monitor and clean up JVM memory usage;
-[Enter ContextCache architecture design](ContextService_Cache.md)
-
-### 6. HighAvailable
-Provide CS high availability capability;
-[Enter HighAvailable architecture design](ContextService_HighAvailable.md)
-
-### 7. Persistence
-The persistence function of CS;
-[Enter Persistence architecture design](ContextService_Persistence.md)
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/PublicService.md b/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/PublicService.md
deleted file mode 100644
index 6224be1..0000000
--- a/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/PublicService.md
+++ /dev/null
@@ -1,34 +0,0 @@
-
-## **Background**
-
-PublicService is a comprehensive service composed of multiple sub-modules such as "configuration", "jobhistory", "udf", and "variable". Linkis
-1.0 adds label management on top of version 0.9. With PublicService, users do not need to set the parameters again for every job:
-many variables, functions, and configurations can be reused once the user has completed the settings, and they can of course also be shared with other users.
-
-## **Architecture diagram**
-
-![Diagram](../../Images/Architecture/linkis-publicService-01.png)
-
-## **Architecture Introduction**
-
-1. linkis-configuration: Provides query and save operations for global settings and general settings, especially engine configuration parameters.
-
-2. linkis-jobhistory: Dedicated to the storage and query of historical execution tasks. Users can obtain historical tasks through the interface provided by jobhistory, including logs, status, and execution content.
-Historical tasks also support paging queries. Administrators can view all historical tasks, while ordinary users can only view their own.
-
-3. linkis-udf: Provides the user function management capability in Linkis, covering shared functions, personal functions, system functions, and the functions used by an engine.
-Once a user selects one, it is automatically loaded when the engine starts, so users can directly reference it in code and reuse it across different scripts.
-
-4. linkis-variable: Provides the global variable management capability in Linkis, storing and querying user-defined global variables.
-
-5. linkis-instance-label: Provides two modules, "label server" and "label client", for labeling Engines and EMs. It also provides node-based label addition, deletion, modification, and query capabilities.
-The main functions are as follows:
-
--   Provides resource management capabilities for some specific labels to assist RM in more refined resource management.
-
--   Provides labeling capabilities for users. The user label is automatically added for judgment when applying for an engine.
-
--   Provides the label analysis module, which can parse a user's request into a set of labels.
-
--   Provides node label management capabilities, mainly used to provide the label CRUD capabilities of nodes and label resource management, marking the maximum, minimum, and used resources of a label.
-
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/README.md b/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/README.md
deleted file mode 100644
index c9ddf68..0000000
--- a/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/README.md
+++ /dev/null
@@ -1,91 +0,0 @@
-PublicEnhancementService (PS) architecture design
-=====================================
-
-PublicEnhancementService (PS): Public enhancement service, a module that provides functions such as unified configuration management, context service, material library, data source management, microservice management, and historical task query for the other microservice modules.
-
-![](../../Images/Architecture/PublicEnhencementArchitecture.png)
-
-Introduction to the second-level modules:
-==============
-
-BML material library
----------
-
-BML is the Linkis material management system, mainly used to store users' various file data, including user scripts, resource files, third-party Jar packages, etc.; it can also store the class libraries that an engine needs at runtime.
-
-| Core Class | Core Function |
-|-----------------|------------------------------------|
-| UploadService | Provide resource upload service |
-| DownloadService | Provide resource download service |
-| ResourceManager | Provides a unified management entry for uploading and downloading resources |
-| VersionManager | Provides resource version marking and version management functions |
-| ProjectManager | Provides project-level resource management and control capabilities |
-
-Unified configuration management
--------------------------
-
-Configuration provides a "user-engine-application" three-level configuration management solution, which provides users with the function of configuring custom engine parameters under various access applications.
-
-| Core Class | Core Function |
-|----------------------|--------------------------------|
-| CategoryService | Provides management services for application and engine catalogs |
-| ConfigurationService | Provides a unified management service for user configuration |
-
-ContextService context service
-------------------------
-
-ContextService is used to solve the problem of data and information sharing across multiple systems in a data application development process.
-
-| Core Class | Core Function |
-|---------------------|------------------------------------------|
-| ContextCacheService | Provides a cache service for context information |
-| ContextClient | Provides the ability for other microservices to interact with the CSServer group |
-| ContextHAManager | Provide high-availability capabilities for ContextService |
-| ListenerManager | The ability to provide a message bus |
-| ContextSearch | Provides query entry |
-| ContextService | Implements the overall execution logic of the context service |
-
-Datasource data source management
---------------------
-
-Datasource provides the ability to connect to different data sources for other microservices.
-
-| Core Class | Core Function |
-|-------------------|--------------------------|
-| datasource-server | Provide the ability to connect to different data sources |
-
-InstanceLabel microservice management
------------------------
-
-InstanceLabel provides registration and labeling functions for other microservices connected to linkis.
-
-| Core Class | Core Function |
-|-----------------|--------------------------------|
-| InsLabelService | Provides microservice registration and label management functions |
-
-Jobhistory historical task management
-----------------------
-
-Jobhistory provides users with functions related to Linkis historical task queries, progress, and log display, and provides a unified historical task view for administrators.
-
-| Core Class | Core Function |
-|------------------------|----------------------|
-| JobHistoryQueryService | Provide historical task query service |
-
-Variable user-defined variable management
---------------------------
-
-Variable provides users with functions related to the storage and use of custom variables.
-
-| Core Class | Core Function |
-|-----------------|-------------------------------------|
-| VariableService | Provides functions related to the storage and use of custom variables |
-
-UDF user-defined function management
----------------------
-
-UDF provides users with the function of custom functions, which can be introduced by users when writing code.
-
-| Core Class | Core Function |
-|------------|------------------------|
-| UDFService | Provide user-defined function service |
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/README.md b/Linkis-Doc-master/en_US/Architecture_Documents/README.md
deleted file mode 100644
index 7f5acde..0000000
--- a/Linkis-Doc-master/en_US/Architecture_Documents/README.md
+++ /dev/null
@@ -1,18 +0,0 @@
-## 1. Document Structure
-
-Linkis 1.0 divides all microservices into three categories: public enhancement services, computing governance services, and microservice governance services. The following figure shows the architecture of Linkis 1.0.
-
-![Linkis1.0 Architecture Figure](./../Images/Architecture/Linkis1.0-architecture.png)
-
-The specific responsibilities of each category are as follows:
-
-1. Public enhancement services are the material library services, context services, data source services, and public services that Linkis 0.X already provides.
-2. The microservice governance services are Spring Cloud Gateway, Eureka, and Open Feign, already provided by Linkis 0.X; Linkis 1.0 will also provide support for Nacos.
-3. Computing governance services are the core focus of Linkis 1.0, comprehensively upgrading Linkis's ability to control user tasks across the three overall stages of submission, preparation, and execution.
-
-The following is a directory listing of Linkis1.0 architecture documents:
-
-1. For the characteristics of the Linkis1.0 architecture, please read [The difference between Linkis1.0 and Linkis0.x](DifferenceBetween1.0&0.x.md).
-2. For documents related to the Linkis1.0 public enhancement services, please read [Public Enhancement Service](Public_Enhancement_Services/README.md).
-3. For documents related to Linkis1.0 microservice governance, please read [Microservice Governance](Microservice_Governance_Services/README.md).
-4. For documents related to the Linkis1.0 computing governance services, please read [Computation Governance Service](Computation_Governance_Services/README.md).
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Deployment_Documents/Cluster_Deployment.md b/Linkis-Doc-master/en_US/Deployment_Documents/Cluster_Deployment.md
deleted file mode 100644
index 57f3118..0000000
--- a/Linkis-Doc-master/en_US/Deployment_Documents/Cluster_Deployment.md
+++ /dev/null
@@ -1,98 +0,0 @@
-Introduction to Distributed Deployment Scheme
-==================
-
-Linkis's stand-alone deployment is simple, but it cannot be used in a production environment, because too many processes on the same server put the server under too much pressure. The choice of deployment plan depends on the company's user scale, user habits, and the number of simultaneous users of the cluster. Generally speaking, we choose the deployment method based on the number of simultaneous Linkis users and the users' preferences among the execution engines.
-
-1.Multi-node deployment method reference
-------------------------------------------
-
-Linkis1.0 still maintains the SpringCloud-based microservice architecture, in which each microservice supports multiple active deployment schemes. Of course, different microservices play different roles in the system; some are called frequently and may be under high load. **On the machine where EngineConnManager is installed, the memory load will be relatively high because the users' engine processes are started there, and the load of other type [...]
-
-Total resources used by EngineConnManager = total memory + total number of cores =
-number of concurrent users \* (memory occupied by all types of engines) \* maximum concurrency per user + number of concurrent users \*
-(number of cores occupied by all types of engine conns) \* maximum concurrency per user
-
-For example, when only the spark, hive, and python engines are used, the maximum concurrency of a single user is 1, and 50 people are online at the same time, with a Spark driver memory of 1G, a Hive
-client memory of 1G, a python client memory of 1G, and each engine using 1 core, the total is 50 \*(1+1+1)G \*
-1 + 50 \*(1+1+1) cores \*1 = 150G of memory + 150 CPU cores.
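-
-The same arithmetic expressed as a quick sanity check (all values are taken from the example above):
-
-```java
-// 50 concurrent users, three engine types at 1G / 1 core each, concurrency 1.
-class EcmResourceEstimate {
-    public static void main(String[] args) {
-        int users = 50, maxConcurrencyPerUser = 1;
-        int memoryPerUserGb = 1 + 1 + 1; // spark driver + hive client + python client
-        int coresPerUser    = 1 + 1 + 1; // one core per engine type
-        System.out.println("memory: " + users * memoryPerUserGb * maxConcurrencyPerUser + "G");
-        System.out.println("cores:  " + users * coresPerUser * maxConcurrencyPerUser);
-    }
-}
-```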
-
-During distributed deployment, the memory occupied by each microservice itself can be estimated at 2G per microservice. When there are a large number of users, it is recommended to increase the memory of ps-publicservice to 6G, and to reserve 10G of memory as a buffer.
-The following configuration assumes that **each user starts two engines at the same time**. **For machines with 64G of memory**, the reference configuration is as follows:
-
-- 10-50 people online at the same time
-
-> **Server configuration recommendation**: 4 servers, named S1, S2, S3, S4
-
-| Service | Host name | Remark |
-|---------------|-----------|------------------|
-| cg-engineconnmanager | S1, S2 | Each machine is deployed separately |
-| Other services | S3, S4 | Eureka high availability deployment |
-
-- 50-100 people online at the same time
-
-> **Server configuration recommendation**: 6 servers, named S1, S2, S3, S4, S5, S6
-
-| Service | Host name | Remark |
-|----------------------|-----------|------------------|
-| cg-engineconnmanager | S1-S4 | Each machine is deployed separately |
-| Other services | S5, S6 | Eureka high availability deployment |
-
-- 100-300 people online at the same time
-
-> **Server configuration recommendation**: 12 servers, named S1, S2...S12
-
-| Service | Host name | Remark |
-|----------------------|-----------|------------------|
-| cg-engineconnmanager | S1-S10 | Each machine is deployed separately |
-| Other services | S11, S12 | Eureka high availability deployment |
-
-- 300-500 people online at the same time
-
-> **Server configuration recommendation**: 20 servers, named S1, S2...S20
-
-| Service | Host name | Remark |
-|----------------------|-----------|-----------------|
-| cg-engineconnmanager | S1-S18 | Each machine is deployed separately |
-| Other services | S19, S20 | Eureka high-availability deployment, some microservices can be expanded if the request volume is tens of thousands, and the current active-active deployment can support thousands of users in the industry |
-
-- More than 500 users at the same time (estimated based on 800 people online at the same time)
-
-> **Server configuration recommendation**: 34 servers, named S1, S2...S34
-
-| Service | Host name | Remark |
-|----------------------|-----------|------------------------------|
-| cg-engineconnmanager | S1-S32 | Each machine is deployed separately |
-| Other services | S33, S34 | Eureka high-availability deployment, some microservices can be expanded if the request volume is tens of thousands, and the current active-active deployment can support thousands of users in the industry |
-
-2.Linkis microservices distributed deployment configuration parameters
----------------------------------
-
-In linkis1.0, we have optimized and integrated the startup parameters. Some important startup parameters of each microservice, such as the microservice IP, port, and registry address, are loaded through the conf/linkis-env.sh file. The way of modifying the parameters has changed a little. Taking the active-active deployment of the machines **server1 and server2** as an example, the Eureka instances need to register with each other.
-
-On the server1 machine, you need to change the value in **conf/linkis-env.sh**
-
-``
-EUREKA_URL=http://$EUREKA_INSTALL_IP:$EUREKA_PORT/eureka/
-``
-
-change into:
-
-``
-EUREKA_URL=http://$EUREKA_INSTALL_IP:$EUREKA_PORT/eureka/,http://server2:port/eureka/
-``
-
-In the same way, on the server2 machine, you need to change the value in **conf/linkis-env.sh**
-
-``
-EUREKA_URL=http://$EUREKA_INSTALL_IP:$EUREKA_PORT/eureka/
-``
-
-change into:
-
-``
-EUREKA_URL=http://$EUREKA_INSTALL_IP:$EUREKA_PORT/eureka/,http://server1:port/eureka/
-``
-
-After the modification, start the microservices and open the Eureka registration page from the web side; you can see that the microservices have been successfully registered with Eureka, and the DS
-Replicas section will also display the adjacent replica nodes of the cluster.
-
diff --git a/Linkis-Doc-master/en_US/Deployment_Documents/EngineConnPlugin_installation_document.md b/Linkis-Doc-master/en_US/Deployment_Documents/EngineConnPlugin_installation_document.md
deleted file mode 100644
index 990f55b..0000000
--- a/Linkis-Doc-master/en_US/Deployment_Documents/EngineConnPlugin_installation_document.md
+++ /dev/null
@@ -1,82 +0,0 @@
-EngineConnPlugin installation document
-===============================
-
-This article introduces the use of Linkis EngineConnPlugins, mainly covering compilation and installation.
-
-## 1. Compilation and packaging of EngineConnPlugins
-
-Since linkis1.0, engines are managed by EngineConnManager, and the EngineConnPlugin (ECP) takes effect in real time.
-To make it easy for EngineConnManager to load the corresponding EngineConnPlugin by labels, the plugin needs to be packaged according to the following directory structure (taking hive as an example):
-```
-hive: engine home directory, must be the name of the engine
-└── dist # Dependencies and configuration required for engine startup; each engine version needs a corresponding version directory here
-    └── v1.2.1 # Must start with 'v' followed by the engine version number '1.2.1'
-        └── conf # Configuration file directory required by the engine
-        └── lib # Dependency package required by EngineConnPlugin
-└── plugin # EngineConnPlugin directory; used by the engine management service to assemble the engine startup command and apply for resources
-    └── 1.2.1 # Engine version
-        └── linkis-engineplugin-hive-1.0.0-RC1.jar # Engine module package (only the standalone engine package needs to be placed here)
-```
-If you are adding a new engine, you can refer to hive's assembly configuration method, source code directory: linkis-engineconn-plugins/engineconn-plugins/hive/src/main/assembly/distribution.xml
-## 2. Engine Installation
-### 2.1 Plugin package installation
-1. First, confirm the dist directory of the engine: wds.linkis.engineconn.home (get the value of this parameter from ${LINKIS_HOME}/conf/linkis.properties). This parameter is used by EngineConnPluginServer to read the configuration files that the engine depends on and the third-party Jar packages. If the parameter wds.linkis.engineconn.dist.load.enable=true is set, the engines in this directory will be automatically read and loaded into the Linkis BML (material library).
-
-2. Second, confirm the engine Jar package directory:
-wds.linkis.engineconn.plugin.loader.store.path, which is used by EngineConnPluginServer to read the actual implementation Jars of the engines.
-
-It is highly recommended to set **wds.linkis.engineconn.home and wds.linkis.engineconn.plugin.loader.store.path** to the same directory, so that you can directly unzip the engine ZIP package exported by maven into this directory, for example, placing it in the ${LINKIS_HOME}/lib/linkis-engineconn-plugins directory.
-
-```
-${LINKIS_HOME}/lib/linkis-engineconn-plugins:
-└── hive
-    └── dist
-    └── plugin
-└── spark
-    └── dist
-    └── plugin
-```
-
-If the two parameters do not point to the same directory, you need to place the dist and plugin directories separately, as shown in the following example:
-
-```
-## dist directory
-${LINKIS_HOME}/lib/linkis-engineconn-plugins/dist:
-└── hive
-    └── dist
-└── spark
-    └── dist
-## plugin directory
-${LINKIS_HOME}/lib/linkis-engineconn-plugins/plugin:
-└── hive
-    └── plugin
-└── spark
-    └── plugin
-```
-### 2.2 Configuration modification of management console (optional)
-
-The configuration of the Linkis1.0 management console is managed according to engine labels. If a new engine has configuration parameters, you need to insert the corresponding configuration parameters in the Configuration module; the parameters need to be inserted into the following tables:
-
-```
-linkis_configuration_config_key: Insert the keys and default values of the engine's configuration parameters
-linkis_manager_label: Insert engine label such as hive-1.2.1
-linkis_configuration_category: Insert the catalog relationship of the engine
-linkis_configuration_config_value: Insert the configuration that the engine needs to display
-```
-
-If it is an existing engine and a new version is added, you can modify the version of the corresponding engine in the linkis_configuration_dml.sql file and execute it
-
-### 2.3 Engine refresh
-
-1. The engine supports real-time refresh. After the engine is placed in the corresponding directory, Linkis1.0 provides a way to load the engine without restarting the server: just send a request to the linkis-engineconn-plugin-server service through the restful interface, that is, to the actually deployed ip+port of the service. The request interface is http://ip:port/api/rest_j/v1/rpc/receiveAndReply, the request method is POST, and the request body is {"method":"/enginePlugin/engin [...]
-
-2. Restart refresh: the engine directory can be forcibly refreshed by restarting the service
-
-```
-## cd to the sbin directory and restart linkis-engineconn-plugin-server
-cd /Linkis1.0.0/sbin
-## Execute the linkis-daemon script
-sh linkis-daemon.sh restart linkis-engine-plugin-server
-```
-
-3. Check whether the engine refresh is successful: if you encounter problems during the refresh and need to confirm whether it succeeded, check whether the last_update_time of the linkis_engine_conn_plugin_bml_resources table in the database is the time when the refresh was triggered.
diff --git "a/Linkis-Doc-master/en_US/Deployment_Documents/Images/\345\210\206\345\270\203\345\274\217\351\203\250\347\275\262\345\276\256\346\234\215\345\212\241.png" "b/Linkis-Doc-master/en_US/Deployment_Documents/Images/\345\210\206\345\270\203\345\274\217\351\203\250\347\275\262\345\276\256\346\234\215\345\212\241.png"
deleted file mode 100644
index 8cd86c5..0000000
Binary files "a/Linkis-Doc-master/en_US/Deployment_Documents/Images/\345\210\206\345\270\203\345\274\217\351\203\250\347\275\262\345\276\256\346\234\215\345\212\241.png" and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Deployment_Documents/Installation_Hierarchical_Structure.md b/Linkis-Doc-master/en_US/Deployment_Documents/Installation_Hierarchical_Structure.md
deleted file mode 100644
index 3873f0a..0000000
--- a/Linkis-Doc-master/en_US/Deployment_Documents/Installation_Hierarchical_Structure.md
+++ /dev/null
@@ -1,198 +0,0 @@
-Installation directory structure
-============
-
-The directory structure of Linkis1.0 is very different from the 0.X version. In 0.X, each microservice had its own independent root directory. The main advantage of that structure is that it is easy to distinguish microservices and manage them individually, but there are some obvious problems:
-
-1. The microservice directories are too complicated, and switching between them for management is inconvenient
-2. There is no unified startup script, which makes starting and stopping microservices troublesome
-3. There are a large number of duplicated service configurations, and the same configuration often needs to be modified in many places
-4. There are a large number of repeated Lib dependencies, which increases the size of the installation package and the risk of dependency conflicts
-
-Therefore, in Linkis 1.0, we have greatly optimized and adjusted the installation directory structure: reducing the number of microservice directories, reducing the repeatedly depended-on jar packages, and reusing configuration files and microservice management scripts as much as possible. This is mainly reflected in the following aspects:
-
-1. The bin folder is no longer provided for each microservice; it is now shared by all microservices.
-> The bin folder has become the installation directory, mainly used to install Linkis1.0 and check the environment status. The new sbin directory provides one-click start and stop for Linkis, and independent start and stop for each microservice by changing parameters.
-
-2. A separate conf directory is no longer provided for each microservice; it is now shared by all microservices.
-> The conf folder contains two kinds of content: on the one hand, configuration information shared by all microservices, which users can customize according to their own environment; on the other hand, configuration specific to each microservice, which users normally do not need to change.
-
-3. The lib folder is no longer provided for each microservice; it is now shared by all microservices.
-> The lib folder also contains two kinds of content: the common dependencies required by all microservices, and the special dependencies required by each microservice.
-
-4. The log directory is no longer provided for each microservice; it is now shared by all microservices.
-> The log directory contains the log files of all microservices.
-
-The simplified directory structure of Linkis1.0 is as follows.
-
-````
-├── bin ──installation directory
-│ ├── checkEnv.sh ── Environmental variable detection
-│ ├── checkServices.sh ── Microservice status check
-│ ├── common.sh ── Some public shell functions
-│ ├── install-io.sh ── Used for dependency replacement during installation
-│ └── install.sh ── Main script of Linkis installation
-├── conf ──configuration directory
-│ ├── application-eureka.yml 
-│ ├── application-linkis.yml    ──Microservice general yml
-│ ├── linkis-cg-engineconnmanager-io.properties
-│ ├── linkis-cg-engineconnmanager.properties
-│ ├── linkis-cg-engineplugin.properties
-│ ├── linkis-cg-entrance.properties
-│ ├── linkis-cg-linkismanager.properties
-│ ├── linkis-computation-governance
-│ │   └── linkis-client
-│ │       └── linkis-cli
-│ │           ├── linkis-cli.properties
-│ │           └── log4j2.xml
-│ ├── linkis-env.sh   ──linkis environment properties
-│ ├── linkis-et-validator.properties
-│ ├── linkis-mg-gateway.properties
-│ ├── linkis.properties  ──linkis global properties
-│ ├── linkis-ps-bml.properties
-│ ├── linkis-ps-cs.properties
-│ ├── linkis-ps-datasource.properties
-│ ├── linkis-ps-publicservice.properties
-│ ├── log4j2.xml
-│ ├── proxy.properties(Optional)
-│ └── token.properties(Optional)
-├── db ──database DML and DDL file directory
-│ ├── linkis_ddl.sql ──Database table definition SQL
-│ ├── linkis_dml.sql ──Database table initialization SQL
-│ └── module ──Contains DML and DDL files of each microservice
-├── lib ──lib directory
-│ ├── linkis-commons ──Common dependency package
-│ ├── linkis-computation-governance ──The lib directory of the computing governance module
-│ ├── linkis-engineconn-plugins ──lib directory of all EngineConnPlugins
-│ ├── linkis-public-enhancements ──lib directory of public enhancement services
-│ └── linkis-spring-cloud-services ──SpringCloud lib directory
-├── logs ──log directory
-│ ├── linkis-cg-engineconnmanager-gc.log
-│ ├── linkis-cg-engineconnmanager.log
-│ ├── linkis-cg-engineconnmanager.out
-│ ├── linkis-cg-engineplugin-gc.log
-│ ├── linkis-cg-engineplugin.log
-│ ├── linkis-cg-engineplugin.out
-│ ├── linkis-cg-entrance-gc.log
-│ ├── linkis-cg-entrance.log
-│ ├── linkis-cg-entrance.out
-│ ├── linkis-cg-linkismanager-gc.log
-│ ├── linkis-cg-linkismanager.log
-│ ├── linkis-cg-linkismanager.out
-│ ├── linkis-et-validator-gc.log
-│ ├── linkis-et-validator.log
-│ ├── linkis-et-validator.out
-│ ├── linkis-mg-eureka-gc.log
-│ ├── linkis-mg-eureka.log
-│ ├── linkis-mg-eureka.out
-│ ├── linkis-mg-gateway-gc.log
-│ ├── linkis-mg-gateway.log
-│ ├── linkis-mg-gateway.out
-│ ├── linkis-ps-bml-gc.log
-│ ├── linkis-ps-bml.log
-│ ├── linkis-ps-bml.out
-│ ├── linkis-ps-cs-gc.log
-│ ├── linkis-ps-cs.log
-│ ├── linkis-ps-cs.out
-│ ├── linkis-ps-datasource-gc.log
-│ ├── linkis-ps-datasource.log
-│ ├── linkis-ps-datasource.out
-│ ├── linkis-ps-publicservice-gc.log
-│ ├── linkis-ps-publicservice.log
-│ └── linkis-ps-publicservice.out
-├── pid ──Process ID of all microservices
-│ ├── linkis_cg-engineconnmanager.pid ──EngineConnManager microservice
-│ ├── linkis_cg-engineconnplugin.pid ──EngineConnPlugin microservice
-│ ├── linkis_cg-entrance.pid ──Engine entrance microservice
-│ ├── linkis_cg-linkismanager.pid ──linkis manager microservice
-│ ├── linkis_mg-eureka.pid ──eureka microservice
-│ ├── linkis_mg-gateway.pid ──gateway microservice
-│ ├── linkis_ps-bml.pid ──material library microservice
-│ ├── linkis_ps-cs.pid ──Context microservice
-│ ├── linkis_ps-datasource.pid ──Data source microservice
-│ └── linkis_ps-publicservice.pid ──public microservice
-└── sbin ──microservice start and stop script directory
-    ├── ext ──Start and stop script directory of each microservice
-    ├── linkis-daemon.sh ── Quick start and stop, restart a single microservice script
-    ├── linkis-start-all.sh ── Start all microservice scripts with one click
-    └── linkis-stop-all.sh ── Stop all microservice scripts with one click
-````
-
-# Configuration item modification
-
-After executing install.sh in the bin directory to complete the Linkis installation, you need to modify the configuration items. All configuration items are located in the conf directory. Normally, you need to modify the three configuration files db.sh, linkis.properties, and linkis-env.sh. For project installation and configuration, please refer to the article "Linkis1.0 Installation".
-
-# Microservice start and stop
-
-After modifying the configuration items, you can start the microservice in the sbin directory. The names of all microservices are as follows:
-
-````
-├── linkis-cg-engineconnmanager  ──engine management service
-├── linkis-cg-engineplugin  ──EngineConnPlugin management service
-├── linkis-cg-entrance  ──computing governance entrance service
-├── linkis-cg-linkismanager  ──computing governance management service
-├── linkis-mg-eureka  ──microservice registry service
-├── linkis-mg-gateway  ──Linkis gateway service
-├── linkis-ps-bml  ──material library service
-├── linkis-ps-cs  ──context service
-├── linkis-ps-datasource  ──data source service
-└── linkis-ps-publicservice  ──public service
-````
-**Microservice abbreviation**:
-
-| Abbreviation | Full Name |
-|------|-------------------------|
-| cg | Computation Governance |
-| mg | Microservice Governance |
-| ps | Public Enhancement Service |
-
-In the past, to start and stop a single microservice, you had to enter the bin directory of each microservice and execute its start/stop script. When there are many microservices, starting and stopping them is troublesome and adds a lot of extra directory switching. Linkis1.0 places all scripts related to starting and stopping microservices in the sbin directory, so only a single entry script needs to be executed.
-
-**Under the Linkis/sbin directory**:
-
-1.Start all microservices at once:
-
-````
-sh linkis-start-all.sh
-````
-
-2.Shut down all microservices at once
-
-````
-sh linkis-stop-all.sh
-````
-
-3.Start a single microservice (use the service name with the linkis- prefix removed, e.g. mg-eureka)
-````
-sh linkis-daemon.sh start service-name
-````
-For example: 
-````
-sh linkis-daemon.sh start mg-eureka
-````
-
-4.Shut down a single microservice
-````
-sh linkis-daemon.sh stop service-name
-````
-For example: 
-````
-sh linkis-daemon.sh stop mg-eureka
-````
-
-5.Restart a single microservice
-````
-sh linkis-daemon.sh restart service-name
-````
-For example: 
-````
-sh linkis-daemon.sh restart mg-eureka
-````
-
-6.View the status of a single microservice
-````
-sh linkis-daemon.sh status service-name
-````
-For example: 
-````
-sh linkis-daemon.sh status mg-eureka
-````
diff --git a/Linkis-Doc-master/en_US/Deployment_Documents/Quick_Deploy_Linkis1.0.md b/Linkis-Doc-master/en_US/Deployment_Documents/Quick_Deploy_Linkis1.0.md
deleted file mode 100644
index b74dbd9..0000000
--- a/Linkis-Doc-master/en_US/Deployment_Documents/Quick_Deploy_Linkis1.0.md
+++ /dev/null
@@ -1,246 +0,0 @@
-# Linkis1.0 Deployment document
-
-## Notes
-
-If you are new to Linkis, you can ignore this chapter, however, if you are already a Linkis user,  we recommend you reading the following article before installing or upgrading: [Brief introduction of the difference between Linkis1.0 and Linkis0.X](https://github.com/WeBankFinTech/Linkis-Doc/blob/master/en_US/Architecture_Documents/DifferenceBetween1.0%260.x.md).
-
-Please note: apart from the four EngineConnPlugins included in the Linkis1.0 installation package by default (Python/Shell/Hive/Spark), you can manually install other types of engines such as JDBC depending on your own needs. For details, please refer to the EngineConnPlugin installation documents.
-
-Engines that Linkis1.0 has adapted by default are listed below:
-
-| Engine Type   | Adaptation Situation   | Included in official installation package |
-| ------------- | ---------------------- | ----------------------------------------- |
-| Python        | Adapted in 1.0         | Included                                  |
-| JDBC          | Adapted in 1.0         | **Not Included**                          |
-| Shell         | Adapted in 1.0         | Included                                  |
-| Hive          | Adapted in 1.0         | Included                                  |
-| Spark         | Adapted in 1.0         | Included                                  |
-| Pipeline      | Adapted in 1.0         | **Not Included**                          |
-| Presto        | **Not adapted in 1.0** | **Not Included**                          |
-| ElasticSearch | **Not adapted in 1.0** | **Not Included**                          |
-| Impala        | **Not adapted in 1.0** | **Not Included**                          |
-| MLSQL         | **Not adapted in 1.0** | **Not Included**                          |
-| TiSpark       | **Not adapted in 1.0** | **Not Included**                          |
-
-## 1. Determine your installation environment 
-
-The following is the dependency information for each engine.
-
-| Engine Type | Dependency                  | Special Instructions                                         |
-| ----------- | --------------------------- | ------------------------------------------------------------ |
-| Python      | Python Environment          | If the path of logs and result sets are configured as hdfs://, then the HDFS environment is needed. |
-| JDBC        | No dependency               | If the path of logs and result sets are configured as hdfs://, then the HDFS environment is needed. |
-| Shell       | No dependency               | If the path of logs and result sets are configured as hdfs://, then the HDFS environment is needed. |
-| Hive        | Hadoop and Hive Environment |                                                              |
-| Spark       | Hadoop/Hive/Spark           |                                                              |
-
-**Requirement: at least 3G of memory is required to install Linkis.**
-
-The default JVM heap memory of each microservice is 512M, and the heap memory of all microservices can be adjusted uniformly by modifying `SERVER_HEAP_SIZE`. If your machine has limited resources, we suggest setting this parameter to 128M, as follows:
-
-```bash
-    vim ${LINKIS_HOME}/config/linkis-env.sh
-```
-
-```bash
-    # java application default jvm memory.
-    export SERVER_HEAP_SIZE="128M"
-```
-
-----
-
-## 2. Linkis environment preparation
-
-### a. Fundamental software installation
-
-The following software must be installed:
-
-- MySQL (5.5+); see How to install MySQL
-- JDK (1.8.0_141 or higher); see How to install JDK
-
-### b. Create user
-
-For example: **The deploy user is hadoop**.
-
-1. Create a deploy user on the machine for installation.
-
-```bash
-    sudo useradd hadoop  
-```
-
-2. Since the services of Linkis use  sudo -u {linux-user} to switch engines to execute jobs, the deploy user must have passwordless sudo permission.
-
-```bash
-    vi /etc/sudoers
-```
-
-```text
-    hadoop  ALL=(ALL)       NOPASSWD: ALL
-```
-
-3. **Set the following global environment variables on each installation node so that Linkis can use Hadoop, Hive and Spark.**
-
-   Modify the .bash_rc of the deploy user, the command is as follows:
-
-```bash     
-    vim /home/hadoop/.bash_rc ##Take the deploy user hadoop as an example.
-```
-
-The following is an example of setting environment variables:
-
-```bash
-    #JDK
-    export JAVA_HOME=/nemo/jdk1.8.0_141
-
-    ##If you do not use Hive, Spark or other engines and do not rely on Hadoop as well, then there is no need to modify the following environment variables.
-    #HADOOP  
-    export HADOOP_HOME=/appcom/Install/hadoop
-    export HADOOP_CONF_DIR=/appcom/config/hadoop-config
-    #Hive
-    export HIVE_HOME=/appcom/Install/hive
-    export HIVE_CONF_DIR=/appcom/config/hive-config
-    #Spark
-    export SPARK_HOME=/appcom/Install/spark
-    export SPARK_CONF_DIR=/appcom/config/spark-config/spark-submit
-    export PYSPARK_ALLOW_INSECURE_GATEWAY=1  # This parameter must be added for Pyspark
-```
-
-4. **If you want to equip your Pyspark and Python with drawing functions, you need to install the drawing module on each installation node**. The command is as follows:
-
-```bash
-    python -m pip install matplotlib
-```
-
-### c. Preparing installation package
-
-Download the latest installation package from the Linkis release. ([Click here to enter the download page](https://github.com/WeBankFinTech/Linkis/releases))
-
-Decompress the installation package to the installation directory and modify the configuration of the decompressed file.
-
-```bash   
-    tar -xvf  wedatasphere-linkis-x.x.x-combined-package-dist.tar.gz
-```
-
-### d. Basic configuration modification(Do not rely on HDFS)
-
-```bash
-    vi config/linkis-env.sh
-```
-
-```properties
-
-    #SSH_PORT=22        #Specify SSH port. No need to configure if the stand-alone version is installed
-    deployUser=hadoop      #Specify deploy user
-    LINKIS_INSTALL_HOME=/appcom/Install/Linkis    # Specify installation directory.
-    WORKSPACE_USER_ROOT_PATH=file:///tmp/hadoop    # Specify user root directory. Generally used to store user's script and log files, it's user's workspace. 
-    RESULT_SET_ROOT_PATH=file:///tmp/linkis   # The result set file path, used to store the result set files of the Job.
-    ENGINECONN_ROOT_PATH=/appcom/tmp #The installation path of ECP. A local directory where the deploy user has write permission.
-    ENTRANCE_CONFIG_LOG_PATH=file:///tmp/linkis/  #Entrance's log path
-
-    ## LDAP configuration. Linkis only supports deploy user login by default, you need to configure the following parameters to support multi-user login.
-    #LDAP_URL=ldap://localhost:1389/ 
-    #LDAP_BASEDN=dc=webank,dc=com
-```
-
-### e. Basic configuration modification(Rely on HDFS/Hive/Spark)
-
-```bash
-     vi config/linkis-env.sh
-```
-
-```properties
-    SSH_PORT=22       #Specify SSH port. No need to configure if the stand-alone version is installed
-    deployUser=hadoop      #Specify deploy user
-    WORKSPACE_USER_ROOT_PATH=file:///tmp/hadoop     #Specify user root directory. Generally used to store user's script and log files, it's user's workspace.
-    RESULT_SET_ROOT_PATH=hdfs:///tmp/linkis   # The result set file path, used to store the result set files of the Job.
-    ENGINECONN_ROOT_PATH=/appcom/tmp #The installation path of ECP. A local directory where the deploy user has write permission.
-    ENTRANCE_CONFIG_LOG_PATH=hdfs:///tmp/linkis/  #Entrance's log path
-
-    #1.0 supports multi-Yarn clusters, therefore, YARN_RESTFUL_URL must be configured
-    YARN_RESTFUL_URL=http://127.0.0.1:8088  #URL of Yarn's ResourceManager
-
-    # If you want to use it with Scriptis, for CDH version of hive, you need to set the following parameters.(For the community version of Hive, you can leave out the following configuration.)
-    HIVE_META_URL=jdbc://...   #URL of Hive metadata database
-    HIVE_META_USER=   # username of the Hive metadata database 
-    HIVE_META_PASSWORD=    # password of the Hive metadata database
-    
-    # set the conf directory of hadoop/hive/spark
-    HADOOP_CONF_DIR=/appcom/config/hadoop-config  #hadoop's conf directory
-    HIVE_CONF_DIR=/appcom/config/hive-config   #hive's conf directory
-    SPARK_CONF_DIR=/appcom/config/spark-config #spark's conf directory
-
-    ## LDAP configuration. Linkis only supports deploy user login by default, you need to configure the following parameters to support multi-user login.
-    #LDAP_URL=ldap://localhost:1389/ 
-    #LDAP_BASEDN=dc=webank,dc=com
-    
-    ##If your spark version is not 2.4.3, you need to modify the following parameter:
-    #SPARK_VERSION=3.1.1
-
-    ##If your hive version is not 1.2.1, you need to modify the following parameter:
-    #HIVE_VERSION=2.3.3
-```
-
-### f. Modify the database configuration
-
-```bash   
-    vi config/db.sh 
-```
-
-```properties    
-
-    # set the connection information of the database
-    # including ip address, database's name, username and port
-    # Mainly used to store user's customized variables, configuration parameters, UDFs, and small functions, and to provide underlying storage for the JobHistory.
-    MYSQL_HOST=
-    MYSQL_PORT=
-    MYSQL_DB=
-    MYSQL_USER=
-    MYSQL_PASSWORD=
-```
-
-## 3. Installation and Startup
-
-### 1. Execute the installation script:
-
-```bash
-    sh bin/install.sh
-```
-
-### 2. Installation steps
-
-- The install.sh script will ask you whether to initialize the database and import the metadata. 
-
-A user might run the install.sh script repeatedly and accidentally clear all data in the database. Therefore, each time install.sh is executed, the user will be asked whether they need to initialize the database and import the metadata.
-
-Please select yes on the **first installation**.
-
-**Please note: If you are upgrading the existing environment of Linkis from 0.X to 1.0, please do not choose yes directly,  refer to Linkis1.0 Upgrade Guide first.**
-
-### 3. Check whether the installation was successful
-
-You can check whether the installation is successful or not by viewing the logs printed on the console. 
-
-If there is an error message, check the specific reason for that error or refer to FAQ for help.
-
-### 4. Linkis quick startup
-
-(1). Start services
-
-Run the following commands on the installation directory to start all services.
-
-```bash  
-  sh sbin/linkis-start-all.sh
-```
-
-(2). Check if the services started successfully
-
-You can check the startup status of the services on the Eureka, here is the way to check:
-
-Open http://${EUREKA_INSTALL_IP}:${EUREKA_PORT} on the browser and check if services have registered successfully. 
-
-If you have not specified EUREKA_INSTALL_IP and EUREKA_PORT in config.sh, then the HTTP address is http://127.0.0.1:20303
-
-As shown in the figure below, if all of the following microservices are registered on the Eureka page, it means that they've started successfully and are able to work.
-
-![Linkis1.0_Eureka](../Images/deployment/Linkis1.0_combined_eureka.png)
-
diff --git a/Linkis-Doc-master/en_US/Development_Documents/Contributing.md b/Linkis-Doc-master/en_US/Development_Documents/Contributing.md
deleted file mode 100644
index 28ea896..0000000
--- a/Linkis-Doc-master/en_US/Development_Documents/Contributing.md
+++ /dev/null
@@ -1,195 +0,0 @@
-# Contributing
-
-Thank you very much for contributing to the Linkis project! Before participating in the contribution, please read the following guidelines carefully.
-
-## 1. Contribution category
-
-### 1.1 Bug feedback and fix
-
-We suggest that whether it is bug feedback or a fix, an issue should be created first to describe the bug in detail, so that the community can find and review the issue and related code through the issue record. Bug feedback issues usually need to include a complete
-**bug description** and a reproducible scenario, so that the community can quickly locate the cause of the bug and fix it. Open issues that contain the #bug label all need to be fixed.
-
-### 1.2 Functional communication, implementation and refactoring
-
-In the communication process, please elaborate the details, mechanisms and using scenarios of the new function(or refactoring). This can promote the function(or refactoring) to be implemented better and faster.
-If you plan to implement a major feature (or refactoring), be sure to communicate with the team through **Issue** or other methods, so that everyone can move forward in the most efficient way. An open issue containing the #feature tag means that there is a new function to be implemented, and an open issue containing the #enhancement tag means that refactoring or improvement is needed.
-
-
-### 1.3 Issue Q&A
-
-Helping to answer the usage questions in Issues is a very valuable way to contribute to the Linkis community. There will always be new users coming in, and while helping them, you can also show your expertise.
-
-### 1.4 Documentation improvements
-
-Linkis user manual documents are maintained in the Linkis-Doc project on GitHub; you can edit the markdown files in the project and improve the documentation by submitting a PR.
-
-## 2. Contribution process
-
-### 2.1 Branch structure
-
-The Linkis source code may contain some temporary branches, but only the following three branches are really meaningful:
-
-```
-master: The source code of the last stable release, which may occasionally have several hotfix submissions
-branch-0.10.0: The latest stable version
-dev-1.0.0: Main development branch
-```
-
-### 2.2 Development Guidelines
-
-Linkis front-end and back-end code share the same code repository, but they are developed separately. Before starting development, please fork the Linkis project to your own GitHub repositories and develop based on your fork.
-
-We recommend cloning the dev-1.0.0 branch for development, so there will be far fewer merge conflicts when submitting a PR to the main Linkis project:
-
-```
-git clone https://github.com/yourname/Linkis.git --branch dev-1.0.0
-```
-
-#### 2.2.1 Backend
-
-The user configuration is under the project root directory /config/, the project startup script and the upgrade patch script are under the project root directory /bin/.
-The back-end code and core configuration are in the server/ directory, and the log is in the project root directory /log/. 
-The root directory of the project mentioned here refers to the directory configured by the environment variable LINKIS_HOME, and the environment variable needs to be configured during the development of the IDE.
-For example, in IDEA the priority of environment variable loading from high to low is: environment variables configured in Run/Debug Configurations -> system environment variables cached by the IDE.
-
-**2.2.1.1** Directory structure
-
-1. Script
-
-```
-├── assembly-package/bin # script directory
- ├── install.sh # One-click deployment script
- ├── checkEnv.sh # Environment check script
- └── common.sh # Common script functions
-├── sbin # script directory
- ├── linkis-daemon.sh # Start/stop and status detection script for a single service
- ├── linkis-start-all.sh # One-click start script
- ├── linkis-stop-all.sh # One-click stop script
- └── ext # Separate service script directory
-    ├── linkis-xxx.sh # The startup script of a service
-    ├── linkis-xxx.sh
-    ├── ...
-```
-
-2. Configuration
-
-```
-├── assembly-package/config # User configuration directory
- ├── linkis-env.sh # Configuration variable settings for one-click deployment
- ├── db.sh # Database configuration for one-click deployment
-```
-
-3. Code directory structure
-
-See the Linkis code directory structure for details.
-
-4. Log directory
-
-```
-├── logs # log root directory
-```
-**2.2.1.2** Environment variables
-
-
-Configure the system environment variable or IDE environment variable LINKIS_HOME; it is recommended to use the IDE environment variable first.
-**2.2.1.3** Database
-
-1. Create the Linkis system database by yourself;
-2. Modify the database connection information in conf/db.sh and execute bin/install.sh, or import db/linkis_*.sql directly with a database client.
-
-**2.2.1.4** Configuration file
-
-Modify the application-linkis.yml file in the conf directory and the properties file corresponding to each microservice name to configure related properties.
-
-**2.2.1.5** Packaging
-
-1. To package the whole project, modify the version in /assembly/src/main/assembly/assembly.xml in the root directory, and then execute `mvn clean package` in the root directory;
-2. To package a single module, simply run `mvn clean package` directly in that module.
-### 2.3 Pull Request Guidelines
-
-#### If you still don’t know how to initiate a PR to an open source project, please refer to this description
-
-```
-Whether it is bug fixes or new feature development, please submit a PR to the dev-1.0.0 branch.
-PR and submission names follow the principle of <type>(<scope>): <subject>. For details, please refer to Ruan Yifeng's article [Commit message and Change log Compilation Guide](http://www.ruanyifeng.com/blog/2016/01/commit_message_change_log.html).
-If the PR contains new features, the document update should be included in this PR.
-If this PR is not ready to merge, please add the [WIP] prefix to the head of the name (WIP = work-in-progress).
-All submissions to the dev-1.0.0 branch need to go through at least one review before they can be merged
-```
-### 2.4 Review Standard
-
-Before contributing code, you can find out what kind of submissions are popular in review. Simply put, if a submission brings as many gains as possible and as few side effects or risks as possible, it will be reviewed and merged first. Submissions with high risk and low value are unlikely to be merged, and may be rejected without even a chance of review.
-
-**2.4.1** Gain
-
-```
-Fix the main cause of the bug
-Add or fix a feature or problem that a large number of users urgently need
-Simple and effective
-Easy to test, with test cases
-Reduce complexity and amount of code
-Issues that have been discussed by the community and identified for improvement
-```
-
-
-**2.4.2** Side effects and risks
-
-```
-Only fix the surface phenomenon of the bug
-Introduce new features with high complexity
-Add complexity to meet niche needs
-Change stable existing API or semantics
-Cause other functions to not operate normally
-Add a lot of dependencies
-Change the dependency version at will
-Submit a large amount of code or changes at once
-```
-**2.4.3 Reviewer** Note
-
-```
-Please use a constructive tone to write comments
-If you need to make changes by the submitter, please clearly state all the content that needs to be modified to complete the Pull Request
-If a PR is found to have introduced new problems after merging, the Reviewer needs to contact the PR author and communicate to resolve the problem
-If the PR author cannot be contacted, the Reviewer needs to revert the PR
-```
-## 3. Advanced contribution
-
-### 3.1 About Committers (Collaborators)
-
-**3.1.1** How to become a **committer**
-
-If you have had a valuable PR for the Linkis code and it has been merged, you can apply to become a Committer of the Linkis project by contacting the core development
-team through the official WeChat group; the core development team and other Committers will vote together to decide whether to allow you to join. If you get enough votes, you will become a Committer of the Linkis project.
-
-**3.1.2 Committer** Rights
-
-```
-You can join the official developer WeChat group, participate in discussions and make development plans
-Can manage Issues, including closing and adding tags
-Can create and manage project branches, except for master and dev-1.0.0 branches
-Can review the PR submitted to the dev-1.0.0 branch
-Can apply to be a member of Committee
-```
-### 3.2 About Committee
-
-**3.2.1** How to become a **Committee** member
-
-
-If you are a Committer of the Linkis project and all your contributions have been recognized by other Committee members, you can apply to be a member of the Linkis Committee; other Committee members will vote together to decide whether to allow you to join, and if unanimously approved, you will become a member of the Linkis Committee.
-
-**3.2.2** Committee members' rights
-
-```
-You can merge PRs submitted by other Committers and contributors to the dev-1.0.0 branch
-```
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/API.md b/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/API.md
deleted file mode 100644
index f91f8ba..0000000
--- a/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/API.md
+++ /dev/null
@@ -1,143 +0,0 @@
- > When a Contributor contributes new RESTful interfaces to Linkis, the following interface specifications must be followed for interface development.
-
-
-
-## 1. HTTP or WebSocket ?
-
-
-
-Linkis currently provides two interfaces: HTTP and WebSocket.
-
-
-
-WebSocket advantages over HTTP:
-
-
-
-- Less stress on the server
-
-- More timely information push
-
-- Interactivity is more friendly
-
-
-
-Correspondingly, WebSocket has the following disadvantages:
-
-
-
-- The WebSocket may be disconnected while using
-
-- Higher technical requirements on the front end
-
-- It is generally required to have a front-end degradation handling mechanism
-
-
-
-**We generally strongly recommend that Contributors avoid providing interfaces using WebSocket unless it is really necessary;**
-
-
-
-**If you think it is necessary to use WebSocket and are willing to contribute the developed functions to Linkis, we suggest you communicate with us before the development, thank you!**
-
-
-
-## 2. URL specification
-
-
-
-```
-
-/api/rest_j/v1/{applicationName}/.+
-
-/api/rest_s/v1/{applicationName}/.+
-
-```
-
-
-
-**Convention** :
-
-
-
-- rest_j indicates that the interface complies with the Jersey specification
-
-- rest_s indicates that the interface complies with the SpringMVC REST specification
-
-- v1 is the version number of the service. **The version number will be updated with the Linkis version.**
-
-- {applicationName} is the name of the micro-service
-
-
-
-## 3. Interface request format
-
-
-
-```json
-
-{
-
-"method":"/api/rest_j/v1/entrance/execute",
-
-"data":{},
-
-"WebsocketTag" : "37 fcbd8b762d465a0c870684a0261c6e" / / WebSocket requests require this parameter, HTTP requests can ignore
-
-}
-
-```
-
-
-
-**Convention** :
-
-
-
-- method: The requested RESTful API URL.
-
-- data: The specific data requested.
-
-- WebSocketTag: The unique identity of a WebSocket request. This parameter is also returned by the back end for the front end to identify.
-
-
-
-## 4. Interface response format
-
-
-
-```json
-
-{" method ":"/API/rest_j/v1 / project/create ", "status" : 0, "message" : "creating success!" ,"data":{}}
-
-```
-
-
-
-**Convention** :
-
-
-
-- method: Returns the requested RESTful API URL, mainly for the WebSocket mode.
-
-- status: Returns status information, where: -1 means not logged in, 0 means success, 1 means error, 2 means failed validation, and 3 means no access to the interface.
-
-- data: Returns the specific data.
-
-- message: Returns the prompt message of the request. If status is not 0, message returns an error message, and data may contain a stack trace field with the specific stack information.
-
-
-
-In addition, different status values correspond to different HTTP status codes; under normal circumstances:
-
-
-
-- When status is 0, the HTTP status code is 200
-
-- When the status is -1, the HTTP status code is 401
-
-- When status is 1, the HTTP status code is 400
-
-- When status is 2, the HTTP status code is 412
-
-- When status is 3, the HTTP status code is 403
\ No newline at end of file
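-
-As an illustration of the conventions above, here is a minimal Java sketch of a client that posts a request in the section 3 format and maps the HTTP status codes back to the status values in section 4. It is illustrative only and not part of Linkis: the gateway address and port are assumptions, the "data" payload is left empty, and login/cookie handling is omitted.
-
-```java
-import java.net.URI;
-import java.net.http.HttpClient;
-import java.net.http.HttpRequest;
-import java.net.http.HttpResponse;
-
-public class LinkisApiExample {
-    public static void main(String[] args) throws Exception {
-        // Request body following the section 3 convention; HTTP requests may omit WebSocketTag.
-        String body = "{\"method\":\"/api/rest_j/v1/entrance/execute\",\"data\":{}}";
-        HttpRequest request = HttpRequest.newBuilder()
-                .uri(URI.create("http://127.0.0.1:9001/api/rest_j/v1/entrance/execute")) // assumed gateway address
-                .header("Content-Type", "application/json")
-                .POST(HttpRequest.BodyPublishers.ofString(body))
-                .build();
-        HttpResponse<String> response = HttpClient.newHttpClient()
-                .send(request, HttpResponse.BodyHandlers.ofString());
-        int code = response.statusCode();
-        if (code == 200) {
-            System.out.println("Success, body status should be 0: " + response.body());
-        } else if (code == 401) {
-            System.out.println("Status -1: not logged in");
-        } else {
-            System.out.println("HTTP " + code + ": " + response.body());
-        }
-    }
-}
-```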
diff --git a/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/Concurrent.md b/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/Concurrent.md
deleted file mode 100644
index 8adf0d0..0000000
--- a/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/Concurrent.md
+++ /dev/null
@@ -1,17 +0,0 @@
-1. [**Compulsory**] Make sure getting a singleton object to be thread-safe. Operating inside singletons should also be kept thread-safe.
-
-
-
-2. [**Compulsory**] Thread resources must be provided through the thread pool, and it is not allowed to explicitly create threads in the application.
-
-
-
-3. SimpleDateFormat is a thread-unsafe class. It is recommended to use the DateUtils utility class instead.
-
-
-
-4. [**Compulsory**] At high concurrency, synchronous calls should consider the performance cost of locking. If you can use lockless data structures, don't use locks. If you can lock blocks, don't lock the whole method body. If you can use object locks, don't use class locks.
-
-
-
-5. [**Compulsory**] Use ThreadLocal as little as possible. Whenever a ThreadLocal holds an object that needs to be closed, remember to close it to release the resource.
\ No newline at end of file
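-
-A minimal Java sketch of rules 1, 2 and 5 above (illustrative only; all names are invented): a thread-safe singleton that submits tasks through an explicitly configured thread pool and releases its ThreadLocal entry after each task.
-
-```java
-import java.util.concurrent.ArrayBlockingQueue;
-import java.util.concurrent.ThreadPoolExecutor;
-import java.util.concurrent.TimeUnit;
-
-public class TaskRunner {
-    // Rule 1: the holder idiom makes singleton initialization thread-safe.
-    private static class Holder {
-        static final TaskRunner INSTANCE = new TaskRunner();
-    }
-    public static TaskRunner getInstance() { return Holder.INSTANCE; }
-
-    // Rule 2: threads come from an explicitly sized pool, never `new Thread(...)`.
-    private final ThreadPoolExecutor pool = new ThreadPoolExecutor(
-            4, 8, 60L, TimeUnit.SECONDS, new ArrayBlockingQueue<>(100));
-
-    private static final ThreadLocal<StringBuilder> BUFFER =
-            ThreadLocal.withInitial(StringBuilder::new);
-
-    public void submit(Runnable task) {
-        pool.execute(() -> {
-            try {
-                task.run();
-            } finally {
-                BUFFER.remove(); // Rule 5: release the ThreadLocal entry when done.
-            }
-        });
-    }
-}
-```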
diff --git a/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/Exception_Catch.md b/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/Exception_Catch.md
deleted file mode 100644
index b1a0030..0000000
--- a/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/Exception_Catch.md
+++ /dev/null
@@ -1,9 +0,0 @@
-1. [**Mandatory**] For the exception of each small module, a special exception class should be defined to facilitate the subsequent generation of error codes for users. It is not allowed to throw any RuntimeException or directly throw Exception.
-
-2. Try not to try-catch a large section of code. This is irresponsible. Please distinguish between stable code and non-stable code when catching. Stable code refers to code that will not go wrong anyway. For the catch of unstable code, try to distinguish the exception types as much as possible, and then do the corresponding exception handling.
-
-3. [**Mandatory**] The purpose of catching an exception is to handle it. Don't throw it away without handling it. If you don't want to handle it, please throw the exception to its caller. Note: Do not use e.printStackTrace() under any circumstances! The outermost business users must deal with exceptions and turn them into content that users can understand.
-
-4. The finally block must close resource objects and stream objects, wrapping the close calls in their own try-catch if they may throw.
-
-5. [**Mandatory**] Prevent NullPointerException. The return value of the method can be null, and it is not mandatory to return an empty collection, or an empty object, etc., but a comment must be added to fully explain under what circumstances the null value will be returned. RPC and SpringCloud Feign calls all require non-empty judgments.
\ No newline at end of file
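-
-A minimal Java sketch of the rules above (illustrative only, not Linkis source; all names are invented): a module-specific exception class, catches that distinguish exception types instead of swallowing them, and a finally block that closes the resource with its own try-catch.
-
-```java
-import java.io.BufferedReader;
-import java.io.FileNotFoundException;
-import java.io.FileReader;
-import java.io.IOException;
-
-public class ResultSetReader {
-    /** Rule 1: a dedicated exception class for the module. */
-    public static class ResultSetException extends Exception {
-        public ResultSetException(String message, Throwable cause) {
-            super(message, cause);
-        }
-    }
-
-    public String readFirstLine(String path) throws ResultSetException {
-        BufferedReader reader = null;
-        try {
-            reader = new BufferedReader(new FileReader(path));
-            return reader.readLine();
-        } catch (FileNotFoundException e) {
-            // Rule 2: distinguish exception types instead of one big catch.
-            throw new ResultSetException("Result set file not found: " + path, e);
-        } catch (IOException e) {
-            // Rule 3: handle or rethrow to the caller; never e.printStackTrace().
-            throw new ResultSetException("Failed to read result set: " + path, e);
-        } finally {
-            // Rule 4: close the resource in finally, with its own try-catch.
-            if (reader != null) {
-                try {
-                    reader.close();
-                } catch (IOException ignored) {
-                    // a failure on close must not mask the original error
-                }
-            }
-        }
-    }
-}
-```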
diff --git a/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/Exception_Throws.md b/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/Exception_Throws.md
deleted file mode 100644
index ac8ed72..0000000
--- a/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/Exception_Throws.md
+++ /dev/null
@@ -1,52 +0,0 @@
-## How to define a new exception?
-
-
-
-- Customized exceptions must inherit one of LinkisRetryException, WarnException, ErrorException, or FatalException
-
-
-
-- Customized exceptions must contain error codes and error descriptions. If necessary, the IP address and process port where the exception occurred can also be encapsulated in the exception
-
-
-
-- Be careful with WarnException! If an exception thrown as a WarnException is caught in a RESTful or RPC Receiver, it does not return a failure to the front end or the sender, but only returns a warning message!
-
-
-
-- WarnException has an exception level of 1, ErrorException has an exception level of 2, FatalException has an exception level of 3, and LinkisRetryException has an exception level of 4
-
-
-
-| exception class| service |  error code  | error description|
-|:----  |:---   |:---   |:---   |
-| LinkisException | common | None | top level parent class inherited from the Exception, does not allow direct inheritance |
-| LinkisRuntimeException | common | None | top level parent class, inherited from RuntimeException, does not allow direct inheritance |
-| WarnException | common | None | secondary level parent classes, inherit from LinkisRuntimeException. Warn level exception, must inherit this class directly or indirectly |
-| ErrorException | common | None | secondary level parent classes, inherited from LinkisException. Error exception, must inherit this class directly or indirectly |
-| FatalException | common | None | secondary level parent classes, inherited from LinkisException. Fatal level exception, must inherit this class directly or indirectly |
-| LinkisRetryException | common | None | secondary level parent classes, inherited from LinkisException. Retryable exceptions, must inherit this class directly or indirectly |
-
-
-
-## Module exception specification
-
-
-
-linkis-commons:10000-11000
-
-linkis-computation-governance:11000-12000
-
-linkis-engineconn-plugins:12000-13000
-
-linkis-orchestrator:13000-14000
-
-linkis-public-enhancements:14000-15000
-
-linkis-spring-cloud-service:15100-15500
-
-linkis-extensions:15500-16000
-
-linkis-tuning:16100-16200
-
-linkis-user-control:16200-16300
\ No newline at end of file
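-
-As a minimal sketch of the rules above, the class below defines a module-level exception, assuming the (errCode, desc) constructor implied by the table; the class name and error code are invented for illustration, with the code taken from the linkis-computation-governance range (11000-12000).
-
-```java
-import com.webank.wedatasphere.linkis.common.exception.ErrorException;
-
-// Illustrative only: a computation-governance module exception.
-public class EntranceIllegalParamException extends ErrorException {
-    public EntranceIllegalParamException(int errCode, String desc) {
-        super(errCode, desc);
-    }
-}
-
-// Usage sketch: throw new EntranceIllegalParamException(11001, "executeCode cannot be empty");
-```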
diff --git a/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/Log.md b/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/Log.md
deleted file mode 100644
index 34801bd..0000000
--- a/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/Log.md
+++ /dev/null
@@ -1,13 +0,0 @@
-1.	[**Convention**] Linkis chooses SLF4J and Log4J2 as the log printing framework, removing the logback in the Spring-Cloud package. Since SLF4J will randomly select a logging framework for binding, it is necessary to exclude bridging packages such as SLF4J-LOG4J after introducing new Maven packages in the future, otherwise log printing will be a problem. However, if the newly introduced Maven package depends on a package such as Log4J, do not exclude, otherwise the code may run with an error.
-
-2.	[**Configuration**] The log4j2 configuration file is default to log4j2.xml and needs to be placed in the classpath. If springcloud combination is needed, "logging:config:classpath:log4j2-spring.xml"(the location of the configuration file) can be added to application.yml.
-
-3. [**Compulsory**] The API of the logging system (Log4j2, Log4j, Logback) cannot be used directly in a class. For Scala code, inheriting the Logging trait is required. For Java, use LoggerFactory.getLogger(getClass()).
-
-4.	[**Development Convention**] Since engineConn is started by engineConnManager from the command line, we specify the path of the log configuration file on the command line, and also modify the log configuration during the code execution. In particular, redirect the engineConn log to the system's standard out. So the log configuration file for the EngineConn convention is defined in the EnginePlugin and named log4j2-engineConn.xml (this is the convention name and cannot be changed).
-
-5. [**Compulsory**] Strictly differentiate log levels. Fatal-level logs should throw an exception and exit with System.exit(-1) when the SpringCloud application is initialized. Error level is for exceptions that developers must care about and handle; do not use it casually. The WARN level is for user action exceptions and logs that help troubleshoot bugs later. INFO is for key process logs. DEBUG is a development-mode log; write as little as possible.
-
-6.	[**Compulsory**] Requirements: Every module must have INFO level log; Every key process must have INFO level log. The daemon thread must have a WARN level log to clean up resources, etc.
-
-7. [**Compulsory**] Exception information should include two types of information: crime scene information and exception stack information. If it is not handled, throw it upward with the throws keyword. Example: logger.error(Parameters/Objects.toString + "_" + e.getMessage(), e);
\ No newline at end of file
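-
-A minimal Java sketch of rules 3, 6 and 7 above (illustrative only; the class is invented): the logger is obtained through the SLF4J facade, the key process is logged at INFO level, and the error log carries the crime scene information together with the exception stack.
-
-```java
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-public class JobScheduler {
-    private static final Logger logger = LoggerFactory.getLogger(JobScheduler.class);
-
-    public void schedule(String jobId) {
-        logger.info("Start to schedule job {}", jobId); // rule 6: key process at INFO level
-        try {
-            doSchedule(jobId);
-        } catch (IllegalStateException e) {
-            // Rule 7: crime scene info (jobId) plus the exception stack.
-            logger.error("schedule failed, jobId: " + jobId + "_" + e.getMessage(), e);
-            throw e;
-        }
-    }
-
-    private void doSchedule(String jobId) {
-        // scheduling logic omitted
-    }
-}
-```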
diff --git a/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/Path_Usage.md b/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/Path_Usage.md
deleted file mode 100644
index b9c17d3..0000000
--- a/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/Path_Usage.md
+++ /dev/null
@@ -1,15 +0,0 @@
-Please note: Linkis provides a unified Storage module, so you must follow the Linkis path specification when using the path or configuring the path in the configuration file.
-
-
-
-1. [**Compulsory**] When using a file path, whether it is local, HDFS, or HTTP, the scheme information must be included. Among them:
-
-    - The Scheme header for local file is: file:///;
-
-    - The Scheme header for HDFS is: hdfs:///;
-
-    - The Scheme header for HTTP is: http:///.
-
-
-
-2. There should be no special characters in the path. Try to use combinations of English letters, underscores and numbers.
\ No newline at end of file
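-
-A minimal Java sketch of the two rules above (illustrative only, not part of the Linkis Storage module): it checks that a path carries one of the three scheme headers and otherwise contains only English letters, numbers, underscores and path separators.
-
-```java
-import java.util.regex.Pattern;
-
-public class PathSpecChecker {
-    // Scheme header (file:///, hdfs:///, http:///) followed by safe characters only.
-    private static final Pattern ALLOWED =
-            Pattern.compile("^(file|hdfs|http)://[A-Za-z0-9_/.-]*$");
-
-    public static boolean isValid(String path) {
-        return path != null && ALLOWED.matcher(path).matches();
-    }
-
-    public static void main(String[] args) {
-        System.out.println(isValid("file:///tmp/linkis/result")); // true
-        System.out.println(isValid("/tmp/linkis/result"));        // false: scheme header missing
-    }
-}
-```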
diff --git a/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/README.md b/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/README.md
deleted file mode 100644
index bde3f2d..0000000
--- a/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/README.md
+++ /dev/null
@@ -1,9 +0,0 @@
-In order to standardize Linkis's community development environment, improve the output quality of subsequent development iterations of Linkis, and standardize the entire development and design process of Linkis, it is strongly recommended that Contributors follow the following development specifications:
-- [Exception Handling Specification](./Exception_Catch.md)
-- [Throwing exception specification](./Exception_Throws.md)
-- [Interface Specification](./API.md)
-- [Log constraint specification](./Log.md)
-- [Concurrency Specification](./Concurrent.md)
-- [Path Specification](./Path_Usage.md)
-
-**Note**: The development specifications of the initial version of Linkis1.0 are relatively brief, and will continue to be supplemented and improved with the iteration of Linkis. Contributors are welcome to provide their own opinions and comments.
diff --git a/Linkis-Doc-master/en_US/Development_Documents/Linkis_Compilation_Document.md b/Linkis-Doc-master/en_US/Development_Documents/Linkis_Compilation_Document.md
deleted file mode 100644
index ee8b1c6..0000000
--- a/Linkis-Doc-master/en_US/Development_Documents/Linkis_Compilation_Document.md
+++ /dev/null
@@ -1,135 +0,0 @@
-# Linkis compilation document
-
-## Directory
-
-- 1. How to compile the whole project of Linkis.
-- 2. How to compile a module.
-- 3. How to compile an engine.
-- 4. How to modify the version of Hadoop, Hive and Spark that Linkis depends on.
-
-## 1. Compile the whole project
-
-Environment requirements: The version of JDK must be **higher than JDK8**, both **Oracle/Sun** and **OpenJDK** are supported.
-
-After cloning the project from github, please use maven to compile the project. 
-
-**Please note**: We recommend using Hadoop-2.7.2, Hive-1.2.1, Spark-2.4.3 and Scala-2.11.8 to compile Linkis.
-
-If you want to use other version of Hadoop, Hive and Spark, please refer to: How to modify the version of Hadoop, Hive and Spark that Linkis depends on.
-
-(1) **If you are compiling the Linkis on your local machine for the first time, you must execute the following commands on the root directory beforehand:**
-
-```bash
-    cd wedatasphere-linkis-x.x.x
-    mvn -N  install
-```
-
-(2) Execute the following commands on the root directory:
-
-```bash
-    cd wedatasphere-linkis-x.x.x
-    mvn clean install
-```
-
-(3) Obtain installation package from the directory 'assembly-> target':
-
-```bash
-    ls wedatasphere-linkis-x.x.x/assembly/target/wedatasphere-linkis-x.x.x-dist.tar.gz
-```
-
-## 2. Compile a module
-
-After cloning project from github, please use maven to compile the project. 
-
-(1) **If you are compiling the Linkis on your local machine for the first time, you must execute the following commands on the root directory beforehand:**
-
-```bash
-    cd wedatasphere-linkis-x.x.x
-    mvn -N  install
-```
-
-(2) Switch to the corresponding module to compile. An example of compiling Entrance module is shown below.
-
-```bash   
-    cd wedatasphere-linkis-x.x.x/linkis-computation-governance/linkis-entrance
-    mvn clean install
-```
-
-(3) Obtain compiled installation package from 'target' directory in the corresponding module.
-
-```
-    ls wedatasphere-linkis-x.x.x/linkis-computation-governance/linkis-entrance/target/linkis-entrance.x.x.x.jar
-```
-
-## 3. Compile an engine
-
-An example of compiling the Spark engine is shown below:
-
-(1) **If you are compiling the Linkis on your local machine for the first time, you must execute the following commands on the root directory beforehand:**
-
-```bash
-    cd wedatasphere-linkis-x.x.x
-    mvn -N  install
-```
-
-(2) Switch to the directory where Spark engine locates and use the following commands to compile:
-
-```bash   
-    cd wedatasphere-linkis-x.x.x/linkis-engineconn-plugins/engineconn-plugins/spark
-    mvn clean install
-```
-
-(3) Obtained compiled installation package from 'target' directory in the corresponding module.
-
-```
-    ls wedatasphere-linkis-x.x.x/linkis-engineconn-plugins/engineconn-plugins/spark/target/linkis-engineplugin-spark-x.x.x.zip
-```
-
-How to install Spark engine separately? Please refer to Linkis EngineConnPlugin installation document.
-
-## 4. How to modify the version of Hadoop, Hive and Spark that Linkis depends on
-
-Please note: Since Hadoop is a fundamental service in the big data area, Linkis must rely on it for compilation, while computing and storage engines such as Spark and Hive are optional. If you have no requirement for a certain engine, there is no need to set its engine version or compile its EngineConnPlugin.
-
-The way to modify the version of Hadoop is different from that of Spark, Hive and other computation engines. Please see instructions below:
-
-#### How to modify the version of Hadoop that Linkis relies on?
-
-Enter the root directory of Linkis and manually modify the Hadoop version in pom.xml.
-
-```bash
-    cd wedatasphere-linkis-x.x.x
-    vim pom.xml
-```
-
-```xml
-    <properties>
-      
-        <hadoop.version>2.7.2</hadoop.version> <!--> Modify Hadoop version here <-->
-              
-        <scala.version>2.11.8</scala.version>
-        <jdk.compile.version>1.8</jdk.compile.version>
-              
-    </properties>
-```
-
-#### How to modify the version of Spark, Hive that Linkis relies on?
-
-Here is an example of modifying Spark version. Enter the directory where Spark engine locates and manually modify the Spark version in pom.xml.
-
-```bash
-    cd wedatasphere-linkis-x.x.x/linkis-engineconn-plugins/engineconn-plugins/spark
-    vim pom.xml
-```
-
-```xml
-    <properties>
-      
-        <spark.version>2.4.3</spark.version>  <!--> Modify Spark version here <-->
-              
-    </properties>
-```
-
-Modifying the version of other engines is similar to that of Spark: enter the directory where the engine is located and manually modify the version in pom.xml.
-
-Then, please refer to How to compile an engine.
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Development_Documents/Linkis_Compile_and_Package.md b/Linkis-Doc-master/en_US/Development_Documents/Linkis_Compile_and_Package.md
deleted file mode 100644
index 52928bf..0000000
--- a/Linkis-Doc-master/en_US/Development_Documents/Linkis_Compile_and_Package.md
+++ /dev/null
@@ -1,155 +0,0 @@
-# Linkis Compilation Document
-
-## Directory
-
-- [1. Fully compile Linkis](#1-fully-compile-linkis)
-
-- [2. Compile a single module](#2-compile-a-single-module)
-
-- [3. Build an engine](#3-build-an-engine)
-
-- [4. How to Modify Linkis dependency versions of Hadoop, Hive, Spark](#4-how-to-modify-linkis-dependency-versions-of-hadoop-hive-spark)
-
-## 1. Fully compile Linkis
-
-**Environment requirements:** Version of JDK must be higher than **JDK8**; both **Oracle/Sun** and **OpenJDK** are supported.
-
-After getting the project code from Git, compile the project installation package using Maven.
-
-**Notice** : The official recommended versions for compiling Linkis are hadoop-2.7.2, hive-1.2.1, spark-2.4.3, and Scala-2.11.8.
-
-If you want to compile Linkis with another version of Hadoop, Hive or Spark, please refer to: [How to Modify Linkis dependency versions of Hadoop, Hive, Spark](#4-how-to-modify-linkis-dependency-versions-of-hadoop-hive-spark)
-
-(1) **If you compile it locally for the first time, you must execute the following command in the source package root directory of Linkis:**
-
-```bash
-cd wedatasphere-linkis-x.x.x
-mvn -N  install
-```
-
-(2) Execute the following command in the source package root directory of Linkis:
-
-```bash
-cd wedatasphere-linkis-x.x.x
-mvn clean install
-```
-
-(3) Get the installation package, in the project assembly->target directory:
-
-```bash
-ls wedatasphere-linkis-x.x.x/assembly/target/wedatasphere-linkis-x.x.x-dist.tar.gz
-```
-
-## 2. Compile a single module
-
-After getting the project code from Git, use Maven to package the project installation package.
-
-(1) **If you use it locally for the first time, you must execute the following command** in the source package root directory of Linkis:
-
-```bash
-cd wedatasphere-linkis-x.x.x
-mvn -N  install
-```
-
-(2) Go to the corresponding module for compilation. For example, if you want to recompile the Entrance, command as follows:
-
-```bash
-cd wedatasphere-linkis-x.x.x/linkis-computation-governance/linkis-entrance
-mvn clean install
-```
-
-(3) Get the installation package. The compiled package will be found in the ->target directory of the corresponding module:
-
-```
-ls wedatasphere-linkis-x.x.x/linkis-computation-governance/linkis-entrance/target/linkis-entrance.x.x.x.jar
-```
-
-## 3. Build an engine
-
-Here's an example of the Spark engine that builds Linkis:
-
-(1) **If you use it locally for the first time, you must execute the following command** in the source package root directory of Linkis:
-
-```bash
-cd wedatasphere-linkis-x.x.x
-mvn -N  install
-```
-
-(2) Jump to the directory where the Spark engine is located for compilation and packaging. The command is as follows:
-
-```bash
-cd wedatasphere-linkis-x.x.x/linkis-engineconn-plugins/engineconn-plugins/spark
-mvn clean install
-```
-
-(3) Get the installation package. The compiled package will be found in the ->target directory of the corresponding module:
-
-```
-ls  wedatasphere-linkis-x.x.x/linkis-engineconn-plugins/engineconn-plugins/spark/target/linkis-engineplugin-spark-x.x.x.zip
-```
-
-How do I install the Spark engine separately? Please refer to the Linkis EngineConnPlugin installation document.
-
-## 4. How to Modify Linkis dependency versions of Hadoop, Hive, Spark
-
-Please note: Hadoop is a fundamental big data service, and Linkis must rely on Hadoop for compilation;
-if you don't want to use an engine, you don't need to set that engine's version or compile its engine plug-in.
-
-Specifically, the version of Hadoop can be modified in a different way than Spark, Hive, and other computing engines, as described below:
-
-#### How do I modify the version of Hadoop that Linkis relies on?
-
-Enter the source package root directory of Linkis, and manually modify the Hadoop version information of the pom.xml file, as follows:
-
-```bash
-cd wedatasphere-linkis-x.x.x
-vim pom.xml
-```
-
-```xml
-<properties>
-    <hadoop.version>2.7.2</hadoop.version> <!--Change version of hadoop here-->
-    <scala.version>2.11.8</scala.version>
-    <jdk.compile.version>1.8</jdk.compile.version>
- </properties>
-
-```
-
-**Please note: If your hadoop version is hadoop3, you need to modify the pom file of linkis-hadoop-common**
-Because under hadoop2.8, hdfs-related classes are in the hadoop-hdfs module, but in hadoop 3.X the corresponding classes are moved to the module hadoop-hdfs-client, you need to modify this file:
-
-```
-pom:Linkis/linkis-commons/linkis-hadoop-common/pom.xml
-Modify the dependency hadoop-hdfs to hadoop-hdfs-client:
-  <dependency>
-             <groupId>org.apache.hadoop</groupId>
-             <artifactId>hadoop-hdfs</artifactId> <!-- Replace this line with <artifactId>hadoop-hdfs-client</artifactId>-->
-             <version>${hadoop.version}</version>
-             ...
-  Modify hadoop-hdfs to:
-   <dependency>
-             <groupId>org.apache.hadoop</groupId>
-             <artifactId>hadoop-hdfs-client</artifactId>
-             <version>${hadoop.version}</version>
-             ...
-```
-
-#### How to modify Spark, Hive versions that Linkis relies on?
-
-Here's an example of changing the version of Spark. Go to the directory where the Spark engine is located and manually modify the Spark version information of the pom.xml file as follows:
-
-```bash
-cd wedatasphere-linkis-x.x.x/linkis-engineconn-plugins/engineconn-plugins/spark
-vim pom.xml
-```
-
-```xml
-<properties>
-    <spark.version>2.4.3</spark.version> <!-- Change the Spark version number here -->
- </properties>
-
-```
-
-Modifying the version of another engine is similar to changing the Spark version by going to the directory where the engine is located and manually changing the engine version information in the pom.xml file.
-
-Then refer to [3. Build an engine](#3-build-an-engine).
diff --git a/Linkis-Doc-master/en_US/Development_Documents/Linkis_DEBUG.md b/Linkis-Doc-master/en_US/Development_Documents/Linkis_DEBUG.md
deleted file mode 100644
index 34e1a88..0000000
--- a/Linkis-Doc-master/en_US/Development_Documents/Linkis_DEBUG.md
+++ /dev/null
@@ -1,141 +0,0 @@
-## 1 Preface
-&nbsp; &nbsp; &nbsp; &nbsp; Every Linkis micro service supports debugging, most of them support local debugging, some of them only support remote debugging.
-
-1. Services that support local debugging
-- linkis-mg-eureka: the Main class for debugging is `com.webank.wedatasphere.linkis.eureka.SpringCloudEurekaApplication`
-- Other Linkis microservices have their own Main classes, as shown below
-linkis-cg-manager: `com.webank.wedatasphere.linkis.manager.am.LinkisManagerApplication`
-linkis-ps-bml: `com.webank.wedatasphere.linkis.bml.LinkisBMLApplication`
-linkis-ps-cs: `com.webank.wedatasphere.linkis.cs.server.LinkisCSApplication`
-linkis-cg-engineconnmanager: `com.webank.wedatasphere.linkis.ecm.server.LinkisECMApplication`
-linkis-cg-engineplugin: `com.webank.wedatasphere.linkis.engineplugin.server.LinkisEngineConnPluginServer`
-linkis-cg-entrance: `com.webank.wedatasphere.linkis.entrance.LinkisEntranceApplication`
-linkis-ps-publicservice: `com.webank.wedatasphere.linkis.jobhistory.LinkisPublicServiceApp`
-linkis-ps-datasource: `com.webank.wedatasphere.linkis.metadata.LinkisDataSourceApplication`
-linkis-mg-gateway: `com.webank.wedatasphere.linkis.gateway.springcloud.LinkisGatewayApplication`
-
-2. Services that only support remote debugging:
-The EngineConnManager service and the Engine service started by ECM only support remote debugging.
-
-## 2. Local debugging service steps
-&nbsp; &nbsp; &nbsp; &nbsp; Linkis and DSS both rely on Eureka for their services, so you need to start the Eureka service first. The Eureka service can also use the Eureka that you have already started. Once Eureka is started, you can start other services.
-
-### 2.1 Eureka service start
-1. If you do not want the default port 20303, you can modify the port configuration:
-
-```yml
-# File path: conf/application-eureka.yml
-# Modify the port in the config file:
-
-server:
-  port: 8080 # port to set
-```
-
-2. Then add a debug configuration in IDEA
-
-You can do this by clicking Run or by clicking Add Configuration in the image below
-
-![01](../Images/Tunning_and_Troubleshooting/debug-01.png)
-
-3. Then click Add Application and modify the information
-
-- Set the debug name first: Eureka, for example
-- Then set the Main class:
-`com.webank.wedatasphere.linkis.eureka.SpringCloudEurekaApplication`
-- Finally, set the Class Path for the service. For Eureka, the classPath module is linkis-eureka
-
-![02](../Images/Tunning_and_Troubleshooting/debug-02.png)
-
-4. Click the Debug button to start the Eureka service, then visit the Eureka page at http://localhost:8080/
-
-![03](../Images/Tunning_and_Troubleshooting/debug-03.png)
-
-### 2.2 Other services
-
-1. The Eureka configuration of the corresponding service needs to be modified; the file to modify is:
-
-```
-    conf/application-linkis.yml
-```
-Change the corresponding Eureka address to the Eureka service that has been started:
-
-```
-    eureka:
-      client:
-        serviceUrl:
-          defaultZone: http://localhost:8080/eureka/
-```
-
-2. Modify the configuration related to Linkis. The general configuration file is conf/linkis.properties, and the configuration of each module is in the properties file named after the module in the conf directory.
-
-3. Then add debugging service
-
-Set the Main Class of each service to its own Main class, as listed in the preface.
-The Class Path of each service is its corresponding module:
-
-```
-linkis-cg-manager: linkis-application-manager
-linkis-ps-bml: linkis-bml
-linkis-ps-cs: linkis-cs-server
-linkis-cg-engineconnmanager: linkis-engineconn-manager-server
-linkis-cg-engineplugin: linkis-engineconn-plugin-server
-linkis-cg-entrance: linkis-entrance
-linkis-ps-publicservice: linkis-jobhistory
-linkis-ps-datasource: linkis-metadata
-linkis-mg-gateway: linkis-spring-cloud-gateway
-```
-
-And check the "Include dependencies with Provided scope" option:
-
-![06](../Images/Tunning_and_Troubleshooting/debug-06.png)
-
-4. Then start the service and you can see that the service is registered on the Eureka page:
-
-![05](../Images/Tunning_and_Troubleshooting/debug-05.png)
-
-For linkis-ps-publicservice, the public-module module needs to be added to its POM:
-
-```
-<dependency>
-    <groupId>com.webank.wedatasphere.linkis</groupId>
-    <artifactId>public-module</artifactId>
-    <version>${linkis.version}</version>
-</dependency>
-```
-
-## 3. Steps of remote debugging service
-&nbsp; &nbsp; &nbsp; &nbsp; Each service supports remote debugging, but you need to turn it on ahead of time. There are two types of remote debugging, one is the remote debugging of Linkis common service, and the other is the remote debugging of EngineConn, which are described as follows:
-
-1. Remote debugging of common service:
-
-A. First, modify the startup script file of the corresponding service under sbin/ext directory, and add debug port:
-
-```
-export SERVER_JAVA_OPTS=" -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=10092"
-```
-
-The added flag is `-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=10092`; the port may conflict with one already in use, so change it to an available port if needed.
-
-B. Create a new remote debug configuration in IDEA. Select Remote first, then fill in the host and port of the service, and then select the module to debug
-
-![07](../Images/Tunning_and_Troubleshooting/debug-07.png)
-
-C. Then click the Debug button to start remote debugging
-
-![08](../Images/Tunning_and_Troubleshooting/debug-08.png)
-
-2. Remote debugging of engineConn:
-
-A. Add the following configuration items to the linkis-engineconn.properties file corresponding to EngineConn
-```
-wds.linkis.engineconn.debug.enable=true
-```
-
-This configuration item will randomly assign a debug port when engineConn starts.
-
-B. In the first line of the engineConn log, the actual assigned port is printed.
-```
-      Listening for transport dt_socket at address: 26072
-```
-
-C. Create a new remote debug in IDEA. The steps have been described in the previous section and will not be repeated here.
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Development_Documents/New_EngineConn_Development.md b/Linkis-Doc-master/en_US/Development_Documents/New_EngineConn_Development.md
deleted file mode 100644
index d45eedd..0000000
--- a/Linkis-Doc-master/en_US/Development_Documents/New_EngineConn_Development.md
+++ /dev/null
@@ -1,77 +0,0 @@
-## How To Quickly Implement A New Engine
-
-To implement a new engine is to implement a new EngineConnPlugin (ECP), i.e. an engine plugin. The specific steps are as follows:
-
-1.Create a new maven module and introduce the maven dependency of "ECP":
-```
-<dependency>
-<groupId>com.webank.wedatasphere.linkis</groupId>
-<artifactId>linkis-engineconn-plugin-core</artifactId>
-<version>${linkis.version}</version>
-</dependency>
-```
-2.The main interfaces of implementing "ECP":
-
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;a)EngineConnPlugin, when starting "EngineConn", first find the corresponding "EngineConnPlugin" class, and use this as the entry point to obtain the implementation of other core interfaces, which is the main interface that must be implemented.
-    
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;b)EngineConnFactory, which implements the logic of how to start an engine connector and how to start an engine executor, is an interface that must be implemented.
-
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;b.a Implement the "createEngineConn" method: return an "EngineConn" object, where "getEngine" returns an object that encapsulates the connection information with the underlying engine, and also contains Engine type information.
-    
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;b.b For engines that only support a single computing scenario, inherit "SingleExecutorEngineConnFactory" class and implement "createExecutor" method which returns the corresponding Executor.
-    
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;b.c For engines that support multiple computing scenarios, you need to inherit "MultiExecutorEngineConnFactory" and implement an ExecutorFactory for each computing type. "EngineConnPlugin" will obtain all ExecutorFactory through reflection and return the corresponding Executor according to the actual situation.
-    
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;c) EngineConnResourceFactory: used to limit the resources required to start an engine. Before the engine starts, resources are requested from the "Linkis Manager" on this basis. This interface is optional; "GenericEngineResourceFactory" is used by default.
-
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;d) EngineLaunchBuilder: used to encapsulate the information that "EngineConnManager" needs in order to build the startup command. This interface is optional; you can directly inherit "JavaProcessEngineConnLaunchBuilder". A condensed sketch tying these interfaces together is shown right after this list.
-
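-As a rough sketch of how these pieces fit together, a plugin entry class might look like the following (the getter names and signatures here are assumptions for illustration; check the linkis-engineconn-plugin-core sources of your version for the exact interface):
-
-```
-// Hypothetical skeleton only; MyEngineConnFactory is an invented name and the
-// exact EngineConnPlugin method signatures may differ between Linkis versions.
-class MyEngineConnPlugin extends EngineConnPlugin {
-  // b) the factory that knows how to start the engine connector and executor
-  override def getEngineConnFactory: EngineConnFactory = new MyEngineConnFactory
-  // c) optional: resource estimation, defaulting to the generic implementation
-  override def getEngineResourceFactory: EngineResourceFactory = new GenericEngineResourceFactory
-  // d) optional: how the EngineConn startup command is built
-  override def getEngineLaunchBuilder: EngineLaunchBuilder = new JavaProcessEngineConnLaunchBuilder
-}
-```
-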
-3. Implement the Executor. The Executor is the unit that actually executes the computation logic in a concrete scenario; it also abstracts the engine's various capabilities and provides services such as locking, status access and log retrieval. According to actual needs, Linkis provides the following derived Executor base classes by default; the class names and main functions are as follows:
-
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;a) SensibleExecutor: 
-       
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; i. Executor has multiple states, allowing Executor to switch states.
-         
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;ii. After the Executor switches the state, operations such as notifications are allowed. 
-         
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;b) YarnExecutor: refers to a Yarn-type engine, from which the "applicationId", "applicationURL" and queue can be obtained.
-       
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;c) ResourceExecutor: indicates that the engine can change its resources dynamically. The "requestExpectedResource" method is used to apply to the RM for new resources whenever a resource change is wanted, and the "resourceUpdate" method is used to report to the RM whenever the resources actually used by the engine change.
-       
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;d) AccessibleExecutor: is a very important Executor base class. If the user's Executor inherits the base class, it means that the Engine can be accessed. Here we need to distinguish between "SensibleExecutor"'s "state" method and "AccessibleExecutor"'s "getEngineStatus" method. "state" method is used to get the engine status, and "getEngineStatus" is used to get the basic indicator metric data such as engine status, load and concurrency.
-       
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;e) At the same time, if "AccessibleExecutor" is inherited, the Engine process will instantiate multiple "EngineReceiver"s. An "EngineReceiver" processes RPC requests from Entrance, EM and "LinkisMaster", marking the engine as an accessible engine. If users have special RPC requirements, they can communicate with "AccessibleExecutor" by implementing the "RPCService" interface.
-
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;f) ExecutableExecutor: it is a resident Executor base class. The resident Executor includes: Streaming applications in the production center, steps specified to run in independent mode after submission to "Schedulis", business applications of business users, etc.
-
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;g) StreamingExecutor: inherits from "ExecutableExecutor"; it needs the ability to diagnose, checkpoint, collect job information and monitor alarms.
-
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;h) ComputationExecutor: a commonly used interactive engine Executor, which handles interactive execution tasks and has interactive capabilities such as status query and task killing.
-
-             
-## Actual Case         
-The following takes the Hive engine as a case to illustrate the implementation of each interface. The figure below shows all the core classes needed to implement a Hive engine.
-
-The Hive engine is an interactive engine, so its Executor inherits "ComputationExecutor" and introduces the following maven dependency:
-
-```
-<dependency>
-    <groupId>com.webank.wedatasphere.linkis</groupId>
-    <artifactId>linkis-computation-engineconn</artifactId>
-    <version>${linkis.version}</version>
-</dependency>
-```
-             
-As a subclass of "ComputationExecutor", "HiveEngineConnExecutor" implements the "executeLine" method. This method receives a single statement to execute; after calling the Hive interface to run it, it returns different "ExecuteResponse"s to indicate success or failure. Within this method, the result set, logs and progress are transmitted through the interfaces provided by the "engineExecutorContext".
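-
-As a hedged sketch of the shape of such a method (heavily simplified; runHql is an invented helper standing in for the Hive driver call, and the real HiveEngineConnExecutor does considerably more):
-
-```
-// Simplified, illustrative sketch only; not the actual Linkis source.
-override def executeLine(engineExecutorContext: EngineExecutionContext,
-                         code: String): ExecuteResponse = {
-  try {
-    runHql(code) // invented helper: submit one statement to the Hive driver
-    engineExecutorContext.appendStdout(s"Executed: $code") // push a log line back to the caller
-    SuccessExecuteResponse()
-  } catch {
-    case t: Throwable => ErrorExecuteResponse("hive statement failed", t)
-  }
-}
-```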
-
-The Hive engine only needs an Executor that executes HQL, so it is a single-executor engine. Therefore, "HiveEngineConnFactory" inherits "SingleExecutorEngineConnFactory" and implements the following two methods:
-a) createEngineConn: creates an object containing "UserGroupInformation", "SessionState" and "HiveConf" as an encapsulation of the connection information with the underlying engine, sets it on the EngineConn object and returns it.
-b) createExecutor: creates a "HiveEngineConnExecutor" executor object based on the current engine connection information.
-
-The Hive engine is an ordinary Java process, so when implementing "EngineConnLaunchBuilder", it directly inherits "JavaProcessEngineConnLaunchBuilder". Settings such as memory size, Java parameters and classpath can be adjusted through configuration; refer to the "EnvConfiguration" class for details.
-
-The Hive engine uses "LoadInstanceResource" resources, so there is no need to implement "EngineResourceFactory"; the default "GenericEngineResourceFactory" is used directly, and the amount of resources is adjusted through configuration. Refer to the "EngineConnPluginConf" class for details.
-
-Implement "HiveEngineConnPlugin" and provide methods for creating the above implementation classes.
-
-
diff --git a/Linkis-Doc-master/en_US/Engine_Usage_Documentations/Hive_User_Manual.md b/Linkis-Doc-master/en_US/Engine_Usage_Documentations/Hive_User_Manual.md
deleted file mode 100644
index 8262706..0000000
--- a/Linkis-Doc-master/en_US/Engine_Usage_Documentations/Hive_User_Manual.md
+++ /dev/null
@@ -1,81 +0,0 @@
-# Hive engine usage documentation
-
-This article mainly introduces the configuration, deployment and use of Hive engine in Linkis1.0.
-
-## 1. Environment configuration before Hive engine use
-
-If you want to use the hive engine on your server, you need to ensure that the following environment variables have been set correctly and that the user who started the engine has these environment variables.
-
-It is strongly recommended that you check these environment variables of the executing user before executing hive tasks.
-
-| Environment variable name | Environment variable content | Remarks |
-|-----------------|----------------|------|
-| JAVA_HOME | JDK installation path | Required |
-| HADOOP_HOME | Hadoop installation path | Required |
-| HADOOP_CONF_DIR | Hadoop configuration path | Required |
-| HIVE_CONF_DIR | Hive configuration path | Required |
-
-Table 1-1 Environmental configuration list
-
-## 2. Hive engine configuration and deployment
-
-### 2.1 Hive version selection and compilation
-
-Both hive1.x and hive2.x are supported; the default is Hive on MapReduce. If you want to switch to Hive on Tez, make the changes described in this PR:
-
-<https://github.com/WeBankFinTech/Linkis/pull/541>
-
-The hive version supported by default is 1.2.1. If you want to change the hive version, for example to 2.3.3, find the linkis-engineplugin-hive module, change the \<hive.version\> tag to 2.3.3, and then compile this module separately.
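-
-The rebuild itself is just a per-module maven build along these lines (run it inside the linkis-engineplugin-hive module of your source checkout; the module's location in the repository may vary between releases):
-
-```
-# after changing <hive.version> in the module's pom.xml:
-mvn clean install -DskipTests
-```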
-
-### 2.2 hive engineConn deployment and loading
-
-Once your hive engine plug-in has been compiled, you need to put the new plug-in in the specified location for it to be loaded; refer to the following article for details:
-
-https://github.com/WeBankFinTech/Linkis/wiki/EngineConnPlugin%E5%BC%95%E6%93%8E%E6%8F%92%E4%BB%B6%E5%AE%89%E8%A3%85%E6%96%87%E6%A1%A3
-
-### 2.3 Hive engine tags
-
-Engine selection in Linkis1.0 is done through tags, so we need to insert tag data into our database; the insertion method is shown below.
-
-https://github.com/WeBankFinTech/Linkis/wiki/EngineConnPlugin%E5%BC%95%E6%93%8E%E6%8F%92%E4%BB%B6%E5%AE%89%E8%A3%85%E6%96%87%E6%A1%A3#22-%E7%AE%A1%E7%90%86%E5%8F%B0configuration%E9%85%8D%E7%BD%AE%E4%BF%AE%E6%94%B9%E5%8F%AF%E9%80%89
-
-## 3. Use of hive engine
-
-### Preparation for operation, queue setting
-
-Hive's MapReduce tasks require Yarn resources, so you need to set the queue first.
-
-![](../Images/EngineUsage/queue-set.png)
-
-Figure 3-1 Queue settings
-
-### 3.1 How to use Scriptis
-
-Using Scriptis is the simplest way: enter Scriptis, right-click a directory, create a new hive script and write hivesql code.
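-
-For instance, a trivial hive script might contain nothing more than the following (the table name is a placeholder):
-
-```
-show databases;
-select count(1) from my_db.my_table;
-```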
-
-The hive engine is implemented by instantiating a Hive Driver instance; the driver submits the task, then obtains the result set and displays it.
-
-![](../Images/EngineUsage/hive-run.png)
-
-Figure 3-2 Screenshot of the execution effect of hivesql
-
-### 3.2 How to use workflow
-
-The DSS workflow also has a hive node: you can drag it into the workflow, double-click it to edit the code, and then execute it as part of the workflow.
-
-![](../Images/EngineUsage/workflow.png)
-
-Figure 3-5 The node where the workflow executes hive
-
-### 3.3 How to use Linkis Client
-
-Linkis also provides a client method to call hive tasks, namely through the SDK provided by LinkisClient. Both Java and Scala calls are supported; for specific usage refer to <https://github.com/WeBankFinTech/Linkis/wiki/Linkis1.0%E7%94%A8%E6%88%B7%E4%BD%BF%E7%94%A8%E6%96%87%E6%A1%A3>.
-
-## 4. Hive engine user settings
-
-In addition to the above engine configuration, users can also make custom settings, including the memory size of the hive Driver process, etc.
-
-![](../Images/EngineUsage/hive-config.png)
-
-Figure 4-1 User-defined configuration management console of hive
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Engine_Usage_Documentations/JDBC_User_Manual.md b/Linkis-Doc-master/en_US/Engine_Usage_Documentations/JDBC_User_Manual.md
deleted file mode 100644
index 35f3d7b..0000000
--- a/Linkis-Doc-master/en_US/Engine_Usage_Documentations/JDBC_User_Manual.md
+++ /dev/null
@@ -1,53 +0,0 @@
-# JDBC engine usage documentation
-
-This article mainly introduces the configuration, deployment and use of JDBC engine in Linkis1.0.
-
-## 1. Environment configuration before using the JDBC engine
-
-If you want to use the JDBC engine on your server, you need to prepare the JDBC connection information, such as the connection address, user name and password of the MySQL database, etc.
-
-## 2. JDBC engine configuration and deployment
-
-### 2.1 JDBC version selection and compilation
-
-The JDBC engine does not need to be compiled by the user, and the compiled JDBC engine plug-in package can be used directly. Drivers that have been provided include MySQL, PostgreSQL, etc.
-
-### 2.2 JDBC engineConn deployment and loading
-
-Here the default loading method can be used normally; just install it following the standard deployment.
-
-### 2.3 JDBC engine tags
-
-Here the default dml.sql can be used to insert the engine tags, after which the engine can be used normally.
-
-## 3. The use of JDBC engine
-
-### Preparation for operation
-
-You need to configure the JDBC connection information, including the connection address, user name and password.
-
-![](../Images/EngineUsage/jdbc-conf.png)
-
-Figure 3-1 JDBC configuration information
-
-### 3.1 How to use Scriptis
-
-Using Scriptis is the simplest way: enter Scriptis, right-click a directory, create a new JDBC script, write SQL code and click Execute.
-
-The JDBC engine works by loading the JDBC driver, submitting the SQL to the database server for execution, and then obtaining and returning the result set.
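-
-Conceptually, what the engine does internally is the standard JDBC sequence sketched below (the URL and credentials are placeholders standing in for the values configured in the console):
-
-```
-import java.sql.DriverManager
-
-// Open a connection with the configured address, user name and password,
-// execute the statement, then read back and return the result set.
-val conn = DriverManager.getConnection("jdbc:mysql://127.0.0.1:3306/test", "user", "password")
-val stmt = conn.createStatement()
-val rs = stmt.executeQuery("SELECT 1")
-while (rs.next()) println(rs.getInt(1))
-rs.close(); stmt.close(); conn.close()
-```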
-
-![](../Images/EngineUsage/jdbc-run.png)
-
-Figure 3-2 Screenshot of the execution effect of JDBC
-
-### 3.2 How to use workflow
-
-The DSS workflow also has a JDBC node: you can drag it into the workflow, double-click it to edit the code, and then execute it as part of the workflow.
-
-### 3.3 How to use Linkis Client
-
-Linkis also provides a client way to call JDBC tasks, namely through the SDK provided by LinkisClient. Both Java and Scala calls are supported; for specific usage refer to <https://github.com/WeBankFinTech/Linkis/wiki/Linkis1.0%E7%94%A8%E6%88%B7%E4%BD%BF%E7%94%A8%E6%96%87%E6%A1%A3>.
-
-## 4. JDBC engine user settings
-
-JDBC user settings are mainly the JDBC connection information, and it is recommended that users encrypt and manage the password and other sensitive information.
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Engine_Usage_Documentations/Python_User_Manual.md b/Linkis-Doc-master/en_US/Engine_Usage_Documentations/Python_User_Manual.md
deleted file mode 100644
index 64724e9..0000000
--- a/Linkis-Doc-master/en_US/Engine_Usage_Documentations/Python_User_Manual.md
+++ /dev/null
@@ -1,61 +0,0 @@
-# Python engine usage documentation
-
-This article mainly introduces the configuration, deployment and use of the Python engine in Linkis1.0.
-
-## 1. Environment configuration before using Python engine
-
-If you want to use the python engine on your server, you need to ensure that the python executable is on the user's PATH and has execute permission.
-
-| Environment variable name | Environment variable content | Remarks |
-|------------|-----------------|--------------------------------|
-| python | python execution environment | Anaconda's python executor is recommended |
-
-Table 1-1 Environmental configuration list
-
-## 2. Python engine configuration and deployment
-
-### 2.1 Python version selection and compilation
-
-Both python2 and python3 are supported; you can simply change the configuration to switch the Python version, without recompiling the python engine.
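-
-For illustration, the switch amounts to a single configuration value along the lines below (the exact parameter name exposed by your management console may differ; treat this as an assumption):
-
-```
-# hypothetical properties-style illustration of the python version switch
-python.version=python3
-```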
-
-### 2.2 python engineConn deployment and loading
-
-Here the default loading method can be used normally.
-
-### 2.3 tags of python engine
-
-Here the default dml.sql can be used to insert the engine tags, after which the engine can be used normally.
-
-## 3. Use of Python engine
-
-### Preparation for operation
-
-Before submitting python tasks on Linkis, you only need to make sure that the python executable is on your user's PATH.
-
-### 3.1 How to use Scriptis
-
-Using Scriptis is the simplest way: enter Scriptis, right-click a directory, create a new python script, write python code and click Execute.
-
-The execution logic of python is to start a Python gateway through Py4j; the Python engine then submits the code to the python executor for execution.
-
-![](../Images/EngineUsage/python-run.png)
-
-Figure 3-1 Screenshot of the execution effect of python
-
-### 3.2 How to use workflow
-
-The DSS workflow also has a python node: you can drag it into the workflow, double-click it to edit the code, and then execute it as part of the workflow.
-
-### 3.3 How to use Linkis Client
-
-Linkis also provides a client method to call python tasks, namely through the SDK provided by LinkisClient. Both Java and Scala calls are supported; for specific usage refer to <https://github.com/WeBankFinTech/Linkis/wiki/Linkis1.0%E7%94%A8%E6%88%B7%E4%BD%BF%E7%94%A8%E6%96%87%E6%A1%A3>.
-
-## 4. Python engine user settings
-
-In addition to the above engine configuration, users can also make custom settings, such as the version of python and some modules that python needs to load.
-
-![](../Images/EngineUsage/jdbc-conf.png)
-
-Figure 4-1 User-defined configuration management console of python
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Engine_Usage_Documentations/README.md b/Linkis-Doc-master/en_US/Engine_Usage_Documentations/README.md
deleted file mode 100644
index cb9e5ef..0000000
--- a/Linkis-Doc-master/en_US/Engine_Usage_Documentations/README.md
+++ /dev/null
@@ -1,25 +0,0 @@
-## 1 Overview
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Linkis, as a powerful computing middleware, can easily interface with different computing engines. By shielding the usage details of different computing engines, it provides a unified usage interface, which greatly reduces the operation and maintenance cost of deploying and applying Linkis's big data platform. At present, Linkis has docked several mainstream computing engines, which basically cover the data requirements in production, in order t [...]
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;The engine is a component that provides users with data processing and analysis capabilities. The engines currently connected to Linkis include mainstream big data computing engines such as Spark, Hive and Presto, as well as engines that process data with scripts such as python and Shell. DataSphereStudio is a one-stop data operation platform docked with Linkis. Users can conveniently use the engine supported by Li [...]
-
-| Engine | Whether to support Scriptis | Whether to support workflow |
-| ---- | ---- | ---- |
-| Spark | Support | Support |
-| Hive | Support | Support |
-| Presto | Support | Support |
-| ElasticSearch | Support | Support |
-| python | Support | Support |
-| Shell | Support | Support |
-| JDBC | Support | Support |
-| MySQL | Support | Support |
-
-## 2. Document structure
-Refer to the following documents for the engines that have already been integrated:
-- [Spark Engine Usage Document](./../Engine_Usage_Documentations/Spark_User_Manual.md)
-- [Hive Engine Usage Document](./../Engine_Usage_Documentations/Hive_User_Manual.md)
-- [Presto Engine Usage Document](./../Engine_Usage_Documentations/Presto_User_Manual.md)
-- [ElasticSearch Engine Usage Document](./../Engine_Usage_Documentations/ElasticSearch_User_Manual.md)
-- [Python engine usage documentation](./../Engine_Usage_Documentations/Python_User_Manual.md)
-- [Shell Engine Usage Document](./../Engine_Usage_Documentations/Shell_User_Manual.md)
-- [JDBC Engine Usage Document](./../Engine_Usage_Documentations/JDBC_User_Manual.md)
-- [MLSQL Engine Usage Document](./../Engine_Usage_Documentations/MLSQL_User_Manual.md)
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Engine_Usage_Documentations/Shell_User_Manual.md b/Linkis-Doc-master/en_US/Engine_Usage_Documentations/Shell_User_Manual.md
deleted file mode 100644
index 292d2c4..0000000
--- a/Linkis-Doc-master/en_US/Engine_Usage_Documentations/Shell_User_Manual.md
+++ /dev/null
@@ -1,55 +0,0 @@
-# Shell engine usage document
-
-This article mainly introduces the configuration, deployment and use of the Shell engine in Linkis1.0.
-## 1. The environment configuration before using the Shell engine
-
-If you want to use the shell engine on your server, you need to ensure that the bash executable is on the user's PATH and has execute permission.
-
-| Environment variable name | Environment variable content | Remarks             |
-|---------------------------|------------------------------|---------------------|
-| sh                        | bash execution environment   | bash is recommended |
-
-Table 1-1 Environmental configuration list
-
-## 2. Shell engine configuration and deployment
-
-### 2.1 Shell version selection and compilation
-
-The shell engine does not need to be compiled by the user, and the compiled shell engine plug-in package can be used directly.
-### 2.2 shell engineConn deployment and loading
-
-Here the default loading method can be used normally.
-
-### 2.3 Labels of the shell engine
-
-Here the default dml.sql can be used to insert the engine tags, after which the engine can be used normally.
-
-## 3. Use of Shell Engine
-
-### Preparation for operation
-
-Before submitting a shell task on Linkis, you only need to ensure that the shell executable is on your user's $PATH.
-
-### 3.1 How to use Scriptis
-
-Using Scriptis is the simplest way: enter Scriptis, right-click a directory, create a new shell script, write shell code and click Execute.
-
-The execution principle of the shell engine is that it starts a system process through Java's built-in ProcessBuilder, redirects the process output to the engine, and writes it to the log.
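-
-A minimal sketch of that mechanism, using the same ProcessBuilder API (illustrative only, not the engine's actual code):
-
-```
-import java.io.{BufferedReader, InputStreamReader}
-
-// Start "sh -c <command>" as a child process and stream its output,
-// the way the engine redirects process output into its log.
-val pb = new ProcessBuilder("sh", "-c", "echo hello from the shell engine")
-pb.redirectErrorStream(true) // merge stderr into stdout
-val process = pb.start()
-val reader = new BufferedReader(new InputStreamReader(process.getInputStream))
-Iterator.continually(reader.readLine()).takeWhile(_ != null).foreach(println)
-println(s"exit code: ${process.waitFor()}")
-```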
-
-![](../Images/EngineUsage/shell-run.png)
-
-Figure 3-1 Screenshot of shell execution effect
-
-### 3.2 How to use workflow
-
-The DSS workflow also has a shell node: you can drag it into the workflow, double-click it to edit the code, and then execute it as part of the workflow.
-
-One point needs attention with shell execution: if the script contains multiple lines, the success of the workflow node is determined by the last command. For example, even if the first two lines fail, as long as the return value of the last line is 0, the node is considered successful.
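-
-A small example of this caveat: the script below fails twice, yet the node is reported as successful because the final command exits with 0.
-
-```
-ls /nonexistent-dir        # fails
-cat /nonexistent-file      # fails
-echo "done"                # exits with 0, so the node is marked successful
-```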
-
-### 3.3 How to use Linkis Client
-
-Linkis also provides a client method to call shell tasks, namely through the SDK provided by LinkisClient. Both Java and Scala calls are supported; for specific usage refer to <https://github.com/WeBankFinTech/Linkis/wiki/Linkis1.0%E7%94%A8%E6%88%B7%E4%BD%BF%E7%94%A8%E6%96%87%E6%A1%A3>.
-
-## 4. Shell engine user settings
-
-The shell engine can generally set the maximum memory of the engine JVM.
diff --git a/Linkis-Doc-master/en_US/Engine_Usage_Documentations/Spark_User_Manual.md b/Linkis-Doc-master/en_US/Engine_Usage_Documentations/Spark_User_Manual.md
deleted file mode 100644
index 9932184..0000000
--- a/Linkis-Doc-master/en_US/Engine_Usage_Documentations/Spark_User_Manual.md
+++ /dev/null
@@ -1,91 +0,0 @@
-# Spark engine usage documentation
-
-This article mainly introduces the configuration, deployment and use of spark engine in Linkis1.0.
-
-## 1. Environment configuration before using Spark engine
-
-If you want to use the spark engine on your server, you need to ensure that the following environment variables have been set correctly and that the user who started the engine has these environment variables.
-
-It is strongly recommended that you check these environment variables of the executing user before executing spark tasks.
-
-| Environment variable name | Environment variable content | Remarks |
-|---------------------------|------------------------------|------|
-| JAVA_HOME | JDK installation path | Required |
-| HADOOP_HOME | Hadoop installation path | Required |
-| HADOOP_CONF_DIR | Hadoop configuration path | Required |
-| HIVE_CONF_DIR | Hive configuration path | Required |
-| SPARK_HOME | Spark installation path | Required |
-| SPARK_CONF_DIR | Spark configuration path | Required |
-| python | python | Anaconda's python is recommended as the default python |
-
-Table 1-1 Environmental configuration list
-
-## 2. Configuration and deployment of Spark engine
-
-### 2.1 Selection and compilation of spark version
-
-In theory, Linkis1.0 supports all versions of spark2.x and above. Spark 2.4.3 is the default supported version. If you want to use another spark version, such as spark2.1.0, you only need to modify the plug-in's spark version and recompile it: find the linkis-engineplugin-spark module, change the \<spark.version\> tag to 2.1.0, and then compile this module separately.
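-
-As with the other engine plug-ins, the rebuild is a per-module maven build roughly like this (run it inside the linkis-engineplugin-spark module of your checkout; the module's location may vary between releases):
-
-```
-# after changing <spark.version> in the module's pom.xml:
-mvn clean install -DskipTests
-```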
-
-### 2.2 spark engineConn deployment and loading
-
-Once your spark engine plug-in has been compiled, you need to put the new plug-in in the specified location for it to be loaded; refer to the following article for details:
-
-https://github.com/WeBankFinTech/Linkis/wiki/EngineConnPlugin%E5%BC%95%E6%93%8E%E6%8F%92%E4%BB%B6%E5%AE%89%E8%A3%85%E6%96%87%E6%A1%A3
-
-### 2.3 tags of spark engine
-
-Engine selection in Linkis1.0 is done through tags, so we need to insert tag data into our database; the insertion method is shown below.
-
-https://github.com/WeBankFinTech/Linkis/wiki/EngineConnPlugin%E5%BC%95%E6%93%8E%E6%8F%92%E4%BB%B6%E5%AE%89%E8%A3%85%E6%96%87%E6%A1%A3#22-%E7%AE%A1%E7%90%86%E5%8F%B0configuration%E9%85%8D%E7%BD%AE%E4%BF%AE%E6%94%B9%E5%8F%AF%E9%80%89
-
-## 3. Use of spark engine
-
-### Preparation for operation, queue setting
-
-Because spark execution requires queue resources, the user must set a queue he or she can use before executing.
-
-![](../Images/EngineUsage/queue-set.png)
-
-Figure 3-1 Queue settings
-
-### 3.1 How to use Scriptis
-
-Using Scriptis is the simplest way: enter Scriptis and create a new sql, scala or pyspark script for execution.
-
-The sql method is the simplest: create a new sql script, write it and execute it, and the progress will be displayed as it runs. If the user does not yet have a spark engine, executing the sql will first start a spark session (this may take some time); after the SparkSession is initialized, the sql begins to execute.
-
-![](../Images/EngineUsage/sparksql-run.png)
-
-Figure 3-2 Screenshot of the execution effect of sparksql
-
-For spark-scala tasks, we have initialized sqlContext and other variables, and users can directly use this sqlContext to execute sql.
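-
-For example, a spark-scala script in Scriptis can use the pre-initialized sqlContext directly (the table name is a placeholder):
-
-```
-// sqlContext is already created by the engine; no SparkSession setup is needed.
-val df = sqlContext.sql("select count(1) from my_db.my_table")
-df.show()
-```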
-
-![](../Images/EngineUsage/scala-run.png)
-
-Figure 3-3 Execution effect diagram of spark-scala
-
-Similarly, for pyspark, the SparkSession is already initialized, and users can directly use spark.sql to execute SQL.
-
-![](../Images/EngineUsage/pyspakr-run.png)
-Figure 3-4 pyspark execution mode
-
-### 3.2 How to use workflow
-
-The DSS workflow also has three spark nodes (sql, scala and pyspark): you can drag them into the workflow, double-click to edit the code, and then execute as part of the workflow.
-
-![](../Images/EngineUsage/workflow.png)
-
-Figure 3-5 The node where the workflow executes spark
-
-### 3.3 How to use Linkis Client
-
-Linkis also provides a client method to call spark tasks, namely through the SDK provided by LinkisClient. Both Java and Scala calls are supported; for specific usage refer to <https://github.com/WeBankFinTech/Linkis/wiki/Linkis1.0%E7%94%A8%E6%88%B7%E4%BD%BF%E7%94%A8%E6%96%87%E6%A1%A3>.
-
-## 4. Spark engine user settings
-
-In addition to the above engine configuration, users can also make custom settings, such as the number of spark session executors and the memory of the executors. These parameters are for users to set their own spark parameters more freely, and other spark parameters can also be modified, such as the python version of pyspark.
-
-![](../Images/EngineUsage/spark-conf.png)
-
-Figure 4-1 Spark user-defined configuration management console
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Add_an_EngineConn/add_an_EngineConn_flow_chart.png b/Linkis-Doc-master/en_US/Images/Architecture/Add_an_EngineConn/add_an_EngineConn_flow_chart.png
deleted file mode 100644
index 2e71b42..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Architecture/Add_an_EngineConn/add_an_EngineConn_flow_chart.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/EngineConn/engineconn-01.png b/Linkis-Doc-master/en_US/Images/Architecture/EngineConn/engineconn-01.png
deleted file mode 100644
index d95da89..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Architecture/EngineConn/engineconn-01.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Gateway/gateway_server_dispatcher.png b/Linkis-Doc-master/en_US/Images/Architecture/Gateway/gateway_server_dispatcher.png
deleted file mode 100644
index 9cdc918..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Architecture/Gateway/gateway_server_dispatcher.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Gateway/gateway_server_global.png b/Linkis-Doc-master/en_US/Images/Architecture/Gateway/gateway_server_global.png
deleted file mode 100644
index 584574e..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Architecture/Gateway/gateway_server_global.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Gateway/gatway_websocket.png b/Linkis-Doc-master/en_US/Images/Architecture/Gateway/gatway_websocket.png
deleted file mode 100644
index fcac318..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Architecture/Gateway/gatway_websocket.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Job_submission_preparation_and_execution_process/execution.png b/Linkis-Doc-master/en_US/Images/Architecture/Job_submission_preparation_and_execution_process/execution.png
deleted file mode 100644
index 1abc43b..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Architecture/Job_submission_preparation_and_execution_process/execution.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Job_submission_preparation_and_execution_process/orchestrate.png b/Linkis-Doc-master/en_US/Images/Architecture/Job_submission_preparation_and_execution_process/orchestrate.png
deleted file mode 100644
index 9de0a5d..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Architecture/Job_submission_preparation_and_execution_process/orchestrate.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Job_submission_preparation_and_execution_process/overall.png b/Linkis-Doc-master/en_US/Images/Architecture/Job_submission_preparation_and_execution_process/overall.png
deleted file mode 100644
index 68b5e19..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Architecture/Job_submission_preparation_and_execution_process/overall.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Job_submission_preparation_and_execution_process/physical_tree.png b/Linkis-Doc-master/en_US/Images/Architecture/Job_submission_preparation_and_execution_process/physical_tree.png
deleted file mode 100644
index 7998704..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Architecture/Job_submission_preparation_and_execution_process/physical_tree.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Job_submission_preparation_and_execution_process/result_acquisition.png b/Linkis-Doc-master/en_US/Images/Architecture/Job_submission_preparation_and_execution_process/result_acquisition.png
deleted file mode 100644
index c2dd9f3..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Architecture/Job_submission_preparation_and_execution_process/result_acquisition.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Job_submission_preparation_and_execution_process/submission.png b/Linkis-Doc-master/en_US/Images/Architecture/Job_submission_preparation_and_execution_process/submission.png
deleted file mode 100644
index f6bd9a9..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Architecture/Job_submission_preparation_and_execution_process/submission.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/LabelManager/label_manager_builder.png b/Linkis-Doc-master/en_US/Images/Architecture/LabelManager/label_manager_builder.png
deleted file mode 100644
index 4896981..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Architecture/LabelManager/label_manager_builder.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/LabelManager/label_manager_global.png b/Linkis-Doc-master/en_US/Images/Architecture/LabelManager/label_manager_global.png
deleted file mode 100644
index ca4151a..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Architecture/LabelManager/label_manager_global.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/LabelManager/label_manager_scorer.png b/Linkis-Doc-master/en_US/Images/Architecture/LabelManager/label_manager_scorer.png
deleted file mode 100644
index 7213b0b..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Architecture/LabelManager/label_manager_scorer.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Linkis0.X-NewEngine-architecture.png b/Linkis-Doc-master/en_US/Images/Architecture/Linkis0.X-NewEngine-architecture.png
deleted file mode 100644
index 57c83b3..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Architecture/Linkis0.X-NewEngine-architecture.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Linkis0.X-services-list.png b/Linkis-Doc-master/en_US/Images/Architecture/Linkis0.X-services-list.png
deleted file mode 100644
index c669abf..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Architecture/Linkis0.X-services-list.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Linkis1.0-EngineConn-architecture.png b/Linkis-Doc-master/en_US/Images/Architecture/Linkis1.0-EngineConn-architecture.png
deleted file mode 100644
index d95da89..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Architecture/Linkis1.0-EngineConn-architecture.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Linkis1.0-NewEngine-architecture.png b/Linkis-Doc-master/en_US/Images/Architecture/Linkis1.0-NewEngine-architecture.png
deleted file mode 100644
index b1d60bf..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Architecture/Linkis1.0-NewEngine-architecture.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Linkis1.0-architecture.png b/Linkis-Doc-master/en_US/Images/Architecture/Linkis1.0-architecture.png
deleted file mode 100644
index 825672b..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Architecture/Linkis1.0-architecture.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Linkis1.0-newEngine-initialization.png b/Linkis-Doc-master/en_US/Images/Architecture/Linkis1.0-newEngine-initialization.png
deleted file mode 100644
index 003b38e..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Architecture/Linkis1.0-newEngine-initialization.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Linkis1.0-services-list.png b/Linkis-Doc-master/en_US/Images/Architecture/Linkis1.0-services-list.png
deleted file mode 100644
index f768545..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Architecture/Linkis1.0-services-list.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/PublicEnhencementArchitecture.png b/Linkis-Doc-master/en_US/Images/Architecture/PublicEnhencementArchitecture.png
deleted file mode 100644
index bcf72a5..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Architecture/PublicEnhencementArchitecture.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-01.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-01.png
deleted file mode 100644
index f61c49a..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-01.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-02.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-02.png
deleted file mode 100644
index a2e1022..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-02.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-01.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-01.png
deleted file mode 100644
index 5f4272f..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-01.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-02.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-02.png
deleted file mode 100644
index 9bb177a..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-02.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-03.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-03.png
deleted file mode 100644
index 00d1f4a..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-03.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-04.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-04.png
deleted file mode 100644
index 439c8e2..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-04.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-05.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-05.png
deleted file mode 100644
index 081d514..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-05.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-01.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-01.png
deleted file mode 100644
index e343579..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-01.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-02.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-02.png
deleted file mode 100644
index 012eb65..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-02.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-03.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-03.png
deleted file mode 100644
index c3a43b9..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-03.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-01.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-01.png
deleted file mode 100644
index 719599a..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-01.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-02.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-02.png
deleted file mode 100644
index 2277a70..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-02.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-03.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-03.png
deleted file mode 100644
index df58d96..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-03.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-04.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-04.png
deleted file mode 100644
index 1e13445..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-04.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-01.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-01.png
deleted file mode 100644
index 7e410fb..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-01.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-02.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-02.png
deleted file mode 100644
index 097b7f1..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-02.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-03.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-03.png
deleted file mode 100644
index 7a4d462..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-03.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-persistence-01.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-persistence-01.png
deleted file mode 100644
index fdd6623..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-persistence-01.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-01.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-01.png
deleted file mode 100644
index b366462..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-01.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-02.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-02.png
deleted file mode 100644
index 2a1e403..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-02.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-03.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-03.png
deleted file mode 100644
index 32336eb..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-03.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-04.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-04.png
deleted file mode 100644
index fdb60fc..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-04.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-05.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-05.png
deleted file mode 100644
index 45dcc43..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-05.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-06.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-06.png
deleted file mode 100644
index 2175704..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-06.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-07.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-07.png
deleted file mode 100644
index 9d357af..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-07.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-01.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-01.png
deleted file mode 100644
index b08efd3..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-01.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-02.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-02.png
deleted file mode 100644
index 13ca37e..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-02.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-03.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-03.png
deleted file mode 100644
index 36a4d96..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-03.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-04.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-04.png
deleted file mode 100644
index 0a5ae1d..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-04.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/bml-02.png b/Linkis-Doc-master/en_US/Images/Architecture/bml-02.png
deleted file mode 100644
index fed79f7..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Architecture/bml-02.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/linkis-engineConnPlugin-01.png b/Linkis-Doc-master/en_US/Images/Architecture/linkis-engineConnPlugin-01.png
deleted file mode 100644
index 2d2d134..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Architecture/linkis-engineConnPlugin-01.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/linkis-intro-01.png b/Linkis-Doc-master/en_US/Images/Architecture/linkis-intro-01.png
deleted file mode 100644
index 60b575d..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Architecture/linkis-intro-01.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/linkis-intro-02.png b/Linkis-Doc-master/en_US/Images/Architecture/linkis-intro-02.png
deleted file mode 100644
index a31e681..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Architecture/linkis-intro-02.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/linkis-microservice-gov-01.png b/Linkis-Doc-master/en_US/Images/Architecture/linkis-microservice-gov-01.png
deleted file mode 100644
index ac46424..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Architecture/linkis-microservice-gov-01.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/linkis-microservice-gov-03.png b/Linkis-Doc-master/en_US/Images/Architecture/linkis-microservice-gov-03.png
deleted file mode 100644
index b53c8e1..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Architecture/linkis-microservice-gov-03.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/linkis-publicService-01.png b/Linkis-Doc-master/en_US/Images/Architecture/linkis-publicService-01.png
deleted file mode 100644
index d503573..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Architecture/linkis-publicService-01.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/EngineUsage/hive-config.png b/Linkis-Doc-master/en_US/Images/EngineUsage/hive-config.png
deleted file mode 100644
index 9b3df01..0000000
Binary files a/Linkis-Doc-master/en_US/Images/EngineUsage/hive-config.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/EngineUsage/hive-run.png b/Linkis-Doc-master/en_US/Images/EngineUsage/hive-run.png
deleted file mode 100644
index 287b1ab..0000000
Binary files a/Linkis-Doc-master/en_US/Images/EngineUsage/hive-run.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/EngineUsage/jdbc-conf.png b/Linkis-Doc-master/en_US/Images/EngineUsage/jdbc-conf.png
deleted file mode 100644
index 39397d3..0000000
Binary files a/Linkis-Doc-master/en_US/Images/EngineUsage/jdbc-conf.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/EngineUsage/jdbc-run.png b/Linkis-Doc-master/en_US/Images/EngineUsage/jdbc-run.png
deleted file mode 100644
index fe51598..0000000
Binary files a/Linkis-Doc-master/en_US/Images/EngineUsage/jdbc-run.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/EngineUsage/pyspakr-run.png b/Linkis-Doc-master/en_US/Images/EngineUsage/pyspakr-run.png
deleted file mode 100644
index c80c85b..0000000
Binary files a/Linkis-Doc-master/en_US/Images/EngineUsage/pyspakr-run.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/EngineUsage/python-config.png b/Linkis-Doc-master/en_US/Images/EngineUsage/python-config.png
deleted file mode 100644
index 2bf1791..0000000
Binary files a/Linkis-Doc-master/en_US/Images/EngineUsage/python-config.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/EngineUsage/python-run.png b/Linkis-Doc-master/en_US/Images/EngineUsage/python-run.png
deleted file mode 100644
index 65467af..0000000
Binary files a/Linkis-Doc-master/en_US/Images/EngineUsage/python-run.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/EngineUsage/queue-set.png b/Linkis-Doc-master/en_US/Images/EngineUsage/queue-set.png
deleted file mode 100644
index 735a670..0000000
Binary files a/Linkis-Doc-master/en_US/Images/EngineUsage/queue-set.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/EngineUsage/scala-run.png b/Linkis-Doc-master/en_US/Images/EngineUsage/scala-run.png
deleted file mode 100644
index 7c01aad..0000000
Binary files a/Linkis-Doc-master/en_US/Images/EngineUsage/scala-run.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/EngineUsage/shell-run.png b/Linkis-Doc-master/en_US/Images/EngineUsage/shell-run.png
deleted file mode 100644
index 734bdb2..0000000
Binary files a/Linkis-Doc-master/en_US/Images/EngineUsage/shell-run.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/EngineUsage/spark-conf.png b/Linkis-Doc-master/en_US/Images/EngineUsage/spark-conf.png
deleted file mode 100644
index 353dbd6..0000000
Binary files a/Linkis-Doc-master/en_US/Images/EngineUsage/spark-conf.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/EngineUsage/sparksql-run.png b/Linkis-Doc-master/en_US/Images/EngineUsage/sparksql-run.png
deleted file mode 100644
index f0b1d1b..0000000
Binary files a/Linkis-Doc-master/en_US/Images/EngineUsage/sparksql-run.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/EngineUsage/workflow.png b/Linkis-Doc-master/en_US/Images/EngineUsage/workflow.png
deleted file mode 100644
index 3a5919f..0000000
Binary files a/Linkis-Doc-master/en_US/Images/EngineUsage/workflow.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Linkis_1.0_architecture.png b/Linkis-Doc-master/en_US/Images/Linkis_1.0_architecture.png
deleted file mode 100644
index 9b6cc90..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Linkis_1.0_architecture.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/Q&A.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/Q&A.png
deleted file mode 100644
index 121d7f3..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/Q&A.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/code-fix-01.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/code-fix-01.png
deleted file mode 100644
index 27bdddb..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/code-fix-01.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/db-config-01.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/db-config-01.png
deleted file mode 100644
index fa1f1c8..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/db-config-01.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/db-config-02.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/db-config-02.png
deleted file mode 100644
index c2f8443..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/db-config-02.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-01.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-01.png
deleted file mode 100644
index 9834b3d..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-01.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-02.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-02.png
deleted file mode 100644
index c7621b5..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-02.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-03.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-03.png
deleted file mode 100644
index 16788c3..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-03.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-04.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-04.png
deleted file mode 100644
index cb944ee..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-04.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-05.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-05.png
deleted file mode 100644
index 2c5972c..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-05.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-06.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-06.png
deleted file mode 100644
index a64cec6..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-06.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-07.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-07.png
deleted file mode 100644
index 935d5bc..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-07.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-08.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-08.png
deleted file mode 100644
index d2a3328..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-08.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/hive-config-01.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/hive-config-01.png
deleted file mode 100644
index 6bd0edb..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/hive-config-01.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-01.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-01.png
deleted file mode 100644
index 01090d1..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-01.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-02.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-02.png
deleted file mode 100644
index 0f68f12..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-02.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-03.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-03.png
deleted file mode 100644
index 8fb4464..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-03.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-04.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-04.png
deleted file mode 100644
index 5635a20..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-04.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-05.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-05.png
deleted file mode 100644
index c341a9d..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-05.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-06.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-06.png
deleted file mode 100644
index b0624ef..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-06.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-07.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-07.png
deleted file mode 100644
index 402f0c9..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-07.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-08.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-08.png
deleted file mode 100644
index 27c1824..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-08.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-09.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-09.png
deleted file mode 100644
index 5b27b4b..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-09.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-10.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-10.png
deleted file mode 100644
index 7c361e7..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-10.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/page-show-01.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/page-show-01.png
deleted file mode 100644
index d953cb6..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/page-show-01.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/page-show-02.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/page-show-02.png
deleted file mode 100644
index af273bb..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/page-show-02.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/page-show-03.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/page-show-03.png
deleted file mode 100644
index c36bb30..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/page-show-03.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/searching_keywords.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/searching_keywords.png
deleted file mode 100644
index cada716..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/searching_keywords.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/shell-error-01.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/shell-error-01.png
deleted file mode 100644
index 910150e..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/shell-error-01.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/shell-error-02.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/shell-error-02.png
deleted file mode 100644
index 71d5e7e..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/shell-error-02.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/shell-error-03.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/shell-error-03.png
deleted file mode 100644
index 4bb9cfe..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/shell-error-03.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/shell-error-04.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/shell-error-04.png
deleted file mode 100644
index c2df857..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/shell-error-04.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/shell-error-05.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/shell-error-05.png
deleted file mode 100644
index 3635584..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/shell-error-05.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-01.png b/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-01.png
deleted file mode 100644
index 9834b3d..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-01.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-02.png b/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-02.png
deleted file mode 100644
index c7621b5..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-02.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-03.png b/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-03.png
deleted file mode 100644
index 16788c3..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-03.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-04.png b/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-04.png
deleted file mode 100644
index cb944ee..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-04.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-05.png b/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-05.png
deleted file mode 100644
index 2c5972c..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-05.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-06.png b/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-06.png
deleted file mode 100644
index a64cec6..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-06.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-07.png b/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-07.png
deleted file mode 100644
index 935d5bc..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-07.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-08.png b/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-08.png
deleted file mode 100644
index d2a3328..0000000
Binary files a/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-08.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/deployment/Linkis1.0_combined_eureka.png b/Linkis-Doc-master/en_US/Images/deployment/Linkis1.0_combined_eureka.png
deleted file mode 100644
index 809dbee..0000000
Binary files a/Linkis-Doc-master/en_US/Images/deployment/Linkis1.0_combined_eureka.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/wedatasphere_contact_01.png b/Linkis-Doc-master/en_US/Images/wedatasphere_contact_01.png
deleted file mode 100644
index 5a3d80e..0000000
Binary files a/Linkis-Doc-master/en_US/Images/wedatasphere_contact_01.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Images/wedatasphere_stack_Linkis.png b/Linkis-Doc-master/en_US/Images/wedatasphere_stack_Linkis.png
deleted file mode 100644
index 36060b9..0000000
Binary files a/Linkis-Doc-master/en_US/Images/wedatasphere_stack_Linkis.png and /dev/null differ
diff --git a/Linkis-Doc-master/en_US/Tuning_and_Troubleshooting/Configuration.md b/Linkis-Doc-master/en_US/Tuning_and_Troubleshooting/Configuration.md
deleted file mode 100644
index c4652ea..0000000
--- a/Linkis-Doc-master/en_US/Tuning_and_Troubleshooting/Configuration.md
+++ /dev/null
@@ -1,217 +0,0 @@
-# Linkis1.0 Configurations
-
-> The configuration of Linkis1.0 is simplified on the basis of Linkis0.x. A public configuration file linkis.properties is provided in the conf directory, so that common configuration parameters do not need to be configured in multiple microservices at the same time. This document lists the parameters of Linkis1.0 module by module.
-
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Please note: this article only lists the Linkis configuration parameters that have an impact on operating performance or depend on the environment. Many configuration parameters that users do not need to care about have been omitted; interested users can browse the source code.
-
-### 1 General configuration
-
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;The general configuration can be set in the global linkis.properties; set once, it takes effect for every microservice.
-
-#### 1.1 Global configurations
-
-| Parameter name | Default value | Description |
-| ------------------------- | ------- | ------------------------------------------------------------ |
-| wds.linkis.encoding | utf-8 | Linkis default encoding format |
-| wds.linkis.date.pattern | yyyy-MM-dd'T'HH:mm:ssZ | Default date format |
-| wds.linkis.test.mode | false | Whether to enable debugging mode, if set to true, all microservices support password-free login, and all EngineConn open remote debugging ports |
-| wds.linkis.test.user | None | When wds.linkis.test.mode=true, the default login user for password-free login |
-| wds.linkis.home | /appcom/Install/LinkisInstall | Linkis installation directory, if it does not exist, it will automatically get the value of LINKIS_HOME |
-| wds.linkis.httpclient.default.connect.timeOut | 50000 | Linkis HttpClient default connection timeout |
-
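-For illustration, a minimal linkis.properties sketch that enables debugging mode using the parameters above (the values are examples only, not recommendations):
-
-```
-wds.linkis.encoding=utf-8
-wds.linkis.test.mode=true
-wds.linkis.test.user=hadoop
-```
-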
-#### 1.2 LDAP configurations
-
-| Parameter name | Default value | Description |
-| ------------------------- | ------- | ------------------------------------------------------------ |
-| wds.linkis.ldap.proxy.url | None | LDAP URL address |
-| wds.linkis.ldap.proxy.baseDN | None | LDAP baseDN address |
-| wds.linkis.ldap.proxy.userNameFormat | None | |
-
-#### 1.3 Hadoop configuration parameters
-
-| Parameter name | Default value | Description |
-| ------------------------- | ------- | ------------------------------------------------------------ |
-| wds.linkis.hadoop.root.user | hadoop | HDFS super user |
-| wds.linkis.filesystem.hdfs.root.path | None | User's HDFS default root path |
-| wds.linkis.keytab.enable | false | Whether to enable kerberos |
-| wds.linkis.keytab.file | /appcom/keytab | Kerberos keytab path, effective only when wds.linkis.keytab.enable=true |
-| wds.linkis.keytab.host.enabled | false | |
-| wds.linkis.keytab.host | 127.0.0.1 | |
-| hadoop.config.dir | None | If not configured, it will be read from the environment variable HADOOP_CONF_DIR |
-| wds.linkis.hadoop.external.conf.dir.prefix | /appcom/config/external-conf/hadoop | hadoop additional configuration |
-
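-For example, a kerberized cluster might set the following (a sketch; the keytab path and Hadoop conf directory are assumptions to adjust for your environment):
-
-```
-wds.linkis.keytab.enable=true
-wds.linkis.keytab.file=/appcom/keytab
-hadoop.config.dir=/appcom/config/hadoop-config
-```
-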
-#### 1.4 Linkis RPC configuration parameters
-
-| Parameter name | Default value | Description |
-| ------------------------- | ------- | ------------------------------------------------------------ |
-| wds.linkis.rpc.broadcast.thread.num | 10 | Linkis RPC broadcast thread number (**Recommended default value**) |
-| wds.linkis.ms.rpc.sync.timeout | 60000 | Default processing timeout of the Linkis RPC Receiver |
-| wds.linkis.rpc.eureka.client.refresh.interval | 1s | Refresh interval of Eureka client's microservice list (**Recommended default value**) |
-| wds.linkis.rpc.eureka.client.refresh.wait.time.max | 1m | Refresh maximum waiting time (**recommended default value**) |
-| wds.linkis.rpc.receiver.asyn.consumer.thread.max | 10 | Maximum number of Receiver Consumer threads (**If there are many online users, it is recommended to increase this parameter appropriately**) |
-| wds.linkis.rpc.receiver.asyn.consumer.freeTime.max | 2m | Receiver Consumer maximum idle time |
-| wds.linkis.rpc.receiver.asyn.queue.size.max | 1000 | The maximum number of buffers in the receiver consumption queue (**If there are many online users, it is recommended to increase this parameter appropriately**) |
-| wds.linkis.rpc.sender.asyn.consumer.thread.max", 5 | Sender Consumer maximum number of threads |
-| wds.linkis.rpc.sender.asyn.consumer.freeTime.max | 2m | Sender Consumer Maximum Free Time |
-| wds.linkis.rpc.sender.asyn.queue.size.max | 300 | Sender consumption queue maximum buffer number |
-
-### 2. Computation governance configuration parameters
-
-#### 2.1 Entrance configuration parameters
-
-| Parameter name | Default value | Description |
-| ------------------------- | ------- | ------------------------------------------------------------ |
-| wds.linkis.spark.engine.version | 2.4.3 | The default Spark version used when the user submits a script without specifying a version |
-| wds.linkis.hive.engine.version | 1.2.1 | The default Hive version used when the user submits a script without a specified version |
-| wds.linkis.python.engine.version | python2 | The default Python version used when the user submits a script without specifying a version |
-| wds.linkis.jdbc.engine.version | 4 | The default JDBC version used when the user submits the script without specifying the version |
-| wds.linkis.shell.engine.version | 1 | The default shell version used when the user submits a script without specifying a version |
-| wds.linkis.appconn.engine.version | v1 | The default AppConn version used when the user submits a script without a specified version |
-| wds.linkis.entrance.scheduler.maxParallelismUsers | 1000 | Maximum number of concurrent users supported by Entrance |
-| wds.linkis.entrance.job.persist.wait.max | 5m | Maximum time for Entrance to wait for JobHistory to persist a Job |
-| wds.linkis.entrance.config.log.path | None | If not configured, the value of wds.linkis.filesystem.hdfs.root.path is used by default |
-| wds.linkis.default.requestApplication.name | IDE | The default submission system when the submission system is not specified |
-| wds.linkis.default.runType | sql | The default script type when the script type is not specified |
-| wds.linkis.warn.log.exclude | org.apache,hive.ql,hive.metastore,com.netflix,com.webank.wedatasphere | Real-time WARN-level logs that are not output to the client by default |
-| wds.linkis.log.exclude | org.apache, hive.ql, hive.metastore, com.netflix, com.webank.wedatasphere, com.webank | Real-time INFO-level logs that are not output to the client by default |
-| wds.linkis.instance | 3 | User's default number of concurrent jobs per engine |
-| wds.linkis.max.ask.executor.time | 5m | Apply to LinkisManager for the maximum time available for EngineConn |
-| wds.linkis.hive.special.log.include | org.apache.hadoop.hive.ql.exec.Task | When pushing Hive logs to the client, which logs are not filtered by default |
-| wds.linkis.spark.special.log.include | com.webank.wedatasphere.linkis.engine.spark.utils.JobProgressUtil | When pushing Spark logs to the client, which logs are not filtered by default |
-| wds.linkis.entrance.shell.danger.check.enabled | false | Whether to check and block dangerous shell syntax |
-| wds.linkis.shell.danger.usage | rm,sh,find,kill,python,for,source,hdfs,hadoop,spark-sql,spark-submit,pyspark,spark-shell,hive,yarn | Default dangerous shell syntax |
-| wds.linkis.shell.white.usage | cd,ls | Shell whitelist syntax |
-| wds.linkis.sql.default.limit | 5000 | SQL default maximum return result set rows |
-
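-For example, to align the default engine versions with what is actually deployed (a sketch; substitute your own cluster's versions):
-
-```
-wds.linkis.spark.engine.version=2.4.3
-wds.linkis.hive.engine.version=1.2.1
-wds.linkis.python.engine.version=python2
-```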
-
-#### 2.2 EngineConn configuration parameters
-
-| Parameter name | Default value | Description |
-| ------------------------- | ------- | ------------------------------------------------------------ |
-| wds.linkis.engineconn.resultSet.default.store.path | hdfs:///tmp | Job result set default storage path |
-| wds.linkis.engine.resultSet.cache.max | 0k | If the result set is smaller than this size, EngineConn returns it to Entrance directly without writing it to disk |
-| wds.linkis.engine.default.limit | 5000 | |
-| wds.linkis.engine.lock.expire.time | 120000 | The maximum idle time of the engine lock, i.e. how long the lock may be held after Entrance acquires it without submitting code to EngineConn before it is released |
-| wds.linkis.engineconn.ignore.words | org.apache.spark.deploy.yarn.Client | Logs that are ignored by default when the Engine pushes logs to the Entrance side |
-| wds.linkis.engineconn.pass.words | org.apache.hadoop.hive.ql.exec.Task | The log that must be pushed by default when the Engine pushes logs to the Entrance side |
-| wds.linkis.engineconn.heartbeat.time | 3m | Default heartbeat interval from EngineConn to LinkisManager |
-| wds.linkis.engineconn.max.free.time | 1h | EngineConn's maximum idle time |
-
-
-#### 2.3 EngineConnManager configuration parameters
-
-| Parameter name | Default value | Description |
-| ------------------------- | ------- | ------------------------------------------------------------ |
-| wds.linkis.ecm.memory.max | 80g | Maximum total memory ECM can use to start EngineConns |
-| wds.linkis.ecm.cores.max | 50 | Maximum total number of CPU cores ECM can use to start EngineConns |
-| wds.linkis.ecm.engineconn.instances.max | 50 | The maximum number of EngineConns that can be started; it is generally recommended to set this the same as wds.linkis.ecm.cores.max |
-| wds.linkis.ecm.protected.memory | 4g | ECM protected memory, i.e. the memory ECM uses to start EngineConns cannot exceed wds.linkis.ecm.memory.max - wds.linkis.ecm.protected.memory |
-| wds.linkis.ecm.protected.cores.max | 2 | The number of protected CPU cores of ECM, with the same meaning as wds.linkis.ecm.protected.memory |
-| wds.linkis.ecm.protected.engine.instances | 2 | Number of protected instances of ECM |
-| wds.linkis.engineconn.wait.callback.pid | 3s | Waiting time for EngineConn to return pid |
-
-#### 2.4 LinkisManager configuration parameters
-
-| Parameter name | Default value | Description |
-| ------------------------- | ------- | ------------------------------------------------------------ |
-| wds.linkis.manager.am.engine.start.max.time | 10m | The maximum time for LinkisManager to start a new EngineConn |
-| wds.linkis.manager.am.engine.reuse.max.time | 5m | The maximum selection time when LinkisManager reuses an existing EngineConn |
-| wds.linkis.manager.am.engine.reuse.count.limit | 10 | The maximum number of polling attempts when LinkisManager reuses an existing EngineConn |
-| wds.linkis.multi.user.engine.types | jdbc,es,presto | Engine types for which the user is not used as part of the reuse rule when LinkisManager reuses an existing EngineConn |
-| wds.linkis.rm.instance | 10 | The default maximum number of instances per user per engine |
-| wds.linkis.rm.yarnqueue.cores.max | 150 | Maximum number of cores per user in each engine usage queue |
-| wds.linkis.rm.yarnqueue.memory.max | 450g | The maximum amount of memory per user in each engine's use queue |
-| wds.linkis.rm.yarnqueue.instance.max | 30 | The maximum number of applications launched by each user in the queue of each engine |
-
-### 3. Each engine configuration parameter
-
-#### 3.1 JDBC engine configuration parameters
-
-| Parameter name | Default value | Description |
-| ------------------------- | ------- | ------------------------------------------------------------ |
-| wds.linkis.jdbc.default.limit | 5000 | The default maximum return result set rows |
-| wds.linkis.jdbc.support.dbs | mysql=>com.mysql.jdbc.Driver,postgresql=>org.postgresql.Driver,oracle=>oracle.jdbc.driver.OracleDriver,hive2=>org.apache.hive.jdbc.HiveDriver,presto=>com.facebook.presto.jdbc.PrestoDriver | Drivers supported by the JDBC engine |
-| wds.linkis.engineconn.jdbc.concurrent.limit | 100 | Maximum number of concurrent SQL executions |
-
-
-#### 3.2 Python engine configuration parameters
-
-| Parameter name | Default value | Description |
-| ------------------------- | ------- | ------------------------------------------------------------ |
-| pythonVersion | /appcom/Install/anaconda3/bin/python | Python command path |
-| python.path | None | Specify an additional path for Python, which only accepts shared storage paths |
-
-#### 3.3 Spark engine configuration parameters
-
-| Parameter name | Default value | Description |
-| ------------------------- | ------- | ------------------------------------------------------------ |
-| wds.linkis.engine.spark.language-repl.init.time | 30s | Maximum initialization time for Scala and Python command interpreters |
-| PYSPARK_DRIVER_PYTHON | python | Python command path |
-| wds.linkis.server.spark-submit | spark-submit | spark-submit command path |
-
-### 4. PublicEnhancements configuration parameters
-
-#### 4.1 BML configuration parameters
-
-| Parameter name | Default value | Description |
-| ------------------------- | ------- | ------------------------------------------------------------ |
-| wds.linkis.bml.dws.version | v1 | Version number requested by Linkis Restful |
-| wds.linkis.bml.auth.token.key | Validation-Code | Password-free token-key for BML request |
-| wds.linkis.bml.auth.token.value | BML-AUTH | Password-free token-value requested by BML |
-| wds.linkis.bml.hdfs.prefix | /tmp/linkis | The prefix file path of the BML file stored on hdfs |
-
-#### 4.2 Metadata configuration parameters
-
-| Parameter name | Default value | Description |
-| ------------------------- | ------- | ------------------------------------------------------------ |
-| hadoop.config.dir | /appcom/config/hadoop-config | If it does not exist, the value of the environment variable HADOOP_CONF_DIR is used by default |
-| hive.config.dir | /appcom/config/hive-config | If it does not exist, the value of the environment variable HIVE_CONF_DIR is used by default |
-| hive.meta.url | None | The URL of the HiveMetaStore database. If hive.config.dir is not configured, this value must be configured |
-| hive.meta.user | None | User of the HiveMetaStore database |
-| hive.meta.password | None | Password of the HiveMetaStore database |
-
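-These hive.meta parameters are the same ones referenced in Q3 of the Q&A document; a sketch with placeholder values (the JDBC URL and credentials below are assumptions, not defaults):
-
-```
-hive.meta.url=jdbc:mysql://127.0.0.1:3306/hivemeta?characterEncoding=UTF-8
-hive.meta.user=hive
-hive.meta.password=your_password
-```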
-
-#### 4.3 JobHistory configuration parameters
-
-| Parameter name | Default value | Description |
-| ------------------------- | ------- | ------------------------------------------------------------ |
-| wds.linkis.jobhistory.admin | None | Specifies the default admin users, who can view everyone's job execution history |
-
-
-#### 4.4 FileSystem configuration parameters
-
-| Parameter name | Default value | Description |
-| ------------------------- | ------- | ------------------------------------------------------------ |
-| wds.linkis.filesystem.root.path | file:///tmp/linkis/ | User's Linux local root directory |
-| wds.linkis.filesystem.hdfs.root.path | hdfs:///tmp/ | User's HDFS root directory |
-| wds.linkis.workspace.filesystem.hdfsuserrootpath.suffix | /linkis/ | The first-level directory appended after the user's HDFS root directory. The user's actual root directory is: ${hdfs.root.path}/${user}/${hdfsuserrootpath.suffix} |
-| wds.linkis.workspace.resultset.download.is.limit | true | Whether to limit the number of rows when the client downloads a result set |
-| wds.linkis.workspace.resultset.download.maxsize.csv | 5000 | The maximum number of rows when a result set is downloaded as a CSV file |
-| wds.linkis.workspace.resultset.download.maxsize.excel | 5000 | The maximum number of rows when a result set is downloaded as an Excel file |
-| wds.linkis.workspace.filesystem.get.timeout | 2000L | The maximum timeout for requests to the underlying file system. (**If the performance of your HDFS or Linux machine is low, it is recommended to increase this value appropriately**) |
-
-#### 4.5 UDF configuration parameters
-
-| Parameter name | Default value | Description |
-| ------------------------- | ------- | ------------------------------------------------------------ |
-| wds.linkis.udf.share.path | /mnt/bdap/udf | The storage path of the shared UDF, it is recommended to set it to the HDFS path |
-
-### 5. MicroService configuration parameters
-
-#### 5.1 Gateway configuration parameters
-
-| Parameter name | Default value | Description |
-| ------------------------- | ------- | ------------------------------------------------------------ |
-| wds.linkis.gateway.conf.enable.proxy.user | false | Whether to enable proxy user mode, if enabled, the login user’s request will be proxied to the proxy user for execution |
-| wds.linkis.gateway.conf.proxy.user.config | proxy.properties | Storage file of proxy rules |
-| wds.linkis.gateway.conf.proxy.user.scan.interval | 600000 | Proxy file refresh interval |
-| wds.linkis.gateway.conf.enable.token.auth | false | Whether to enable the Token login mode, if enabled, allow access to Linkis in the form of tokens |
-| wds.linkis.gateway.conf.token.auth.config | token.properties | Token rule storage file |
-| wds.linkis.gateway.conf.token.auth.scan.interval | 600000 | Token file refresh interval |
-| wds.linkis.gateway.conf.url.pass.auth | /dws/ | Request paths that are passed through by default without login verification |
-| wds.linkis.gateway.conf.enable.sso | false | Whether to enable SSO user login mode |
-| wds.linkis.gateway.conf.sso.interceptor | None | If the SSO login mode is enabled, the user needs to implement SSOInterceptor to jump to the SSO login page |
-| wds.linkis.admin.user | hadoop | Administrator user list |
-| wds.linkis.login_encrypt.enable | false | Whether the password is transmitted with RSA encryption when the user logs in |
-| wds.linkis.enable.gateway.auth | false | Whether to enable the Gateway IP whitelist mechanism |
-| wds.linkis.gateway.auth.file | auth.txt | IP whitelist storage file |
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Tuning_and_Troubleshooting/Q&A.md b/Linkis-Doc-master/en_US/Tuning_and_Troubleshooting/Q&A.md
deleted file mode 100644
index c78f440..0000000
--- a/Linkis-Doc-master/en_US/Tuning_and_Troubleshooting/Q&A.md
+++ /dev/null
@@ -1,255 +0,0 @@
-#### Q1. Linkis startup error: NoSuchMethodError: getSessionManager()Lorg/eclipse/jetty/server/SessionManager
-
-Specific stack:
-```
-Failed startup of context osbwejJettyEmbeddedWebAppContext@6c6919ff{application,/,[file:///tmp/jetty-docbase.9102.6375358926927953589/],UNAVAILABLE} java.lang.NoSuchMethodError: org.eclipse.jetty.server.session.SessionHandler.getSessionManager()Lorg/eclipse/jetty/server/SessionManager;
-at org.eclipse.jetty.servlet.ServletContextHandler$Context.getSessionCookieConfig(ServletContextHandler.java:1415) ~[jetty-servlet-9.3.20.v20170531.jar:9.3.20.v20170531]
-```
-
-Solution: jetty-servlet and jetty-security versions need to be upgraded from 9.3.20 to 9.4.20;
-
-#### Q2. When starting the microservice linkis-ps-cs, the error "DebuggClassWriter overrides final method visit" is reported
-
-Specific exception stack:
-
-![linkis-exception-01.png](../Images/Tuning_and_Troubleshooting/linkis-exception-01.png)
-
-Solution: jar package conflict, delete asm-5.0.4.jar;
-
-#### Q3. When starting the microservice linkis-ps-datasource, an NPE is thrown in JdbcUtils.getDriverClassName
-
-Specific exception stack:
-
-![linkis-exception-02.png](../Images/Tuning_and_Troubleshooting/linkis-exception-02.png)
-
-
-Solution: caused by a linkis-datasource configuration problem; modify the three parameters beginning with hive.meta in linkis.properties:
-
-![hive-config-01.png](../Images/Tuning_and_Troubleshooting/hive-config-01.png)
-
-
-#### Q4. When starting the microservice linkis-ps-datasource, the following exception ClassNotFoundException HttpClient is reported:
-
-Specific exception stack:
-
-![linkis-exception-03.png](../Images/Tuning_and_Troubleshooting/linkis-exception-03.png)
-
-Solution: There is a problem with linkis-metadata-dev-1.0.0.jar compiled in 1.0, and it needs to be recompiled and packaged.
-
-#### Q5. Clicking a database in Scriptis returns no data; the phenomenon is as follows:
-
-![page-show-01.png](../Images/Tuning_and_Troubleshooting/page-show-01.png)
-
-Solution: The reason is that hive is not authorized to the hadoop user. The authorization data is as follows:
-
-![db-config-01.png](../Images/Tuning_and_Troubleshooting/db-config-01.png)
-
-#### Q6. When the shell engine is scheduled for execution, the page reports "Insufficient resource, requesting available engine timeout", and the engineconnmanager linkis.out reports the following error:
-
-![linkis-exception-04.png](../Images/Tuning_and_Troubleshooting/linkis-exception-04.png)
-
-Solution: The reason is that /appcom/tmp/hadoop/workDir was not created. Create it in advance as the root user, and then grant permissions to the hadoop user.
-
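-A minimal sketch of that pre-creation step (assuming the submitting user is hadoop):
-
-```
-sudo mkdir -p /appcom/tmp/hadoop/workDir
-sudo chown -R hadoop:hadoop /appcom/tmp/hadoop/workDir
-```
-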
-#### Q7. When the shell engine is scheduled for execution, the engine execution directory reports the following error /bin/java: No such file or directory:
-
-![shell-error-01.png](../Images/Tuning_and_Troubleshooting/shell-error-01.png)
-
-Solution: There is a problem with the local java environment variables, and you need to make a symbolic link to the java command.
-
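-For example, assuming JAVA_HOME points at the JDK in use:
-
-```
-sudo ln -s $JAVA_HOME/bin/java /bin/java
-```
-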
-#### Q8. When the hive engine is scheduled, the following error is reported: EngineConnPluginNotFoundException: errorCode:70063
-
-![linkis-exception-05.png](../Images/Tuning_and_Troubleshooting/linkis-exception-05.png)
-
-Solution: It is caused by not modifying the version of the corresponding engine during installation, so the engine type inserted into the db is the default version, which does not match the compiled version. Specific modification steps: cd /appcom/Install/dss-linkis/linkis/lib/linkis-engineconn-plugins/, rename the v2.1.1 directory in the dist directory to v1.2.1, and rename the 2.1.1 subdirectory in the plugin directory to 1.2.1 of the default versio [...]
-
-#### Q9. After the linkis microservice is started, the following error is reported: Load balancer does not have available server for client:
-
-![page-show-02.png](../Images/Tuning_and_Troubleshooting/page-show-02.png)
-
-Solution: This is because the linkis microservice has just started and the registration has not been completed. Wait for 1~2 minutes and try again.
-
-#### Q10. When the hive engine is scheduled for execution, the following error is reported: operation failed NullPointerException:
-
-![linkis-exception-06.png](../Images/Tuning_and_Troubleshooting/linkis-exception-06.png)
-
-
-Solution: The server lacks the environment variable; add export HIVE_CONF_DIR=/etc/hive/conf to /etc/profile.
-
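-For example (the path is the one from the solution above; adjust it to your cluster):
-
-```
-echo 'export HIVE_CONF_DIR=/etc/hive/conf' >> /etc/profile
-source /etc/profile
-```
-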
-#### Q11. When the hive engine is scheduled, the engineConnManager error log reports "method did not exist: SessionHandler", as follows:
-
-![linkis-exception-07.png](../Images/Tuning_and_Troubleshooting/linkis-exception-07.png)
-
-Solution: Under the hive engine lib, the jetty jar package conflicts, replace jetty-security and jetty-server with 9.4.20;
-
-#### Q12. After the hive engine restarts, the jetty 9.4 jar packages are always replaced by 9.3
-
-Solution: When the engine instance is generated, the jar packages are cached. First delete the hive-related records in the table linkis_engine_conn_plugin_bml_resources, then delete the 1.2.1.zip under the directory /appcom/Install/dss-linkis/linkis/lib/linkis-engineconn-plugins/hive/dist, and finally restart the engineplugin service; the jar packages under lib will then be updated successfully.
-
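-A sketch of the cleanup (the WHERE clause column name is an assumption; check your table schema and back up before deleting):
-
-```
-# in the linkis database:
-#   DELETE FROM linkis_engine_conn_plugin_bml_resources WHERE engine_conn_type = 'hive';
-# then remove the cached zip and restart the engineplugin service:
-find /appcom/Install/dss-linkis/linkis/lib/linkis-engineconn-plugins/hive/dist -name "*.zip" -delete
-```
-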
-#### Q13. When the hive engine is executed, the following error is reported: Lcom/google/common/collect/UnmodifiableIterator:
-
-```
-2021-03-16 13:32:23.304 ERROR [pool-2-thread-1] com.webank.wedatasphere.linkis.engineplugin.hive.executor.HiveEngineConnExecutor 140 run-query failed, reason: java.lang.IllegalAccessError: tried to access method com.google.common.collect.Iterators.emptyIterator() Lcom/google/common/collect/UnmodifiableIterator; from class org.apache.hadoop.hive.ql.exec.FetchOperator
-at org.apache.hadoop.hive.ql.exec.FetchOperator.<init>(FetchOperator.java:108) ~[hive-exec-2.1.1-cdh6.1.0.jar:2.1.1-cdh6.1.0]
-at org.apache.hadoop.hive.ql.exec.FetchTask.initialize(FetchTask.java:86) ~[hive-exec-2.1.1-cdh6.1.0.jar:2.1.1-cdh6.1.0]
-at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:629) ~[hive-exec-2.1.1-cdh6.1.0.jar:2.1.1-cdh6.1.0]
-at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1414) ~[hive-exec-2.1.1-cdh6.1.0.jar:2.1.1-cdh6.1.0]
-at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1543) ~[hive-exec-2.1.1-cdh6.1.0.jar:2.1.1-cdh6.1.0]
-at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1332) ~[hive-exec-2.1.1-cdh6.1.0.jar:2.1.1-cdh6.1.0]
-at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1321) ~[hive-exec-2.1.1-cdh6.1.0.jar:2.1.1-cdh6.1.0]
-at com.webank.wedatasphere.linkis.engineplugin.hive.executor.HiveEngineConnExecutor$$anon$1.run(HiveEngineConnExecutor.scala:152) [linkis-engineplugin-hive-dev-1.0.0.jar:?]
-at com.webank.wedatasphere.linkis.engineplugin.hive.executor.HiveEngineConnExecutor$$anon$1.run(HiveEngineConnExecutor.scala:126) [linkis-engineplugin-hive-dev-1.0.0.jar:?]
-```
-
-Solution: a guava package conflict; delete guava-25.1-jre.jar under hive/dist/v1.2.1/lib.
-
-#### Q14. When the hive engine is executed, the error is reported as follows: TaskExecutionServiceImpl 59 error-org/apache/curator/connection/ConnectionHandlingPolicy:
-
-```
-2021-03-16 16:17:40.649 INFO [pool-2-thread-1] com.webank.wedatasphere.linkis.engineplugin.hive.executor.HiveEngineConnExecutor 42 info-com.webank.wedatasphere.linkis.engineplugin.hive. executor.HiveEngineConnExecutor@36a7c96f change status Busy => Idle.
-2021-03-16 16:17:40.661 ERROR [pool-2-thread-1] com.webank.wedatasphere.linkis.engineconn.computation.executor.service.TaskExecutionServiceImpl 59 error-org/apache/curator/connection/ConnectionHandlingPolicy java .lang.NoClassDefFoundError: org/apache/curator/connection/ConnectionHandlingPolicy at org.apache.curator.framework.CuratorFrameworkFactory.builder(CuratorFrameworkFactory.java:78) ~[curator-framework-4.0.1.jar:4.0.1]
-at org.apache.hadoop.hive.ql.lockmgr.zookeeper.CuratorFrameworkSingleton.getInstance(CuratorFrameworkSingleton.java:59) ~[hive-exec-2.1.1-cdh6.1.0.jar:2.1.1-cdh6.1.0]
-at org.apache.hadoop.hive.ql.lockmgr.zookeeper.ZooKeeperHiveLockManager.setContext(ZooKeeperHiveLockManager.java:98) ~[hive-exec-2.1.1-cdh6.1.0.jar:2.1.1-cdh6.1.0]
-at org.apache.hadoop.hive.ql.lockmgr.DummyTxnManager.getLockManager(DummyTxnManager.java:87) ~[hive-exec-2.1.1-cdh6.1.0.jar:2.1.1-cdh6.1.0]
-at org.apache.hadoop.hive.ql.lockmgr.DummyTxnManager.acquireLocks(DummyTxnManager.java:121) ~[hive-exec-2.1.1-cdh6.1.0.jar:2.1.1-cdh6.1.0]
-at org.apache.hadoop.hive.ql.Driver.acquireLocksAndOpenTxn(Driver.java:1237) ~[hive-exec-2.1.1-cdh6.1.0.jar:2.1.1-cdh6.1.0]
-at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1607) ~[hive-exec-2.1.1-cdh6.1.0.jar:2.1.1-cdh6.1.0]
-at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1332) ~[hive-exec-2.1.1-cdh6.1.0.jar:2.1.1-cdh6.1.0]
-at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1321) ~[hive-exec-2.1.1-cdh6.1.0.jar:2.1.1-cdh6.1.0]
-at com.webank.wedatasphere.linkis.engineplugin.hive.executor.HiveEngineConnExecutor$$anon$1.run(HiveEngineConnExecutor.scala:152) ~[linkis-engineplugin-hive-dev-1.0.0.jar:?]
-at com.webank.wedatasphere.linkis.engineplugin.hive.executor.HiveEngineConnExecutor$$anon$1.run(HiveEngineConnExecutor.scala:126) ~[linkis-engineplugin-hive-dev-1.0.0.jar:?]
-at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_181]
-at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_181]
-at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875) ~[hadoop-common-3.0.0-cdh6.3.2.jar:?]
-at com.webank.wedatasphere.linkis.engineplugin.hive.executor.HiveEngineConnExecutor.executeLine(HiveEngineConnExecutor.scala:126) ~[linkis-engineplugin-hive-dev-1.0.0.jar:?]
-at com.webank.wedatasphere.linkis.engineconn.computation.executor.execute.ComputationExecutor$$anonfun$execute$1$$anonfun$apply$9$$anonfun$apply$10.apply(ComputationExecutor.scala:145) ~[linkis-computation -engineconn-dev-1.0.0.jar:?]
-at com.webank.wedatasphere.linkis.engineconn.computation.executor.execute.ComputationExecutor$$anonfun$execute$1$$anonfun$apply$9$$anonfun$apply$10.apply(ComputationExecutor.scala:144) ~[linkis-computation -engineconn-dev-1.0.0.jar:?]
-at com.webank.wedatasphere.linkis.common.utils.Utils$.tryCatch(Utils.scala:48) ~[linkis-common-dev-1.0.0.jar:?]
-at com.webank.wedatasphere.linkis.engineconn.computation.executor.execute.ComputationExecutor$$anonfun$execute$1$$anonfun$apply$9.apply(ComputationExecutor.scala:146) ~[linkis-computation-engineconn-dev-1.0 .0.jar:?]
-at com.webank.wedatasphere.linkis.engineconn.computation.executor.execute.ComputationExecutor$$anonfun$execute$1$$anonfun$apply$9.apply(ComputationExecutor.scala:140) ~[linkis-computation-engineconn-dev-1.0 .0.jar:?]
-at scala.collection.immutable.Range.foreach(Range.scala:160) ~[scala-library-2.11.8.jar:?]
-at com.webank.wedatasphere.linkis.engineconn.computation.executor.execute.ComputationExecutor$$anonfun$execute$1.apply(ComputationExecutor.scala:139) ~[linkis-computation-engineconn-dev-1.0.0.jar:? ]
-at com.webank.wedatasphere.linkis.engineconn.computation.executor.execute.ComputationExecutor$$anonfun$execute$1.apply(ComputationExecutor.scala:114) ~[linkis-computation-engineconn-dev-1.0.0.jar:? ]
-at com.webank.wedatasphere.linkis.common.utils.Utils$.tryFinally(Utils.scala:62) ~[linkis-common-dev-1.0.0.jar:?]
-at com.webank.wedatasphere.linkis.engineconn.acessible.executor.entity.AccessibleExecutor.ensureIdle(AccessibleExecutor.scala:42) ~[linkis-accessible-executor-dev-1.0.0.jar:?]
-at com.webank.wedatasphere.linkis.engineconn.acessible.executor.entity.AccessibleExecutor.ensureIdle(AccessibleExecutor.scala:36) ~[linkis-accessible-executor-dev-1.0.0.jar:?]
-at com.webank.wedatasphere.linkis.engineconn.computation.executor.execute.ComputationExecutor.ensureOp(ComputationExecutor.scala:103) ~[linkis-computation-engineconn-dev-1.0.0.jar:?]
-at com.webank.wedatasphere.linkis.engineconn.computation.executor.execute.ComputationExecutor.execute(ComputationExecutor.scala:114) ~[linkis-computation-engineconn-dev-1.0.0.jar:?]
-at com.webank.wedatasphere.linkis.engineconn.computation.executor.service.TaskExecutionServiceImpl$$anon$1$$anonfun$run$1.apply$mcV$sp(TaskExecutionServiceImpl.scala:139) [linkis-computation-engineconn-dev- 1.0.0.jar:?]
-at com.webank.wedatasphere.linkis.engineconn.computation.executor.service.TaskExecutionServiceImpl$$anon$1$$anonfun$run$1.apply(TaskExecutionServiceImpl.scala:138) [linkis-computation-engineconn-dev-1.0.0. jar:?]
-at com.webank.wedatasphere.linkis.engineconn.computation.executor.service.TaskExecutionServiceImpl$$anon$1$$anonfun$run$1.apply(TaskExecutionServiceImpl.scala:138) [linkis-computation-engineconn-dev-1.0.0. jar:?]
-at com.webank.wedatasphere.linkis.common.utils.Utils$.tryCatch(Utils.scala:48) [linkis-common-dev-1.0.0.jar:?]
-at com.webank.wedatasphere.linkis.common.utils.Utils$.tryAndWarn(Utils.scala:74) [linkis-common-dev-1.0.0.jar:?]
-at com.webank.wedatasphere.linkis.engineconn.computation.executor.service.TaskExecutionServiceImpl$$anon$1.run(TaskExecutionServiceImpl.scala:138) [linkis-computation-engineconn-dev-1.0.0.jar:?]
-at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_181]
-at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_181]
-at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_181]
-at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_181]
-at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
-Caused by: java.lang.ClassNotFoundException: org.apache.curator.connection.ConnectionHandlingPolicy atjava.net.URLClassLoader.findClass(URLClassLoader.java:381) ~[?:1.8.0_181]
-at java.lang.ClassLoader.loadClass(ClassLoader.java:424) ~[?:1.8.0_181]
-at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349) ~[?:1.8.0_181]
-at java.lang.ClassLoader.loadClass(ClassLoader.java:357) ~[?:1.8.0_181]
-... 39 more
-```
-
-Solution: The reason is that the Curator version must correspond to the ZooKeeper version: Curator 2.X supports ZooKeeper 3.4.X. So if you are currently on ZooKeeper 3.4.X, you should still use Curator 2.X, for example 2.7.0. Reference link: https://blog.csdn.net/muyingmiao/article/details/100183768
-
-#### Q15. When the python engine is scheduled, the following error is reported: Python process is not alive:
-
-![linkis-exception-08.png](../Images/Tuning_and_Troubleshooting/linkis-exception-08.png)
-
-Solution: The server has the anaconda3 package manager installed. After debugging python, two problems were found: (1) the pandas and matplotlib modules are missing and need to be installed manually; (2) the new version of the python engine depends on a higher python version, so first install python3, then make a symbolic link (as shown in the figure below) and restart the engineplugin service.
-
-![shell-error-02.png](../Images/Tuning_and_Troubleshooting/shell-error-02.png)
-
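-A sketch of the two fixes (the python3 location is an assumption; adjust to where python3 was actually installed):
-
-```
-# install the missing modules
-pip install pandas matplotlib
-# point the python command at python3, then restart the engineplugin service
-ln -sf /usr/local/bin/python3 /usr/bin/python
-```
-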
-#### Q16. When the spark engine is executed, the following error NoClassDefFoundError: org/apache/hadoop/hive/ql/io/orc/OrcFile is reported:
-
-```
-2021-03-19 15:12:49.227 INFO [dag-scheduler-event-loop] org.apache.spark.scheduler.DAGScheduler 57 logInfo -ShuffleMapStage 5 (show at <console>:69) failed in 21.269 s due to Job aborted due to stage failure: Task 1 in stage 5.0 failed 4 times, most recent failure: Lost task 1.3 in stage 5.0 (TID 139, cdh03, executor 6): java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql /io/orc/OrcFile
-at org.apache.spark.sql.hive.orc.OrcFileOperator$$anonfun$getFileReader$2.apply(OrcFileOperator.scala:75)
-at org.apache.spark.sql.hive.orc.OrcFileOperator$$anonfun$getFileReader$2.apply(OrcFileOperator.scala:73)
-at scala.collection.Iterator$$anon$11.next(Iterator.scala:410)
-at scala.collection.TraversableOnce$class.collectFirst(TraversableOnce.scala:145)
-at scala.collection.AbstractIterator.collectFirst(Iterator.scala:1334)
-at org.apache.spark.sql.hive.orc.OrcFileOperator$.getFileReader(OrcFileOperator.scala:90)
-at org.apache.spark.sql.hive.orc.OrcFileOperator$$anonfun$readSchema$2.apply(OrcFileOperator.scala:99)
-at org.apache.spark.sql.hive.orc.OrcFileOperator$$anonfun$readSchema$2.apply(OrcFileOperator.scala:99)
-at scala.collection.Iterator$$anon$11.next(Iterator.scala:410)
-at scala.collection.TraversableOnce$class.collectFirst(TraversableOnce.scala:145)
-at scala.collection.AbstractIterator.collectFirst(Iterator.scala:1334)
-at org.apache.spark.sql.hive.orc.OrcFileOperator$.readSchema(OrcFileOperator.scala:99)
-at org.apache.spark.sql.hive.orc.OrcFileFormat$$anonfun$buildReader$2.apply(OrcFileFormat.scala:160)
-at org.apache.spark.sql.hive.orc.OrcFileFormat$$anonfun$buildReader$2.apply(OrcFileFormat.scala:151)
-at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:148)
-at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:132)
-at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:126)
-at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:179)
-at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:103)
-at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(UnknownSource)
-at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
-at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$11$$anon$1.hasNext(WholeStageCodegenExec.scala:624)
-at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
-at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
-at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
-at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
-at org.apache.spark.scheduler.Task.run(Task.scala:121)
-at org.apache.spark.executor.Executor$TaskRunner$$anonfun$11.apply(Executor.scala:407)
-at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1408)
-at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:413)
-at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
-at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
-at java.lang.Thread.run(Thread.java:748)
-Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.io.orc.OrcFile
-at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
-at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
-at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
-at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
-... 33 more
-
-```
-
-Solution: On a cdh6.3.2 cluster, the spark engine classpath only contains /opt/cloudera/parcels/CDH-6.3.2-1.cdh6.3.2.p0.1605554/lib/spark/jars; hive-exec-2.1.1-cdh6.1.0.jar needs to be added there, then restart spark.
-
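-For example (destination path as given above; copy the jar from wherever your Hive deployment provides it):
-
-```
-cp hive-exec-2.1.1-cdh6.1.0.jar /opt/cloudera/parcels/CDH-6.3.2-1.cdh6.3.2.p0.1605554/lib/spark/jars/
-```
-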
-#### Q17. When the spark engine starts, it reports "queue default is not exists in YARN"; the specific information is as follows:
-
-![linkis-exception-09.png](../Images/Tuning_and_Troubleshooting/linkis-exception-09.png)
-
-Solution: When linkis-resource-manager-dev-1.0.0.jar in 1.0 pulls queue information, there is a compatibility problem in parsing the json. The official team has optimized it and provided a new package; place the jar under: /appcom/Install/dss-linkis/linkis/lib/linkis-computation-governance/linkis-cg-linkismanager/.
-
-#### Q18. When the spark engine starts, the errors "get the Yarn queue information excepiton" (getting the Yarn queue information is abnormal) and "http link abnormal" are reported
-
-Solution: The yarn address configuration has been migrated to the DB, so the following configuration needs to be added:
- 
-![db-config-02.png](../Images/Tuning_and_Troubleshooting/db-config-02.png)
-
-#### Q19. When the spark engine is scheduled, it can be executed successfully for the first time, and if executed again, it will report Spark application sc has already stopped, please restart it. The specific errors are as follows:
-
-![page-show-03.png](../Images/Tuning_and_Troubleshooting/page-show-03.png)
-
-Solution: The background is that the architecture of the linkis1.0 engine has been adjusted. After the spark session is created, in order to avoid overhead and improve execution efficiency, the session is reused. When we execute spark.scala for the first time, there is spark.stop() in our script. This command will cause the newly created session to be closed. When executed again, it will prompt that the session is closed, please restart it. Solution: first remove stop() from all scripts, [...]
-
-#### Q20. When pythonspark is scheduled for execution, the error "initialize python executor failed: ClassNotFoundException org.slf4j.impl.StaticLoggerBinder" is reported, as follows:
-
-![linkis-exception-10.png](../Images/Tuning_and_Troubleshooting/linkis-exception-10.png)
-
-Solution: The reason is that the spark server lacks slf4j-log4j12-1.7.25.jar; copy the above jar to /opt/cloudera/parcels/CDH-6.3.2-1.cdh6.3.2.p0.1605554/lib/spark/jars.
-
-#### Q21. When pythonspark is scheduled for execution, the error "initialize python executor failed, submit-version error" is reported, as follows:
-
-![shell-error-03.png](../Images/Tuning_and_Troubleshooting/shell-error-03.png)
-
-Solution: The reason is that the linkis1.0 pythonSpark engine has a bug in the code that obtains the spark version. The fix is as follows:
-
-![code-fix-01.png](../Images/Tuning_and_Troubleshooting/code-fix-01.png)
-
-#### Q22. When pythonspark is scheduled for execution, it reports TypeError: an integer is required (got type bytes) (reproduced when running the engine startup command separately); the details are as follows:
-
-![shell-error-04.png](../Images/Tuning_and_Troubleshooting/shell-error-04.png)
-
-Solution: The reason is that the system spark and python versions are not compatible: python is 3.8, spark is 2.4.0-cdh6.3.2, and spark requires python <= 3.6. Downgrade python to 3.6 and comment out the following lines of /opt/cloudera/parcels/CDH/lib/spark/python/lib/pyspark.zip/pyspark/context.py:
-
-![shell-error-05.png](../Images/Tuning_and_Troubleshooting/shell-error-05.png)
-
-#### Q23. The spark engine is 2.4.0+cdh6.3.2; the python engine previously lacked pandas and matplotlib, so the local python was upgraded to 3.8, but spark does not support python 3.8, only versions below 3.6
-
-Solution: reinstall the anaconda2 package manager, downgrade python to 2.7, and install the pandas and matplotlib modules; the python engine and spark engine can then be scheduled normally.
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Tuning_and_Troubleshooting/README.md b/Linkis-Doc-master/en_US/Tuning_and_Troubleshooting/README.md
deleted file mode 100644
index a92dca4..0000000
--- a/Linkis-Doc-master/en_US/Tuning_and_Troubleshooting/README.md
+++ /dev/null
@@ -1,98 +0,0 @@
-## Tuning and troubleshooting
-
-In the process of preparing for the release of a version, we will try our best to find deployment and installation problems in advance and then repair them. Because everyone has some differences in the deployment environments, we sometimes have no way to predict all the problems and solutions in advance. However, due to the existence of the community, many of your problems will overlap. Perhaps the installation and deployment problems you have encountered have already been discovered and [...]
-
-### Ⅰ. How to locate the exception log
-
-If an interface request reports an error, we can locate the problematic microservice based on the return of the interface. Under normal circumstances, we can **locate according to the URL specification.** URLs in the Linkis interface follow certain design specifications, namely the format **/api/rest_j/v1/{applicationName}/.+**; the microservice can be located through applicationName. Some applications are themselves microservices. At this time, the application name is the same [...]
-
-| **ApplicationName** | **Microservice** |
-| -------------------- | -------------------- |
-| cg-linkismanager | cg-linkismanager |
-| cg-engineplugin | cg-engineplugin |
-| cg-engineconnmanager | cg-engineconnmanager |
-| cg-entrance | cg-entrance |
-| ps-bml | ps-bml |
-| ps-cs | ps-cs |
-| ps-datasource | ps-datasource |
-| configuration | |
-| instance-label | |
-| jobhistory | ps-publicservice |
-| variable | |
-| udf | |
-
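-For example, if a request of the form /api/rest_j/v1/jobhistory/... fails, the applicationName segment jobhistory maps to the ps-publicservice microservice in the table above, so its logs are the place to look first.
-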
-### Ⅱ. Community issue column search keywords
-
-On the homepage of the github community, the issue column retains some of the problems and solutions encountered by community users, which is very suitable for quickly finding a solution after encountering a problem: just search for the error keywords in the issue filter.
-
-### Ⅲ. "Q\&A Question Summary"
-
-"Linkis 1.0 FAQ", this document contains a summary of common problems and solutions during the installation and deployment process.
-
-### Ⅳ. Locating system log
-
-Generally, errors can be divided into three stages: an error is reported when installing and executing install.sh, an error is reported when the microservice is started, and an error is reported when the engine is started.
-
-1. **An error occurred when executing install.sh**, usually in the following situations
-
-   1. Missing environment variables: For example, the java/python/Hadoop/hive/spark environments need to be configured for the standard version, and the corresponding verification is performed when the install script runs. If you encounter this kind of problem, there will be clear prompts for the missing environment variables, such as the exception -bash:
-      spark-submit: command not found, etc.
-
-   2. The system version does not match: Linkis currently supports most versions of Linux.
-      Compatibility is best on the recommended OS versions; some system versions may have incompatible commands. For example, the poor compatibility of yum on Ubuntu may cause yum-related errors during installation and deployment. In addition, it is also recommended to avoid deploying Linkis on Windows as much as possible, since currently no script is fully compatible with .bat commands.
-
-   3. Missing configuration item: There are two configuration files that need to be modified in linkis1.0 version, linkis-env.sh and db.sh
-   
-      The former contains the environment parameters that linkis needs to load during execution, and the latter holds the database information for the related tables that linkis itself needs to store. Under normal circumstances, if the corresponding configuration is missing, the error message will show an exception related to the missing key. For example, if db.sh does not fill in the relevant database configuration, an "unknown mysql server host '-P'" exception will appear, which is caused by the missing host.
-
-2. **Report error when starting microservice**
-
-    Linkis puts the log files of all microservices into the logs directory. The log directory levels are as follows:
-
-    ````
-    ├── linkis-computation-governance
-    │   ├── linkis-cg-engineconnmanager
-    │   ├── linkis-cg-engineplugin
-    │   ├── linkis-cg-entrance
-    │   └── linkis-cg-linkismanager
-    ├── linkis-public-enhancements
-    │   ├── linkis-ps-bml
-    │   ├── linkis-ps-cs
-    │   ├── linkis-ps-datasource
-    │   └── linkis-ps-publicservice
-    └── linkis-spring-cloud-services
-        ├── linkis-mg-eureka
-        └── linkis-mg-gateway
-    ````
-
-    It includes three microservice modules: computing governance, public enhancement, and microservice management. Each microservice contains three logs, linkis-gc.log, linkis.log, and linkis.out, corresponding to the service's GC log, service log, and service System.out log.
-    
-    Under normal circumstances, when an error occurs when starting a microservice, you can cd to the corresponding service in the log directory to view the related log to troubleshoot the problem. Generally, the most frequently occurring problems can also be divided into three categories:
-
-    1. **Port occupation**: Since the default ports of the Linkis microservices are mostly concentrated around 9000, you need to check whether the port of each microservice is occupied by another process before starting. If a port is occupied, change the corresponding microservice port in the conf/linkis-env.sh file (a quick check is sketched after this section's list).
-    
-    2. **Necessary configuration parameters are missing**: Some microservices must load certain user-defined parameters before they can start normally. For example, the linkis-cg-engineplugin microservice loads the configuration related to wds.linkis.engineconn.\* from conf/linkis.properties when it starts. If the user changes the Linkis path after installation without updating this configuration accordingly, an error will be reported when the linkis- [...]
-    
-    3. **The system environment is not compatible**: It is recommended that users follow the recommended system and application versions in the official documents as much as possible when deploying, and install necessary system utilities such as expect, yum, etc. If an application version is not compatible, it may cause application-related errors. For example, SQL statement incompatibilities in MySQL 5.7 may cause errors in the linkis.ddl and linkis. [...]
-    
-3. **Errors during microservice execution**
-
-    Errors during microservice execution are more complicated, and the cases encountered differ depending on the environment, but the troubleshooting method is basically the same. Starting from the corresponding microservice log directory, they can roughly be divided into three situations:
-    
-    1. **Manually installed and deployed microservices report errors**: The logs of these microservices are collected under the logs/ directory. After locating the microservice, enter the corresponding directory to view them.
-    
-    2. **Engine startup failure** (insufficient resources, engine request failure): When this type of error occurs, it is not necessarily caused by insufficient resources, because the front end can only fetch logs after the Spring application has started; errors occurring before the engine starts cannot be fetched well. Three kinds of high-frequency problems have been found in the actual use of internal test users:
-    
-        a. **The engine cannot be created because there is no permission on the engine directory**: The log will be printed to the linkis.out file under the cg-engineconnmanager microservice. You need to open that file to find the specific reason.
-        
-        b. **There is a dependency conflict in the engine lib package, or the server cannot start normally because of insufficient memory resources**: Since the engine directory has already been created, the log will be printed to the stdout file under the engine directory; the engine path can be found as described in item c.
-        
-        c. **Errors reported during engine execution**: Each started engine is a microservice that is dynamically loaded and started at runtime. When the engine starts, if an error occurs, you need to find the engine's log in the corresponding startup user directory. The corresponding root path is the **ENGINECONN_ROOT_PATH** filled in **linkis-env** before installation. If you need to modify the path after installation, you need to modify wds.linkis.engineconn.roo [...]
-        
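-Two quick checks for the install.sh and startup cases above, as a hedged sketch (variable names and port numbers are illustrative; take the authoritative values from your own conf/ directory):
-
-```bash
-# 1) confirm db.sh carries a complete database configuration; a missing host
-#    produces the "Unknown MySQL server host '-P'" error mentioned above
-grep "MYSQL" conf/db.sh
-
-# 2) confirm the ports configured in conf/linkis-env.sh are not already taken
-netstat -tlnp | grep -E ':90[0-9][0-9]'
-```
-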
-### Ⅴ. Community user group consultation and communication
-
-For problems during installation and deployment that cannot be resolved by the above troubleshooting process, you can send the error messages in our community group. To make it easier for community partners and developers to help and to improve efficiency, it is recommended that when you ask a question you describe the problem, attach the related log information, and mention the places you have already checked. If you think it may be an environmental [...]
-
-### Ⅵ. locate the source code by remote debug
-
-Under normal circumstances, remote debugging of the source code is the most effective way to locate problems, but compared with document review it requires users to have a certain understanding of the source code structure. It is recommended to check the [Linkis source code level detailed structure](https://github.com/WeBankFinTech/Linkis/wiki/Linkis%E6%BA%90%E7%A0%81%E5%B1%82%E7%BA%A7%E7%BB%93%E6%9E%84%E8%AF%A6%E8%A7%A3) in the Linkis WIKI before remote debugging. After having a certain degree [...]
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Tuning_and_Troubleshooting/Tuning.md b/Linkis-Doc-master/en_US/Tuning_and_Troubleshooting/Tuning.md
deleted file mode 100644
index 2b6b256..0000000
--- a/Linkis-Doc-master/en_US/Tuning_and_Troubleshooting/Tuning.md
+++ /dev/null
@@ -1,61 +0,0 @@
->Linkis 0.x runs stably in WeBank's production environment and supports various businesses. Linkis 1.0 is an optimized version of 0.x, and the related tuning logic has not changed, so this document introduces several Linkis deployment and tuning suggestions. Due to limited space, this article cannot cover all optimization scenarios; related tuning guides will be supplemented and updated over time. Of course, we also hope that community users will provide suggestions for Linkis [...]
-
-## 1. Overview
-
-This document introduces several tuning methods based on production experience: the choice of JVM heap size during deployment, the concurrency settings for task submission, and the resource application parameters for running tasks. The parameter values described here are not recommended values; users need to choose them according to their actual production environment.
-
-## 2. JVM heap size tuning
-
-When installing Linkis, you can find the following variables in linkis-env.sh:
-
-```shell
-SERVER_HEAP_SIZE="512M"
-```
-
-After this variable is set, it is added to the java startup parameters of each microservice during installation to control the JVM startup heap size. Although both the xms and xmx parameters need to be set when java starts, they are usually set to the same value. In production, as the number of users increases, this parameter needs to be increased to meet demand. Of course, setting a larger heap requires a larger server configuration. Also, single-machine deployment [...]
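-
-A minimal sketch of the effect, assuming the installer propagates this value into each service's java startup line (the exact startup scripts differ between versions):
-
-```bash
-# in linkis-env.sh
-SERVER_HEAP_SIZE="4G"
-# roughly results in each microservice being started with arguments like:
-#   java -Xms4G -Xmx4G ... <service main class>
-```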
-
-## 3. Tuning the concurrency of task submission
-
-Some Linkis task concurrency parameters will have a default value. In most scenarios, the default value can meet the demand, but sometimes it cannot, so it needs to be adjusted. This article will introduce several parameters for adjusting the concurrency of tasks to facilitate users to optimize concurrent tasks in production.
-
-Since tasks are submitted via RPC, you can configure the following parameters in the linkis-common/linkis-rpc module to increase the RPC concurrency:
-
-```shell
-wds.linkis.rpc.receiver.asyn.consumer.thread.max=400
-wds.linkis.rpc.receiver.asyn.queue.size.max=5000
-wds.linkis.rpc.sender.asyn.consumer.thread.max=100
-wds.linkis.rpc.sender.asyn.queue.size.max=2000
-```
-
-In the Linkis source code, we set a default value for the number of concurrent tasks, which meets the needs of most scenarios. However, when a large number of concurrent tasks are submitted in some scenarios, such as when Qualitis (another open source project of WeBank) is used for mass data verification, initCapacity and maxCapacity have not yet been made configurable in the current version, and users need to modify the source code, increasing the values of these two parameter [...]
-
-```scala
-  private val groupNameToGroups = new JMap[String, Group]
-  private val labelBuilderFactory = LabelBuilderFactoryContext.getLabelBuilderFactory
-
-  override def getOrCreateGroup(groupName: String): Group = {
-    if (!groupNameToGroups.containsKey(groupName)) synchronized {
-      val initCapacity = 100
-      val maxCapacity = 100
-      // other code...
-    }
-  }
-```
-
-## 4. Resource settings related to task runtime
-
-When submitting a task to run on Yarn, Yarn provides a configurable interface, and Linkis, as a highly scalable framework, can likewise be configured to set resource parameters.
-
-The related configuration of Spark and Hive are as follows:
-
-Part of the Spark configuration is in linkis-engineconn-plugins/engineconn-plugins; you can adjust it to change the runtime environment of tasks submitted to Yarn. Due to limited space, for more details such as the Hive and Yarn configuration, users should refer to the source code and the parameters documentation.
-
-```shell
-"spark.driver.memory" = 2 // unit: G
-"wds.linkis.driver.cores" = 1
-"spark.executor.memory" = 4 // unit: G
-"spark.executor.cores" = 2
-"spark.executor.instances" = 3
-"wds.linkis.rm.yarnqueue" = "default"
-```
-
diff --git a/Linkis-Doc-master/en_US/Upgrade_Documents/Linkis_Upgrade_from_0.x_to_1.0_guide.md b/Linkis-Doc-master/en_US/Upgrade_Documents/Linkis_Upgrade_from_0.x_to_1.0_guide.md
deleted file mode 100644
index dc1b867..0000000
--- a/Linkis-Doc-master/en_US/Upgrade_Documents/Linkis_Upgrade_from_0.x_to_1.0_guide.md
+++ /dev/null
@@ -1,73 +0,0 @@
- > This article briefly introduces the precautions for upgrading Linkis from 0.X to 1.0. Linkis 1.0 has adjusted several Linkis services with major changes, so please read the following precautions carefully before upgrading from 0.X to 1.X.
-
-## 1.Precautions
-
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**If you are using Linkis for the first time, you can ignore this chapter; if you are already a user of Linkis, it is recommended to read it before installing or upgrading:[Brief description of the difference between Linkis1.0 and Linkis0.X](https://github.com/WeBankFinTech/Linkis/wiki/Linkis1.0%E4%B8%8ELinkis0.X%E7%9A%84%E5%8C%BA%E5%88%AB%E7%AE%80%E8%BF%B0)**.
-
-## 2. Service upgrade installation
-
-&nbsp;&nbsp;&nbsp;&nbsp;  Because linkis 1.0 basically upgraded all services, including service names, all services need to be reinstalled when upgrading from 0.X to 1.X.
-
-&nbsp;&nbsp;&nbsp;&nbsp;  If you need to keep 0.X data during the upgrade, you must select 1 to skip the table building statement (see the code below).
-
-&nbsp;&nbsp;&nbsp;&nbsp;  For the installation of Linkis1.0, please refer to [Quick Deployment Linkis1.0](../Deployment_Documents/Quick_Deploy_Linkis1.0.md)
-
-```
-Do you want to clear Linkis table information in the database?
-1: Do not execute table-building statements
-2: Dangerous! Clear all data and rebuild the tables
-other: exit
-
-Please input the choice: ## choice 1
-```
-## 3. Database upgrade
-
-&nbsp;&nbsp;&nbsp;&nbsp;  After the service is installed, the database structure needs to be modified, including table structure changes and new tables and data:
-
-### 3.1 Table structure modification part:
-
-&nbsp;&nbsp;&nbsp;&nbsp;  linkis_task: The submit_user and label_json fields are added to the table. The update statement is:
-
-```mysql-sql
-ALTER TABLE linkis_task ADD submit_user varchar(50) DEFAULT NULL COMMENT 'submitUser name';
-ALTER TABLE linkis_task ADD `label_json` varchar(200) DEFAULT NULL COMMENT 'label json';
-```
-
-### 3.2 Need newly executed sql:
-
-```mysql-sql
-cd db/module
-## Add the tables that the enginePlugin service depends on:
-source linkis_ecp.sql
-## Add a table that the public service-instanceLabel service depends on
-source linkis_instance_label.sql
-## Added tables that the linkis-manager service depends on
-source linkis_manager.sql
-```
-
-### 3.3 Publicservice-Configuration table modification
-
-&nbsp;&nbsp;&nbsp;&nbsp;  In order to support the full labeling capability of Linkis 1.X, all the data tables related to the configuration module have been upgraded to labeling, which is completely different from the 0.X Configuration table. It is necessary to re-execute the table creation statement and the initialization statement.
-
-&nbsp;&nbsp;&nbsp;&nbsp;  This means that **Linkis0.X users' existing engine configuration parameters can no longer be migrated to Linkis1.0** (it is recommended that users reconfigure the engine parameters once).
-
-&nbsp;&nbsp;&nbsp;&nbsp;  The execution of the table building statement is as follows:
-
-```mysql-sql
-source linkis_configuration.sql
-```
-
-&nbsp;&nbsp;&nbsp;&nbsp;  Because Linkis 1.0 supports multiple versions of the engine, it is necessary to modify the version of the engine when executing the initialization statement, as shown below:
-
-```mysql-sql
-vim linkis_configuration_dml.sql
-## Modify the default version of the corresponding engine
-SET @SPARK_LABEL="spark-2.4.3";
-SET @HIVE_LABEL="hive-1.2.1";
-## Execute the initialization statement
-source linkis_configuration_dml.sql
-```
-
-## 4. Install and start Linkis 1.0
-
-&nbsp;&nbsp;&nbsp;&nbsp;  Start Linkis 1.0 and verify whether the services have started normally and can provide external services. For details, please refer to: [Quick Deployment Linkis1.0](../Deployment_Documents/Quick_Deploy_Linkis1.0.md)
diff --git a/Linkis-Doc-master/en_US/Upgrade_Documents/README.md b/Linkis-Doc-master/en_US/Upgrade_Documents/README.md
deleted file mode 100644
index 37786ab..0000000
--- a/Linkis-Doc-master/en_US/Upgrade_Documents/README.md
+++ /dev/null
@@ -1,5 +0,0 @@
-The architecture of Linkis1.0 is very different from Linkis0.x, and there are some changes to the configuration of the deployment package and database tables. Before you install Linkis1.0, please read the following instructions carefully:
-
-1. If you are installing Linkis for the first time, or reinstalling Linkis, you do not need to pay attention to the Linkis Upgrade Guide.
-
-2. If you are upgrading from Linkis0.x to Linkis1.0, be sure to read the [Linkis Upgrade from 0.x to 1.0 guide](Linkis_Upgrade_from_0.x_to_1.0_guide.md) carefully.
diff --git a/Linkis-Doc-master/en_US/User_Manual/How_To_Use_Linkis.md b/Linkis-Doc-master/en_US/User_Manual/How_To_Use_Linkis.md
deleted file mode 100644
index a6ee4d7..0000000
--- a/Linkis-Doc-master/en_US/User_Manual/How_To_Use_Linkis.md
+++ /dev/null
@@ -1,29 +0,0 @@
-# How to use Linkis?
-
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;In order to meet the needs of different usage scenarios, Linkis provides a variety of usage and access methods, which can be summarized into three categories: Client-side use, Scriptis-side use, and DataSphere Studio-side use. Scriptis and DataSphere Studio are the open-source data analysis platforms of WeBank's big data platform team. Since these two projects are essentially compatible with Linkis, it is [...]
-
-## 1. Client side usage
-
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;If you need to connect other applications on the basis of Linkis, you need to develop against the interfaces provided by Linkis. Linkis provides a variety of client access interfaces. For detailed usage, please refer to the following:
-- [**Restful API Usage**](./../API_Documentations/Linkis task submission and execution RestAPI document.md)
-- [**JDBC API Usage**](./../API_Documentations/Task Submit and Execute JDBC_API Document.md)
-- [**How to use Java SDK**](./../User_Manual/Linkis1.0 user use document.md)
-
-## 2. Scriptis uses Linkis
-
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;If you need to use Linkis to complete interactive online analysis and processing, and you do not need data analysis application tools such as workflow development, workflow scheduling, and data services, you can install [**Scriptis**](https://github.com/WeBankFinTech/Scriptis) separately. For a detailed installation tutorial, please refer to its installation and deployment documents.
-
-### 2.1 Use Scriptis to execute scripts
-
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Currently Scriptis supports submitting a variety of task types to Linkis, including Spark SQL, Hive SQL, Scala, PythonSpark, etc. To meet the needs of data analysis, the left side of Scriptis provides views of user workspace information, user database and table information, user-defined functions, and HDFS directories. It also supports uploading and downloading, result set exporting, and other functions. Scriptis is very simple to u [...]
-![Scriptis uses Linkis](../Images/EngineUsage/sparksql-run.png)
-
-### 2.2 Scriptis Management Console
-
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Linkis provides an interface for resource configuration and management. If you want to configure and manage task resources, you can do so on the Scriptis management console interface, including queue settings, resource configuration, the number of engine instances, etc. Through the management console, you can easily configure the resources for submitting tasks to Linkis, making it more convenient and faster.
-![Scriptis uses Linkis](../Images/EngineUsage/queue-set.png)
-
-## 3. DataSphere Studio uses Linkis
-
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[**DataSphere Studio**](https://github.com/WeBankFinTech/DataSphereStudio), referred to as DSS, is a one-stop data analysis and processing platform open-sourced by WeBank. The DSS interactive analysis module integrates Scriptis, so using DSS for interactive analysis is the same as using Scriptis. In addition to the basic functions of Scriptis, DSS provides and integrates richer and more powerful data analysis f [...]
-![DSS Run Workflow](../Images/EngineUsage/workflow.png)
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/User_Manual/Linkis1.0_User_Manual.md b/Linkis-Doc-master/en_US/User_Manual/Linkis1.0_User_Manual.md
deleted file mode 100644
index b613f88..0000000
--- a/Linkis-Doc-master/en_US/User_Manual/Linkis1.0_User_Manual.md
+++ /dev/null
@@ -1,400 +0,0 @@
-# Linkis User Manual
-
-> Linkis provides convenient JAVA and SCALA calling interfaces, which can be used simply by introducing the linkis-computation-client module. Since 1.0, the method of submitting with Label has been added. The following introduces both the way compatible with 0.X and the new way added in 1.0.
-
-## 1. Introduce dependent modules
-```
-<dependency>
-   <groupId>com.webank.wedatasphere.linkis</groupId>
-   <artifactId>linkis-computation-client</artifactId>
-   <version>${linkis.version}</version>
-</dependency>
-Such as:
-<dependency>
-   <groupId>com.webank.wedatasphere.linkis</groupId>
-   <artifactId>linkis-computation-client</artifactId>
-   <version>1.0.0-RC1</version>
-</dependency>
-```
-
-## 2. Compatible with 0.X Execute method submission
-
-### 2.1 Java test code
-
-Create the Java test class UJESClientImplTestJ. Refer to the comments to understand the purposes of those interfaces:
-
-```java
-package com.webank.wedatasphere.linkis.client.test;
-
-import com.webank.wedatasphere.linkis.common.utils.Utils;
-import com.webank.wedatasphere.linkis.httpclient.dws.authentication.StaticAuthenticationStrategy;
-import com.webank.wedatasphere.linkis.httpclient.dws.authentication.TokenAuthenticationStrategy;
-import com.webank.wedatasphere.linkis.httpclient.dws.config.DWSClientConfig;
-import com.webank.wedatasphere.linkis.httpclient.dws.config.DWSClientConfigBuilder;
-import com.webank.wedatasphere.linkis.ujes.client.UJESClient;
-import com.webank.wedatasphere.linkis.ujes.client.UJESClientImpl;
-import com.webank.wedatasphere.linkis.ujes.client.request.JobExecuteAction;
-import com.webank.wedatasphere.linkis.ujes.client.request.ResultSetAction;
-import com.webank.wedatasphere.linkis.ujes.client.response.JobExecuteResult;
-import com.webank.wedatasphere.linkis.ujes.client.response.JobInfoResult;
-import com.webank.wedatasphere.linkis.ujes.client.response.JobProgressResult;
-import org.apache.commons.io.IOUtils;
-
-import java.util.HashMap;
-import java.util.Map;
-import java.util.concurrent.TimeUnit;
-
-public class LinkisClientTest {
-
-    public static void main(String[] args){
-
-        String user = "hadoop";
-        String executeCode = "show databases;";
-
-        // 1. Configure DWSClientBuilder, get a DWSClientConfig through DWSClientBuilder
-        DWSClientConfig clientConfig = ((DWSClientConfigBuilder) (DWSClientConfigBuilder.newBuilder()
-                .addServerUrl("http://${ip}:${port}")  //Specify ServerUrl, the address of the linkis gateway, such as http://{ip}:{port}
-                .connectionTimeout(30000)   //connectionTimeOut Client connection timeout
-                .discoveryEnabled(false).discoveryFrequency(1, TimeUnit.MINUTES)  //Whether to enable registration discovery, if enabled, the newly launched Gateway will be automatically discovered
-                .loadbalancerEnabled(true)  // Whether to enable load balancing, if registration discovery is not enabled, load balancing is meaningless
-                .maxConnectionSize(5)   //Specify the maximum number of connections, that is, the maximum number of concurrent
-                .retryEnabled(false).readTimeout(30000)   //Execution failed, whether to allow retry
-                .setAuthenticationStrategy(new StaticAuthenticationStrategy())   //AuthenticationStrategy Linkis login authentication method
-                .setAuthTokenKey("${username}").setAuthTokenValue("${password}")))  //Authentication key, generally the user name; authentication value, generally the password corresponding to the user name
-                .setDWSVersion("v1").build();  //The version of the linkis backend protocol, the current version is v1
-
-        // 2. Obtain a UJESClient through DWSClientConfig
-        UJESClient client = new UJESClientImpl(clientConfig);
-
-        try {
-            // 3. Start code execution
-            System.out.println("user : " + user + ", code : [" + executeCode + "]");
-            Map<String, Object> startupMap = new HashMap<String, Object>();
-            startupMap.put("wds.linkis.yarnqueue", "default"); // A variety of startup parameters can be stored in startupMap, see linkis management console configuration
-            JobExecuteResult jobExecuteResult = client.execute(JobExecuteAction.builder()
-                    .setCreator("linkisClient-Test")  //creator,the system name of the client requesting linkis, used for system-level isolation
-                    .addExecuteCode(executeCode)   //ExecutionCode Requested code
-                    .setEngineType((JobExecuteAction.EngineType) JobExecuteAction.EngineType$.MODULE$.HIVE()) // The execution engine type of the linkis that you want to request, such as Spark hive, etc.
-                    .setUser(user)   //User,Requesting users; used for user-level multi-tenant isolation
-                    .setStartupParams(startupMap)
-                    .build());
-            System.out.println("execId: " + jobExecuteResult.getExecID() + ", taskId: " + jobExecuteResult.taskID());
-
-            // 4. Get the execution status of the script
-            JobInfoResult jobInfoResult = client.getJobInfo(jobExecuteResult);
-            int sleepTimeMills = 1000;
-            while(!jobInfoResult.isCompleted()) {
-                // 5. Get the execution progress of the script
-                JobProgressResult progress = client.progress(jobExecuteResult);
-                Utils.sleepQuietly(sleepTimeMills);
-                jobInfoResult = client.getJobInfo(jobExecuteResult);
-            }
-
-            // 6. Get the job information of the script
-            JobInfoResult jobInfo = client.getJobInfo(jobExecuteResult);
-            // 7. Get a list of result sets (if the user submits multiple SQL at a time, multiple result sets will be generated)
-            String resultSet = jobInfo.getResultSetList(client)[0];
-            // 8. Get a specific result set through a result set information
-            Object fileContents = client.resultSet(ResultSetAction.builder().setPath(resultSet).setUser(jobExecuteResult.getUser()).build()).getFileContent();
-            System.out.println("fileContents: " + fileContents);
-
-        } catch (Exception e) {
-            e.printStackTrace();
-            IOUtils.closeQuietly(client);
-        }
-        IOUtils.closeQuietly(client);
-    }
-}
-```
-
-Run the above code to interact with Linkis
-
-### 2.2 Scala test code
-
-```scala
-package com.webank.wedatasphere.linkis.client.test
-
-import java.util.concurrent.TimeUnit
-
-import com.webank.wedatasphere.linkis.common.utils.Utils
-import com.webank.wedatasphere.linkis.httpclient.dws.authentication.StaticAuthenticationStrategy
-import com.webank.wedatasphere.linkis.httpclient.dws.config.DWSClientConfigBuilder
-import com.webank.wedatasphere.linkis.ujes.client.UJESClient
-import com.webank.wedatasphere.linkis.ujes.client.request.JobExecuteAction.EngineType
-import com.webank.wedatasphere.linkis.ujes.client.request.{JobExecuteAction, ResultSetAction}
-import org.apache.commons.io.IOUtils
-
-object LinkisClientImplTest extends App {
-
-  var executeCode = "show databases;"
-  var user = "hadoop"
-
-  // 1. Configure DWSClientBuilder, get a DWSClientConfig through DWSClientBuilder
-  val clientConfig = DWSClientConfigBuilder.newBuilder()
-    .addServerUrl("http://${ip}:${port}") //Specify ServerUrl, the address of the Linkis server-side gateway, such as http://{ip}:{port}
-    .connectionTimeout(30000) //connectionTimeOut client connection timeout
-    .discoveryEnabled(false).discoveryFrequency(1, TimeUnit.MINUTES) //Whether to enable registration discovery, if enabled, the newly launched Gateway will be automatically discovered
-    .loadbalancerEnabled(true) // Whether to enable load balancing, if registration discovery is not enabled, load balancing is meaningless
-    .maxConnectionSize(5) //Specify the maximum number of connections, that is, the maximum number of concurrent
-    .retryEnabled(false).readTimeout(30000) //execution failed, whether to allow retry
-    .setAuthenticationStrategy(new StaticAuthenticationStrategy()) //AuthenticationStrategy Linkis authentication method
-    .setAuthTokenKey("${username}").setAuthTokenValue("${password}") //Authentication key, generally the user name; authentication value, generally the password corresponding to the user name
-    .setDWSVersion("v1").build() //Linkis backend protocol version, the current version is v1
-
-  // 2. Get a UJESClient through DWSClientConfig
-  val client = UJESClient(clientConfig)
-  
-  try {
-    // 3. Start code execution
-    println("user: "+ user + ", code: [" + executeCode + "]")
-    val startupMap = new java.util.HashMap[String, Any]()
-    startupMap.put("wds.linkis.yarnqueue", "default") //Startup parameter configuration
-    val jobExecuteResult = client.execute(JobExecuteAction.builder()
-      .setCreator("LinkisClient-Test") //creator, requesting the system name of the Linkis client, used for system-level isolation
-      .addExecuteCode(executeCode) //ExecutionCode The code to be executed
-      .setEngineType(EngineType.SPARK) // The execution engine type of Linkis that you want to request, such as Spark hive, etc.
-      .setStartupParams(startupMap)
-      .setUser(user).build()) //User, request user; used for user-level multi-tenant isolation
-    println("execId: "+ jobExecuteResult.getExecID + ", taskId:" + jobExecuteResult.taskID)
-    
-    // 4. Get the execution status of the script
-    var jobInfoResult = client.getJobInfo(jobExecuteResult)
-    val sleepTimeMills: Int = 1000
-    while (!jobInfoResult.isCompleted) {
-      // 5. Get the execution progress of the script
-      val progress = client.progress(jobExecuteResult)
-      val progressInfo = if (progress.getProgressInfo != null) progress.getProgressInfo.toList else List.empty
-      println("progress: "+ progress.getProgress + ", progressInfo:" + progressInfo)
-      Utils.sleepQuietly(sleepTimeMills)
-      jobInfoResult = client.getJobInfo(jobExecuteResult)
-    }
-    if (!jobInfoResult.isSucceed) {
-      println("Failed to execute job: "+ jobInfoResult.getMessage)
-      throw new Exception(jobInfoResult.getMessage)
-    }
-
-    // 6. Get the job information of the script
-    val jobInfo = client.getJobInfo(jobExecuteResult)
-    // 7. Get the list of result sets (if the user submits multiple SQL at a time, multiple result sets will be generated)
-    val resultSetList = jobInfoResult.getResultSetList(client)
-    println("All result set list:")
-    resultSetList.foreach(println)
-    val oneResultSet = jobInfo.getResultSetList(client).head
-    // 8. Get a specific result set through a result set information
-    val fileContents = client.resultSet(ResultSetAction.builder().setPath(oneResultSet).setUser(jobExecuteResult.getUser).build()).getFileContent
-    println("First fileContents: ")
-    println(fileContents)
-  } catch {
-    case e: Exception => {
-      e.printStackTrace()
-    }
-  }
-  IOUtils.closeQuietly(client)
-}
-```
-
-## 3. Linkis1.0 new submit interface with Label support
-
-Linkis1.0 adds the client.submit method, which adapts to the new task execution interface of 1.0 and supports passing in Labels and other parameters.
-
-### 3.1 Java Test Class
-
-```java
-package com.webank.wedatasphere.linkis.client.test;
-
-import com.webank.wedatasphere.linkis.common.utils.Utils;
-import com.webank.wedatasphere.linkis.httpclient.dws.authentication.StaticAuthenticationStrategy;
-import com.webank.wedatasphere.linkis.httpclient.dws.config.DWSClientConfig;
-import com.webank.wedatasphere.linkis.httpclient.dws.config.DWSClientConfigBuilder;
-import com.webank.wedatasphere.linkis.manager.label.constant.LabelKeyConstant;
-import com.webank.wedatasphere.linkis.protocol.constants.TaskConstant;
-import com.webank.wedatasphere.linkis.ujes.client.UJESClient;
-import com.webank.wedatasphere.linkis.ujes.client.UJESClientImpl;
-import com.webank.wedatasphere.linkis.ujes.client.request.JobSubmitAction;
-import com.webank.wedatasphere.linkis.ujes.client.request.ResultSetAction;
-import com.webank.wedatasphere.linkis.ujes.client.response.JobExecuteResult;
-import com.webank.wedatasphere.linkis.ujes.client.response.JobInfoResult;
-import com.webank.wedatasphere.linkis.ujes.client.response.JobProgressResult;
-import org.apache.commons.io.IOUtils;
-
-import java.util.HashMap;
-import java.util.Map;
-import java.util.concurrent.TimeUnit;
-
-public class JavaClientTest {
-
-    public static void main(String[] args){
-
-        String user = "hadoop";
-        String executeCode = "show tables";
-
-        // 1. Configure ClientBuilder and get ClientConfig
-        DWSClientConfig clientConfig = ((DWSClientConfigBuilder) (DWSClientConfigBuilder.newBuilder()
-                .addServerUrl("http://${ip}:${port}") //Specify ServerUrl, the address of the linkis server-side gateway, such as http://{ip}:{port}
-                .connectionTimeout(30000) //connectionTimeOut client connection timeout
-                .discoveryEnabled(false).discoveryFrequency(1, TimeUnit.MINUTES) //Whether to enable registration discovery, if enabled, the newly launched Gateway will be automatically discovered
-                .loadbalancerEnabled(true) // Whether to enable load balancing, if registration discovery is not enabled, load balancing is meaningless
-                .maxConnectionSize(5) //Specify the maximum number of connections, that is, the maximum number of concurrent
-                .retryEnabled(false).readTimeout(30000) //execution failed, whether to allow retry
-                .setAuthenticationStrategy(new StaticAuthenticationStrategy()) //AuthenticationStrategy Linkis authentication method
-                .setAuthTokenKey("${username}").setAuthTokenValue("${password}"))) //Authentication key, generally the user name; authentication value, generally the password corresponding to the user name
-                .setDWSVersion("v1").build(); //Linkis background protocol version, the current version is v1
-
-        // 2. Get a UJESClient through DWSClientConfig
-        UJESClient client = new UJESClientImpl(clientConfig);
-
-        try {
-            // 3. Start code execution
-            System.out.println("user: "+ user + ", code: [" + executeCode + "]");
-            Map<String, Object> startupMap = new HashMap<String, Object>();
-            // A variety of startup parameters can be stored in startupMap, see linkis management console configuration
-            startupMap.put("wds.linkis.yarnqueue", "q02");
-            //Specify Label
-            Map<String, Object> labels = new HashMap<String, Object>();
-            //Add the label that this execution depends on: EngineTypeLabel/UserCreatorLabel/EngineRunTypeLabel
-            labels.put(LabelKeyConstant.ENGINE_TYPE_KEY, "hive-1.2.1");
-            labels.put(LabelKeyConstant.USER_CREATOR_TYPE_KEY, "hadoop-IDE");
-            labels.put(LabelKeyConstant.ENGINE_RUN_TYPE_KEY, "hql");
-            //Specify source
-            Map<String, Object> source = new HashMap<String, Object>();
-            source.put(TaskConstant.SCRIPTPATH, "LinkisClient-test");
-            JobExecuteResult jobExecuteResult = client.submit( JobSubmitAction.builder()
-                    .addExecuteCode(executeCode)
-                    .setStartupParams(startupMap)
-                    .setUser(user)//Job submit user
-                    .addExecuteUser(user)//The actual execution user
-                    .setLabels(labels)
-                    .setSource(source)
-                    .build()
-            );
-            System.out.println("execId: "+ jobExecuteResult.getExecID() + ", taskId:" + jobExecuteResult.taskID());
-
-            // 4. Get the execution status of the script
-            JobInfoResult jobInfoResult = client.getJobInfo(jobExecuteResult);
-            int sleepTimeMills = 1000;
-            while(!jobInfoResult.isCompleted()) {
-                // 5. Get the execution progress of the script
-                JobProgressResult progress = client.progress(jobExecuteResult);
-                Utils.sleepQuietly(sleepTimeMills);
-                jobInfoResult = client.getJobInfo(jobExecuteResult);
-            }
-
-            // 6. Get the job information of the script
-            JobInfoResult jobInfo = client.getJobInfo(jobExecuteResult);
-            // 7. Get the list of result sets (if the user submits multiple SQL at a time, multiple result sets will be generated)
-            String resultSet = jobInfo.getResultSetList(client)[0];
-            // 8. Get a specific result set through a result set information
-            Object fileContents = client.resultSet(ResultSetAction.builder().setPath(resultSet).setUser(jobExecuteResult.getUser()).build()).getFileContent();
-            System.out.println("fileContents: "+ fileContents);
-
-        } catch (Exception e) {
-            e.printStackTrace();
-            IOUtils.closeQuietly(client);
-        }
-        IOUtils.closeQuietly(client);
-    }
-}
-
-```
-
-### 3.2 Scala Test Class
-
-```scala
-package com.webank.wedatasphere.linkis.client.test
-
-import java.util
-import java.util.concurrent.TimeUnit
-
-import com.webank.wedatasphere.linkis.common.utils.Utils
-import com.webank.wedatasphere.linkis.httpclient.dws.authentication.StaticAuthenticationStrategy
-import com.webank.wedatasphere.linkis.httpclient.dws.config.DWSClientConfigBuilder
-import com.webank.wedatasphere.linkis.manager.label.constant.LabelKeyConstant
-import com.webank.wedatasphere.linkis.protocol.constants.TaskConstant
-import com.webank.wedatasphere.linkis.ujes.client.UJESClient
-import com.webank.wedatasphere.linkis.ujes.client.request.{JobSubmitAction, ResultSetAction}
-import org.apache.commons.io.IOUtils
-
-
-object ScalaClientTest {
-
-  def main(args: Array[String]): Unit = {
-    val executeCode = "show tables"
-    val user = "hadoop"
-
-    // 1. Configure DWSClientBuilder, get a DWSClientConfig through DWSClientBuilder
-    val clientConfig = DWSClientConfigBuilder.newBuilder()
-      .addServerUrl("http://${ip}:${port}") //Specify ServerUrl, the address of the Linkis server-side gateway, such as http://{ip}:{port}
-      .connectionTimeout(30000) //connectionTimeOut client connection timeout
-      .discoveryEnabled(false).discoveryFrequency(1, TimeUnit.MINUTES) //Whether to enable registration discovery, if enabled, the newly launched Gateway will be automatically discovered
-      .loadbalancerEnabled(true) // Whether to enable load balancing, if registration discovery is not enabled, load balancing is meaningless
-      .maxConnectionSize(5) //Specify the maximum number of connections, that is, the maximum number of concurrent
-      .retryEnabled(false).readTimeout(30000) //execution failed, whether to allow retry
-      .setAuthenticationStrategy(new StaticAuthenticationStrategy()) //AuthenticationStrategy Linkis authentication method
-      .setAuthTokenKey("${username}").setAuthTokenValue("${password}") //Authentication key, generally the user name; authentication value, generally the password corresponding to the user name
-      .setDWSVersion("v1").build() //Linkis backend protocol version, the current version is v1
-
-    // 2. Get a UJESClient through DWSClientConfig
-    val client = UJESClient(clientConfig)
-
-    try {
-      // 3. Start code execution
-      println("user: "+ user + ", code: [" + executeCode + "]")
-      val startupMap = new java.util.HashMap[String, Any]()
-      startupMap.put("wds.linkis.yarnqueue", "q02") //Startup parameter configuration
-      //Specify Label
-      val labels: util.Map[String, Any] = new util.HashMap[String, Any]
-      //Add the label that this execution depends on, such as engineLabel
-      labels.put(LabelKeyConstant.ENGINE_TYPE_KEY, "hive-1.2.1")
-      labels.put(LabelKeyConstant.USER_CREATOR_TYPE_KEY, "hadoop-IDE")
-      labels.put(LabelKeyConstant.ENGINE_RUN_TYPE_KEY, "hql")
-      //Specify source
-      val source: util.Map[String, Any] = new util.HashMap[String, Any]
-      source.put(TaskConstant.SCRIPTPATH, "LinkisClient-test")
-      val jobExecuteResult = client.submit(JobSubmitAction.builder
-          .addExecuteCode(executeCode)
-          .setStartupParams(startupMap)
-          .setUser(user) //Job submit user
-          .addExecuteUser(user) //The actual execution user
-          .setLabels(labels)
-          .setSource(source)
-          .build) //User, requesting user; used for user-level multi-tenant isolation
-      println("execId: "+ jobExecuteResult.getExecID + ", taskId:" + jobExecuteResult.taskID)
-
-      // 4. Get the execution status of the script
-      var jobInfoResult = client.getJobInfo(jobExecuteResult)
-      val sleepTimeMills: Int = 1000
-      while (!jobInfoResult.isCompleted) {
-        // 5. Get the execution progress of the script
-        val progress = client.progress(jobExecuteResult)
-        val progressInfo = if (progress.getProgressInfo != null) progress.getProgressInfo.toList else List.empty
-        println("progress: "+ progress.getProgress + ", progressInfo:" + progressInfo)
-        Utils.sleepQuietly(sleepTimeMills)
-        jobInfoResult = client.getJobInfo(jobExecuteResult)
-      }
-      if (!jobInfoResult.isSucceed) {
-        println("Failed to execute job: "+ jobInfoResult.getMessage)
-        throw new Exception(jobInfoResult.getMessage)
-      }
-
-      // 6. Get the job information of the script
-      val jobInfo = client.getJobInfo(jobExecuteResult)
-      // 7. Get the list of result sets (if the user submits multiple SQL at a time, multiple result sets will be generated)
-      val resultSetList = jobInfoResult.getResultSetList(client)
-      println("All result set list:")
-      resultSetList.foreach(println)
-      val oneResultSet = jobInfo.getResultSetList(client).head
-      // 8. Get a specific result set through a result set information
-      val fileContents = client.resultSet(ResultSetAction.builder().setPath(oneResultSet).setUser(jobExecuteResult.getUser).build()).getFileContent
-      println("First fileContents: ")
-      println(fileContents)
-    } catch {
-      case e: Exception => {
-        e.printStackTrace()
-      }
-    }
-    IOUtils.closeQuietly(client)
-  }
-
-}
-
-```
diff --git a/Linkis-Doc-master/en_US/User_Manual/LinkisCli_Usage_document.md b/Linkis-Doc-master/en_US/User_Manual/LinkisCli_Usage_document.md
deleted file mode 100644
index 0188013..0000000
--- a/Linkis-Doc-master/en_US/User_Manual/LinkisCli_Usage_document.md
+++ /dev/null
@@ -1,191 +0,0 @@
-Linkis-Cli usage documentation
-============
-
-## Introduction
-
-Linkis-Cli is a shell command line program used to submit tasks to Linkis.
-
-## Basic case
-
-You can simply submit a task to Linkis by referring to the example below
-
-The first step is to check that the default configuration file `linkis-cli.properties` exists in the conf/ directory and that it contains the following configuration:
-
-```properties
-   wds.linkis.client.common.gatewayUrl=http://127.0.0.1:9001
-   wds.linkis.client.common.authStrategy=token
-   wds.linkis.client.common.tokenKey=Validation-Code
-   wds.linkis.client.common.tokenValue=BML-AUTH
-```
-
-The second step is to enter the linkis installation directory and enter the command:
-
-```bash
-    ./bin/linkis-client -engineType spark-2.4.3 -codeType sql -code "select count(*) from testdb.test;"  -submitUser hadoop -proxyUser hadoop 
-```
-
-In the third step, you will see the information on the console that the task has been submitted to linkis and started to execute.
-
-Linkis-cli currently only supports synchronous submission, that is, after submitting a task to linkis, it will continue to inquire about the task status and pull task logs until the task ends. If the status is successful at the end of the task, linkis-cli will also actively pull the result set and output it.
-
-
-## How to use
-
-```bash
-   ./bin/linkis-client [parameter] [cli parameter]
-```
-
-## Supported parameter list
-
-* cli parameters
-
-    | Parameter | Description | Data Type | Is Required |
-    | ----------- | -------------------------- | -------- | ---- |
-    | --gwUrl | Manually specify the linkis gateway address | String | No |
-    | --authStg | Specify authentication policy | String | No |
-    | --authKey | Specify authentication key | String | No |
-    | --authVal | Specify authentication value | String | No |
-    | --userConf | Specify the configuration file location | String | No |
-
-* Parameters
-
-    | Parameter | Description | Data Type | Is Required |
-    | ----------- | -------------------------- | -------- | ---- |
-    | -engType | Engine type | String | Yes |
-    | -runType | Execution type | String | Yes |
-    | -code | Execution code | String | No |
-    | -codePath | Local execution code file path | String | No |
-    | -smtUsr | Specify the submitting user | String | No |
-    | -pxyUsr | Specify the execution user | String | No |
-    | -creator | Specify creator | String | No |
-    | -scriptPath | scriptPath | String | No |
-    | -outPath | Path of output result set to file | String | No |
-    | -confMap | Configuration map | Map | No |
-    | -varMap | Variable map for variable substitution | Map | No |
-    | -labelMap | linkis labelMap | Map | No |
-    | -sourceMap | Specify linkis sourceMap | Map | No |
-
-
-## Detailed example
-
-#### One, add cli parameters
-
-Cli parameters can be specified manually; parameters passed this way override the conflicting configuration items in the default configuration file
-
-```bash
-    ./bin/linkis-client -engineType spark-2.4.3 -codeType sql -code "select count(*) from testdb.test;" -submitUser hadoop -proxyUser hadoop --gwUrl http://127.0.0.1:9001 --authStg token --authKey [tokenKey] --authVal [tokenValue]
-```
-
-#### Two, add engine initial parameters
-
-The initial parameters of the engine can be added through the `-confMap` parameter. Note that the data type of the parameter is Map. The input format of the command line is as follows:
-
-        -confMap key1=val1,key2=val2,...
-        
-For example, the following sets startup parameters such as the yarn queue for engine startup and the number of spark executors:
-
-```bash
-   ./bin/linkis-client -engineType spark-2.4.3 -codeType sql -confMap wds.linkis.yarnqueue=q02,spark.executor.instances=3 -code "select count(*) from testdb.test;"  -submitUser hadoop -proxyUser hadoop  
-```
-
-Of course, these parameters can also be read from a configuration file; we will cover that later
-
-#### Three, add tags
-
-Labels can be added through the `-labelMap` parameter. Like the `-confMap`, the type of the `-labelMap` parameter is also Map:
-
-```bash
-   ./bin/linkis-client -engineType spark-2.4.3 -codeType sql -labelMap labelKey=labelVal -code "select count(*) from testdb.test;"  -submitUser hadoop -proxyUser hadoop  
-```
-
-#### Four, variable replacement
-
-Linkis-cli variable substitution is realized through the `${}` placeholder and the `-varMap` parameter
-
-```bash
-   ./bin/linkis-client -engineType spark-2.4.3 -codeType sql -code "select count(*) from \${key};" -varMap key=testdb.test  -submitUser hadoop -proxyUser hadoop  
-```
-
-During execution, the sql statement will be replaced with:
-
-```mysql-sql
-   select count(*) from testdb.test
-```  
-        
-Note that the escape character in `'\$'` prevents the parameter from being parsed in advance by the shell. If `-codePath` is used to specify a local script instead, the escape character is not required
-
-#### Five, use user configuration
-
-1. linkis-cli supports loading user-defined configuration files. The configuration file path is specified by the `--userConf` parameter, and the configuration file must be in `.properties` format
-        
-```bash
-   ./bin/linkis-client -engineType spark-2.4.3 -codeType sql -code "select count(*) from testdb.test;"  -submitUser hadoop -proxyUser hadoop  --userConf [config file path]
-``` 
-        
-        
-2. Which parameters can be configured?
-
-All parameters can be configured, for example:
-
-cli parameters:
-
-```properties
-   wds.linkis.client.common.gatewayUrl=http://127.0.0.1:9001
-   wds.linkis.client.common.authStrategy=static
-   wds.linkis.client.common.tokenKey=[tokenKey]
-   wds.linkis.client.common.tokenValue=[tokenValue]
-```
-
-parameter:
-
-```properties
-   wds.linkis.client.label.engineType=spark-2.4.3
-   wds.linkis.client.label.codeType=sql
-```
-        
-When Map-type parameters are configured, the format of the key is
-
-        [Map prefix] + [key]
-
-The Map prefix includes:
-
- - ExecutionMap prefix: wds.linkis.client.exec
- - sourceMap prefix: wds.linkis.client.source
- - ConfigurationMap prefix: wds.linkis.client.param.conf
- - runtimeMap prefix: wds.linkis.client.param.runtime
- - labelMap prefix: wds.linkis.client.label
-        
-Note:
-
-1. variableMap does not support configuration
-
-2. When there is a conflict between a configured key and a key entered in the command parameters, the priority is as follows:
-
-        Command parameters > keys in command Map-type parameters > user configuration > default configuration
-        
-Example:
-
-Configure engine startup parameters:
-
-```properties
-   wds.linkis.client.param.conf.spark.executor.instances=3
-   wds.linkis.client.param.conf.wds.linkis.yarnqueue=q02
-```
-        
-Configure labelMap parameters:
-
-```properties
-   wds.linkis.client.label.myLabel=label123
-```
-        
-#### Six, output result set to file
-
-Use the `-outPath` parameter to specify an output directory. linkis-cli will output the result sets to files, and each result set automatically creates its own file. The output format is as follows:
-
-        task-[taskId]-result-[idx].txt
-        
-For example:
-
-        task-906-result-1.txt
-        task-906-result-2.txt
-        task-906-result-3.txt
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/User_Manual/Linkis_Console_User_Manual.md b/Linkis-Doc-master/en_US/User_Manual/Linkis_Console_User_Manual.md
deleted file mode 100644
index 1d6704e..0000000
--- a/Linkis-Doc-master/en_US/User_Manual/Linkis_Console_User_Manual.md
+++ /dev/null
@@ -1,120 +0,0 @@
-Introduction to Computation Governance Console
-==============
-
-> Linkis1.0 has added a new Computation Governance Console page, which provides users with an interactive UI for viewing the execution of Linkis tasks, custom parameter configuration, engine health status, resource surplus, etc., thereby simplifying user development and management efforts.
-
-Structure of Computation Governance Console
-==============
-
-> The Computation Governance Console is mainly composed of the following functional pages:
-
-- [Global History](#Global_History)
-
-- [Resource Management](#Resource_management)
-
-- [Parameter Configuration](#Parameter_Configuration)
-
-- [Global Variables](#Global_Variables)
-
-- [ECM Management](#ECM_management) (Only visible to linkis computing management console administrators)
-
-- [Microservice Management](#Microservice_management) (Only visible to linkis computing management console administrators)
-
-- [FAQ](#FAQ)
-
-> Global history, resource management, parameter configuration, and global variables are visible to all users, while ECM management and microservice management are only visible to linkis computing management console administrators.
-
-> The administrator of the Linkis computing management console can be configured with the following parameter in linkis.properties:
-
-> `wds.linkis.governance.station.admin=hadoop` (multiple administrator usernames are separated by `,`)
-
-Introduction to the functions and use of the Computation Governance Console
-========================
-
-Global history
---------
-
-> ![](Images/Global History Interface.png)
-
-
-> The global history interface provides the user's own Linkis task submission records. The execution status of each task is displayed here, and the reason for a task's failure can be queried by clicking the view button on the left side of the task.
-
-> ![./media/image2.png](Images/Global History Query Button.png)
-
-
-> ![./media/image3.png](Images/task execution log of a single task.png)
-
-
-> For Linkis computing management console administrators, the administrator can view the historical tasks of all users by clicking "switch to administrator view" on the page.
-
-> ![./media/image4.png](Images/Administrator View.png)
-
-
-Resource management
---------
-
-> In the resource management interface, the user can see the status of the engine currently started and the status of resource occupation, and can also stop the engine through the page.
-
-> ![./media/image5.png](Images/Resource Management Interface.png)
-
-
-Parameter configuration
---------
-
-> The parameter configuration interface provides the function of user-defined parameter management. The user can manage the related configuration of the engine in this interface, and the administrator can add application types and engines here.
-
-> ![./media/image6.png](Images/parameter configuration interface.png)
-
-
-> The user can expand all the configuration information in the directory by clicking the application type at the top, then select the engine type within the application, modify the configuration information, and click "Save" for it to take effect.
-
-> Edit directory and new application types are only visible to the administrator. Click the edit button to delete an existing application and engine configuration (note! Deleting an application directly deletes all engine configurations under it and cannot be restored), to add an engine, or click "New Application" to add a new application type.
-
-> ![./media/image7.png](Images/edit directory.png)
-
-
-> ![./media/image8.png](Images/New application type.png)
-
-
-Global variable
---------
-
-> In the global variable interface, users can customize variables for code writing, just click the edit button to add parameters.
-
-> ![./media/image9.png](Images/Global Variable Interface.png)
-
-
-ECM management
--------
-
-> The ECM management interface is used by the administrator to manage the ECMs and all engines. On this interface you can view ECM status information, modify ECM label information, modify ECM status information, and query all engine information under each ECM. It is only visible to the administrator; the administrator configuration method is described in the second chapter of this article.
-
-> ![./media/image10.png](Images/ECM management interface.png)
-
-
-> Click the edit button to edit the label information of the ECM (only part of the labels are allowed to be edited) and modify the status of the ECM.
-
-> ![./media/image11.png](Images/ECM editing interface.png)
-
-
-> Click the instance name of the ECM to view all engine information under the ECM.
-
-> ![](Images/Click the instance name to view engine information.png)
-
-> ![](Images/All engine information under ECM.png)
-
-> Similarly, you can stop the engine on this interface, and edit the label information of the engine.
-
-Microservice management
-----------
-
-> The microservice management interface can view all microservice information under Linkis, and this interface is only visible to the administrator. Linkis's own microservices can be viewed by clicking on the Eureka registration center. The microservices associated with linkis will be listed directly on this interface.
-
-> ![](Images/microservice management interface.png)
-
-> ![](Images/Eureka registration center.png)
-
-FAQ
---------
-
-> To be added.
diff --git a/Linkis-Doc-master/en_US/User_Manual/README.md b/Linkis-Doc-master/en_US/User_Manual/README.md
deleted file mode 100644
index 442a32a..0000000
--- a/Linkis-Doc-master/en_US/User_Manual/README.md
+++ /dev/null
@@ -1,8 +0,0 @@
-# Overview
-
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Linkis considered the scalability of the access method at the beginning of the design. For different access scenarios, Linkis provides front-end access and SDK access. HTTP and WebSocket interfaces are also provided on the basis of front-end interfaces. If you are interested in accessing and using Linkis, you can refer to the following documents:
-
-- [How to use Linkis](How_To_Use_Linkis.md)
-- [Linkis Management Console User Manual](Linkis_Console_User_Manual.md)
-- [Linkis1.0 User Manual](Linkis1.0_User_Manual.md)
-- [Linkis-Cli Usage Document](LinkisCli_Usage_document.md)
diff --git "a/Linkis-Doc-master/zh_CN/API_Documentations/Linkis\344\273\273\345\212\241\346\217\220\344\272\244\346\211\247\350\241\214RestAPI\346\226\207\346\241\243.md" "b/Linkis-Doc-master/zh_CN/API_Documentations/Linkis\344\273\273\345\212\241\346\217\220\344\272\244\346\211\247\350\241\214RestAPI\346\226\207\346\241\243.md"
deleted file mode 100644
index 6e5493c..0000000
--- "a/Linkis-Doc-master/zh_CN/API_Documentations/Linkis\344\273\273\345\212\241\346\217\220\344\272\244\346\211\247\350\241\214RestAPI\346\226\207\346\241\243.md"
+++ /dev/null
@@ -1,171 +0,0 @@
-# Linkis Task Submission and Execution Rest API Document
-
-- All returns of the Linkis Restful interfaces follow this standard return format:
-
-```json
-{
- "method": "",
- "status": 0,
- "message": "",
- "data": {}
-}
-```
-
-**Convention**:
-
- - method: returns the Restful API URI of the request, mainly needed in WebSocket mode.
- - status: returns the status information, where -1 means not logged in, 0 means success, 1 means error, 2 means validation failed, and 3 means no access permission to this interface.
- - data: returns the specific data.
- - message: returns the prompt message of the request. If status is non-zero, message returns the error message, and data may contain a stack field with the specific stack trace.
-
-For more about the specification of the Linkis Restful interfaces, please refer to: [Linkis Restful Interface Specification](https://github.com/WeBankFinTech/Linkis-Doc/blob/master/zh_CN/Development_Documents/Development_Specification/API.md)
-
-### 1).提交执行
-
-- 接口 `/api/rest_j/v1/entrance/execute`
-
-- 提交方式 `POST`
-
-```json
-{
-    "executeApplicationName": "hive", //引擎类型
-    "requestApplicationName": "dss", //客户端服务类型
-    "executionCode": "show tables",
-    "params": {"variable": {}, "configuration": {}},
-    "runType": "hql", //运行的脚本类型
-   "source": {"scriptPath":"file:///tmp/hadoop/1.hql"}
-}
-```
-
-- 接口 `/api/rest_j/v1/entrance/submit`
-
-- 提交方式 `POST`
-
-```json
-{
-    "executionContent": {"code": "show tables", "runType":  "sql"},
-    "params": {"variable": {}, "configuration": {}},
-    "source":  {"scriptPath": "file:///mnt/bdp/hadoop/1.hql"},
-    "labels": {
-        "engineType": "spark-2.4.3",
-        "userCreator": "hadoop-IDE"
-    }
-}
-```
-
-
-- 返回示例
-
-```json
-{
- "method": "/api/rest_j/v1/entrance/execute",
- "status": 0,
- "message": "请求执行成功",
- "data": {
-   "execID": "030418IDEhivebdpdwc010004:10087IDE_hadoop_21",
-   "taskID": "123"  
- }
-}
-```
-
-- execID是用户任务提交到 Linkis 之后,为该任务生成的唯一标识执行ID,为 String 类型,这个ID只在任务运行时有用,类似PID的概念。ExecID 的设计为`(requestApplicationName长度)(executeAppName长度)(Instance长度)${requestApplicationName}${executeApplicationName}${entranceInstance信息ip+port}${requestApplicationName}_${umUser}_${index}`
-
-- taskID 是表示用户提交task的唯一ID,这个ID由数据库自增生成,为 Long 类型
-
-
-### 2).获取状态
-
-- 接口 `/api/rest_j/v1/entrance/${execID}/status`
-
-- 提交方式 `GET`
-
-- 返回示例
-
-```json
-{
- "method": "/api/rest_j/v1/entrance/{execID}/status",
- "status": 0,
- "message": "获取状态成功",
- "data": {
-   "execID": "${execID}",
-   "status": "Running"
- }
-}
-```
-
-### 3).获取日志
-
-- 接口 `/api/rest_j/v1/entrance/${execID}/log?fromLine=${fromLine}&size=${size}`
-
-- 提交方式 `GET`
-
-- 请求参数fromLine是指从第几行开始获取,size是指该次请求获取几行日志
-
-- 返回示例,其中返回的fromLine需要作为下次请求该接口的参数
-
-```json
-{
-  "method": "/api/rest_j/v1/entrance/${execID}/log",
-  "status": 0,
-  "message": "返回日志信息",
-  "data": {
-    "execID": "${execID}",
-	"log": ["error日志","warn日志","info日志", "all日志"],
-	"fromLine": 56
-  }
-}
-```
-
-### 4).获取进度
-
-- 接口 `/api/rest_j/v1/entrance/${execID}/progress`
-
-- 提交方式 `GET`<br>
-
-- 返回示例
-
-```json
-{
-  "method": "/api/rest_j/v1/entrance/{execID}/progress",
-  "status": 0,
-  "message": "返回进度信息",
-  "data": {
-    "execID": "${execID}",
-	"progress": 0.2,
-	"progressInfo": [
-		{
-			"id": "job-1",
-			"succeedTasks": 2,
-			"failedTasks": 0,
-			"runningTasks": 5,
-			"totalTasks": 10
-		},
-		{
-			"id": "job-2",
-			"succeedTasks": 5,
-			"failedTasks": 0,
-			"runningTasks": 5,
-			"totalTasks": 10
-		}
-	]
-  }
-}
-```
-
-### 5).kill任务
-
-- 接口 `/api/rest_j/v1/entrance/${execID}/kill`
-
-- 提交方式 `GET`
-
-```json
-{
- "method": "/api/rest_j/v1/entrance/{execID}/kill",
- "status": 0,
- "message": "OK",
- "data": {
-   "execID":"${execID}"
-  }
-}
-```
-
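-As a minimal illustration of how these endpoints fit together, the sketch below submits a task and then polls its status with Java 11's built-in HTTP client. This is a hedged example, not an official client: the gateway address and the execID extraction are placeholders, and a real session first needs the login call described in the Login document so the CookieManager holds a valid session cookie.
-
-```java
-import java.net.CookieManager;
-import java.net.URI;
-import java.net.http.HttpClient;
-import java.net.http.HttpRequest;
-import java.net.http.HttpResponse;
-
-public class LinkisRestSketch {
-    // Hypothetical gateway address; replace with your Linkis Gateway host:port.
-    private static final String GATEWAY = "http://127.0.0.1:9001";
-
-    public static void main(String[] args) throws Exception {
-        // The CookieManager keeps the session cookie issued by /user/login.
-        HttpClient client = HttpClient.newBuilder().cookieHandler(new CookieManager()).build();
-
-        // 1. Submit: POST /api/rest_j/v1/entrance/submit with labels selecting the engine.
-        String body = "{\"executionContent\": {\"code\": \"show tables\", \"runType\": \"sql\"},"
-                + "\"labels\": {\"engineType\": \"spark-2.4.3\", \"userCreator\": \"hadoop-IDE\"}}";
-        HttpRequest submit = HttpRequest.newBuilder(URI.create(GATEWAY + "/api/rest_j/v1/entrance/submit"))
-                .header("Content-Type", "application/json")
-                .POST(HttpRequest.BodyPublishers.ofString(body)).build();
-        String submitResp = client.send(submit, HttpResponse.BodyHandlers.ofString()).body();
-        System.out.println("submit response: " + submitResp);
-
-        // 2. Parse data.execID out of submitResp (a JSON library would be used in practice),
-        //    then poll GET /api/rest_j/v1/entrance/${execID}/status until it leaves "Running".
-        String execID = "..."; // placeholder: extract from submitResp
-        HttpRequest status = HttpRequest.newBuilder(
-                URI.create(GATEWAY + "/api/rest_j/v1/entrance/" + execID + "/status")).GET().build();
-        System.out.println("status response: " + client.send(status, HttpResponse.BodyHandlers.ofString()).body());
-    }
-}
-```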
diff --git a/Linkis-Doc-master/zh_CN/API_Documentations/Login_API.md b/Linkis-Doc-master/zh_CN/API_Documentations/Login_API.md
deleted file mode 100644
index 01c896f..0000000
--- a/Linkis-Doc-master/zh_CN/API_Documentations/Login_API.md
+++ /dev/null
@@ -1,131 +0,0 @@
-# Login Document
-
-## 1. Connect to the LDAP service
-
-Enter the /conf directory and execute the command:
-
-```bash
-    vim linkis-mg-gateway.properties
-```
-
-Add the LDAP-related configuration:
-```bash
-wds.linkis.ldap.proxy.url=ldap://127.0.0.1:389/ # URL of your LDAP service
-wds.linkis.ldap.proxy.baseDN=dc=webank,dc=com # configuration of your LDAP service
-```
-
-## 2. How to enable test mode for login-free access
-
-Enter the /conf directory and execute the command:
-
-```bash
-     vim linkis-mg-gateway.properties
-```
-
-
-Turn on test mode with the following parameters:
-
-```shell
-    wds.linkis.test.mode=true   # enable test mode
-    wds.linkis.test.user=hadoop  # the user to whom all requests are proxied in test mode
-```
-
-## 3. Summary of login interfaces
-
-We provide the following login-related interfaces:
-
- - [Login](#1login)
-
- - [Logout](#2logout)
-
- - [Heartbeat](#3heartbeat)
-
-
-## 4. Interface details
-
-- All returns of the Linkis Restful API follow the standard return format below:
-
-```json
-{
- "method": "",
- "status": 0,
- "message": "",
- "data": {}
-}
-```
-
-**Conventions**:
-
- - method: returns the Restful API URI of the request, mainly used in WebSocket mode.
- - status: returns the status information, where: -1 means not logged in, 0 means success, 1 means error, 2 means validation failed, and 3 means no access permission to this interface.
- - data: returns the concrete data.
- - message: returns a prompt message for the request. If status is non-zero, message carries the error message, and data may contain a stack field with the detailed stack trace.
-
-For more specifications of the Linkis Restful API, please refer to: [Linkis Restful Interface Specification](https://github.com/WeBankFinTech/Linkis-Doc/blob/master/zh_CN/Development_Documents/Development_Specification/API.md)
-
-### 1). Login
-
-- Interface `/api/rest_j/v1/user/login`
-
-- Method `POST`
-
-```json
-      {
-        "userName": "",
-        "password": ""
-      }
-```
-
-- Response example
-
-```json
-    {
-        "method": null,
-        "status": 0,
-        "message": "login successful!",
-        "data": {
-            "isAdmin": false,
-            "userName": ""
-        }
-     }
-```
-
-Where:
-
- - isAdmin: Linkis only distinguishes admin users from non-admin users. The only privilege of an admin user is being able to view all users' historical tasks in the Linkis management console.
-
-### 2). Logout
-
-- Interface `/api/rest_j/v1/user/logout`
-
-- Method `POST`
-
-  No parameters
-
-- Response example
-
-```json
-    {
-        "method": "/api/rest_j/v1/user/logout",
-        "status": 0,
-        "message": "Logout succeeded!"
-    }
-```
-
-### 3). Heartbeat
-
-- Interface `/api/rest_j/v1/user/heartbeat`
-
-- Method `POST`
-
-  No parameters
-
-- Response example
-
-```json
-    {
-         "method": "/api/rest_j/v1/user/heartbeat",
-         "status": 0,
-         "message": "Heartbeat maintained successfully!"
-    }
-```
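-
-The snippet below sketches how a client might call the login interface before using the task APIs. It assumes Java 11's HttpClient with a CookieManager so the session cookie issued by the gateway is reused on later requests; the host, user name and password are placeholders.
-
-```java
-import java.net.CookieManager;
-import java.net.URI;
-import java.net.http.HttpClient;
-import java.net.http.HttpRequest;
-import java.net.http.HttpResponse;
-
-public class LinkisLoginSketch {
-    public static void main(String[] args) throws Exception {
-        HttpClient client = HttpClient.newBuilder()
-                .cookieHandler(new CookieManager()) // stores the session cookie from /user/login
-                .build();
-        String body = "{\"userName\": \"hadoop\", \"password\": \"***\"}"; // placeholders
-        HttpRequest login = HttpRequest.newBuilder(URI.create("http://127.0.0.1:9001/api/rest_j/v1/user/login"))
-                .header("Content-Type", "application/json")
-                .POST(HttpRequest.BodyPublishers.ofString(body))
-                .build();
-        HttpResponse<String> resp = client.send(login, HttpResponse.BodyHandlers.ofString());
-        // status 0 in the JSON body indicates success; -1 would mean not logged in.
-        System.out.println(resp.body());
-    }
-}
-```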
diff --git a/Linkis-Doc-master/zh_CN/API_Documentations/README.md b/Linkis-Doc-master/zh_CN/API_Documentations/README.md
deleted file mode 100644
index 9f952b6..0000000
--- a/Linkis-Doc-master/zh_CN/API_Documentations/README.md
+++ /dev/null
@@ -1,8 +0,0 @@
-## 1. Document description
-Linkis1.0 has been refactored and optimized on the basis of Linkis0.x, and remains compatible with the 0.x interfaces. However, to avoid compatibility problems when using version 1.0, please read the following documents carefully:
-
-1. When doing customized development with Linkis1.0, you need to use Linkis's authentication interface. Please read the [Login API Document](Login_API.md) carefully.
-
-2. Linkis1.0 provides a JDBC interface. If you need to access Linkis via JDBC, please read the [Task Submission and Execution JDBC API Document](任务提交执行JDBC_API文档.md) carefully.
-
-3. Linkis1.0 provides a Rest interface. If you need to develop upper-layer applications on top of Linkis, please read the [Task Submission and Execution Rest API Document](Linkis任务提交执行RestAPI文档.md) carefully.
\ No newline at end of file
diff --git "a/Linkis-Doc-master/zh_CN/API_Documentations/\344\273\273\345\212\241\346\217\220\344\272\244\346\211\247\350\241\214JDBC_API\346\226\207\346\241\243.md" "b/Linkis-Doc-master/zh_CN/API_Documentations/\344\273\273\345\212\241\346\217\220\344\272\244\346\211\247\350\241\214JDBC_API\346\226\207\346\241\243.md"
deleted file mode 100644
index 1e365be..0000000
--- "a/Linkis-Doc-master/zh_CN/API_Documentations/\344\273\273\345\212\241\346\217\220\344\272\244\346\211\247\350\241\214JDBC_API\346\226\207\346\241\243.md"
+++ /dev/null
@@ -1,46 +0,0 @@
-# Task Submission and Execution JDBC API Document
-
-### 1. Import the dependency module:
-The first way is to depend on the JDBC module in your pom:
-```xml
-<dependency>
-    <groupId>com.webank.wedatasphere.linkis</groupId>
-    <artifactId>linkis-ujes-jdbc</artifactId>
-    <version>${linkis.version}</version>
- </dependency>
-```
-**Note:** This module has not been deployed to the central repository yet. You need to run `mvn install -Dmaven.test.skip=true` in the ujes/jdbc directory to install it locally.
-
-**The second way is via packaging and compilation:**
-1. In the Linkis project, enter the ujes/jdbc directory and run `mvn assembly:assembly -Dmaven.test.skip=true` in the terminal.
-This packaging command skips the unit tests and the compilation of test code, and packages the dependencies required by the JDBC module into the jar.
-2. After packaging, two jars are generated in JDBC's target directory; the one whose name contains "dependencies" is the jar we need.
-
-### 2. Create a test class:
-Create a Java test class LinkisClientImplTestJ; the meaning of each step is explained in the comments:
-```java
-import java.sql.Connection;
-import java.sql.DriverManager;
-import java.sql.ResultSet;
-import java.sql.ResultSetMetaData;
-import java.sql.SQLException;
-import java.sql.Statement;
-
-public class LinkisClientImplTestJ {
-    public static void main(String[] args) throws SQLException, ClassNotFoundException {
-
-        //1. Load the driver class: com.webank.wedatasphere.linkis.ujes.jdbc.UJESSQLDriver
-        Class.forName("com.webank.wedatasphere.linkis.ujes.jdbc.UJESSQLDriver");
-
-        //2. Get a connection: jdbc:linkis://gatewayIP:gatewayPort; account and password match those of the web front end
-        Connection connection = DriverManager.getConnection("jdbc:linkis://127.0.0.1:9001", "username", "password");
-
-        //3. Create a statement and execute the query
-        Statement st = connection.createStatement();
-        ResultSet rs = st.executeQuery("show tables");
-        //4. Process the returned result set (using the ResultSet class)
-        while (rs.next()) {
-            ResultSetMetaData metaData = rs.getMetaData();
-            for (int i = 1; i <= metaData.getColumnCount(); i++) {
-                System.out.print(metaData.getColumnName(i) + ":" + metaData.getColumnTypeName(i) + ": " + rs.getObject(i) + "    ");
-            }
-            System.out.println();
-        }
-        //5. Close resources
-        rs.close();
-        st.close();
-        connection.close();
-    }
-}
-```
\ No newline at end of file
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Commons/messagescheduler.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Commons/messagescheduler.md
deleted file mode 100644
index 4ed47a9..0000000
--- a/Linkis-Doc-master/zh_CN/Architecture_Documents/Commons/messagescheduler.md
+++ /dev/null
@@ -1,15 +0,0 @@
-# Linkis-Message-Scheduler
-## 1. Overview
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Linkis-RPC enables communication between microservices. To simplify the use of RPC, Linkis provides the Message-Scheduler module, which parses, identifies and invokes methods annotated with, for example, @Receiver. It also unifies the usage of the RPC and Restful interfaces and offers better extensibility.
-## 2. Architecture description
-## 2.1. Architecture design diagram
-![Module design diagram](./../../Images/Architecture/Commons/linkis-message-scheduler.png)
-## 2.2. Module description
-* ServiceParser: parses the (Object) instances of the Service module and wraps the methods annotated with @Receiver into ServiceMethod objects.
-* ServiceRegistry: registers the corresponding Service module and stores the parsed ServiceMethods in a Map container.
-* ImplicitParser: parses the objects of the Implicit module; methods annotated with @Implicit are wrapped into ImplicitMethod objects.
-* ImplicitRegistry: registers the corresponding Implicit module and stores the parsed ImplicitMethods in a Map container.
-* Converter: on startup, scans the non-interface, non-abstract subclasses of RequestMethod and stores them in a Map; parses Restful requests and matches the related RequestProtocol.
-* Publisher: implements publishing and dispatching; finds the ServiceMethod matching a RequestProtocol in the Registry, wraps it as a Job and submits it for scheduling.
-* Scheduler: the scheduling implementation; uses Linkis-Scheduler to execute the Job and returns a MessageJob object.
-* TxManager: completes transaction management over Job execution and decides whether to commit or roll back after the Job finishes.
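-
-As a hedged illustration of the dispatch pattern described above (only the @Receiver annotation name and the role of the request protocol are taken from this document; the concrete Linkis package paths and signatures may differ, so the annotation is declared locally to keep the sketch self-contained):
-
-```java
-import java.lang.annotation.ElementType;
-import java.lang.annotation.Retention;
-import java.lang.annotation.RetentionPolicy;
-import java.lang.annotation.Target;
-
-// Stand-in for Linkis's @Receiver annotation, declared here so the sketch compiles alone.
-@Retention(RetentionPolicy.RUNTIME)
-@Target(ElementType.METHOD)
-@interface Receiver {}
-
-public class EngineStatusService {
-
-    // A hypothetical request protocol object, carried over RPC or mapped from a Restful call.
-    public static class EngineStatusRequest {
-        public final String engineInstance;
-        public EngineStatusRequest(String engineInstance) { this.engineInstance = engineInstance; }
-    }
-
-    // ServiceParser would wrap this method into a ServiceMethod; the Publisher then
-    // matches incoming EngineStatusRequest protocols to it and submits a Job to the Scheduler.
-    @Receiver
-    public String dealEngineStatus(EngineStatusRequest request) {
-        return "engine " + request.engineInstance + " is Running";
-    }
-}
-```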
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Commons/rpc.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Commons/rpc.md
deleted file mode 100644
index c89c578..0000000
--- a/Linkis-Doc-master/zh_CN/Architecture_Documents/Commons/rpc.md
+++ /dev/null
@@ -1,17 +0,0 @@
-# Linkis-RPC
-## 1. Overview
-HTTP calls between microservices based on Feign can only satisfy the simple case where an instance of microservice A randomly selects an instance of microservice B according to simple rules; if that instance of B wants to asynchronously send information back to the caller, this is simply impossible.
-Moreover, since Feign only supports simple service-selection rules, it can neither forward a request to a specified microservice instance nor broadcast a request to all instances of the receiving microservice.
-
-## 2. Architecture description
-## 2.1. Architecture design diagram
-![Linkis RPC architecture diagram](./../../Images/Architecture/Commons/linkis-rpc.png)
-## 2.2. Module description
-The functions of the main modules are as follows:
-* Eureka: the service registry, providing user management services and service discovery.
-* Sender: the service request interface; the sending end uses a Sender to request services from the receiving end.
-* Receiver: the interface for receiving and responding to service requests; the receiving end responds to services through this interface.
-* Interceptor: the Sender passes the user's request to the interceptors. The interceptors intercept the request and perform additional functional processing: the broadcast interceptor broadcasts the request, the retry interceptor retries failed requests, the cache interceptor serves simple, immutable requests from a cache, and a default interceptor provides the default implementation.
-* Decoder, Encoder: used for request encoding and decoding.
-* Feign: a lightweight framework for HTTP request invocation, a declarative WebService client used for the underlying communication of Linkis-RPC.
-* Listener: the listening module, mainly used to listen for broadcast requests.
\ No newline at end of file
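-
-A hedged sketch of how the Sender/Receiver pair described above is typically used. The method names mirror the module descriptions in this document; treat the interfaces and the registry lookup as local stand-ins, not the authoritative Linkis API:
-
-```java
-// Pseudo-interfaces mirroring the Sender/Receiver roles above, declared locally
-// so the sketch is self-contained.
-interface Sender {
-    Object ask(Object message);   // synchronous request-reply
-    void send(Object message);    // fire-and-forget
-}
-
-interface Receiver {
-    void receive(Object message);             // handle one-way messages
-    Object receiveAndReply(Object message);   // handle request-reply messages
-}
-
-class RpcSketch {
-    public static void main(String[] args) {
-        Sender sender = lookupSender("linkis-cg-linkismanager"); // resolved via Eureka in Linkis
-        Object reply = sender.ask("RequestEngineStatus");        // interceptors may retry/broadcast/cache
-        System.out.println(reply);
-    }
-
-    // Stand-in for the registry lookup; in Linkis this is handled by the RPC module itself.
-    static Sender lookupSender(String serviceName) {
-        return new Sender() {
-            public Object ask(Object message) { return "OK(" + serviceName + ")"; }
-            public void send(Object message) { /* no-op in this sketch */ }
-        };
-    }
-}
-```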
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/EngineConn/README.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/EngineConn/README.md
deleted file mode 100644
index 45389b1..0000000
--- a/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/EngineConn/README.md
+++ /dev/null
@@ -1,98 +0,0 @@
-EngineConn Architecture Design
-==================
-
-EngineConn: the engine connector, a module that provides functions such as unified configuration management, context service, physical repository, data source management, microservice management and historical task query for the other microservice modules.
-
-1. EngineConn architecture diagram
-
-![EngineConn](../../../Images/Architecture/EngineConn/engineconn-01.png)
-
-Introduction to the second-level modules:
-==============
-
-linkis-computation-engineconn: the interactive engine connector
----------------------------------------------
-
-Provides the capability to run interactive computation tasks.
-
-| Core class           | Core function                                              |
-|----------------------|------------------------------------------------------------|
-| EngineConnTask       | Defines the interactive computation tasks submitted to an EngineConn |
-| ComputationExecutor  | Defines the interactive Executor, with interactive capabilities such as status query and task kill |
-| TaskExecutionService | Provides management of interactive computation tasks       |
-
-linkis-engineconn-common: common module of the engine connector
---------------------------------------------
-
-1.  Defines the most basic entity classes and interfaces of the engine connector. An EngineConn is a connection session created for an underlying computing/storage engine; it contains the session information between the engine and the specific cluster, and acts as the client that communicates with the concrete engine.
-
-| Core service          | Core function                                                        |
-|-----------------------|----------------------------------------------------------------------|
-| EngineCreationContext | Contains the context information of an EngineConn during startup    |
-| EngineConn            | Contains the concrete information of an EngineConn, such as its type and the specific connection information to the underlying computing/storage engine |
-| EngineExecution       | Provides the creation logic for Executors                            |
-| EngineConnHook        | Defines the operations before and after each phase of engine startup |
-
-linkis-engineconn-core: core logic of the engine connector
-------------------------------------------
-
-Defines the interfaces involved in the core logic of EngineConn.
-
-| Core class        | Core function                      |
-|-------------------|------------------------------------|
-| EngineConnManager | Provides the interfaces to create and obtain EngineConns |
-| ExecutorManager   | Provides the interfaces to create and obtain Executors   |
-| ShutdownHook      | Defines the operations of the engine shutdown phase      |
-
-linkis-engineconn-launch: engine connector startup module
-------------------------------------------
-
-Defines the logic of how to start an EngineConn.
-
-| Core class       | Core function            |
-|------------------|--------------------------|
-| EngineConnServer | The startup class of the EngineConn microservice |
-
-linkis-executor-core: core logic of the executor
-------------------------------------
-
->   Defines the core classes related to executors. An executor is the actual computation-scenario executor, responsible for submitting user code to the EngineConn.
-
-| Core class                 | Core function                                              |
-|----------------------------|------------------------------------------------------------|
-| Executor                   | The actual execution unit of computation logic, and the top-level abstraction of the engine's various capabilities |
-| EngineConnAsyncEvent       | Defines the EngineConn-related asynchronous events         |
-| EngineConnSyncEvent        | Defines the EngineConn-related synchronous events          |
-| EngineConnAsyncListener    | Defines the listener for EngineConn-related asynchronous events |
-| EngineConnSyncListener     | Defines the listener for EngineConn-related synchronous events  |
-| EngineConnAsyncListenerBus | Defines the listener bus for EngineConn asynchronous events |
-| EngineConnSyncListenerBus  | Defines the listener bus for EngineConn synchronous events  |
-| ExecutorListenerBusContext | Defines the context of the EngineConn event listeners       |
-| LabelService               | Provides label reporting                                    |
-| ManagerService             | Provides message passing with LinkisManager                 |
-
-linkis-callback-service: callback logic
--------------------------------
-
-| Core class         | Core function            |
-|--------------------|--------------------------|
-| EngineConnCallback | Defines the callback logic of EngineConn |
-
-linkis-accessible-executor: the accessible executor
---------------------------------------------
-
-An Executor that can be accessed. You can interact with it through RPC requests to obtain its basic metrics such as status, load and concurrency.
-
-| Core class               | Core function                                   |
-|--------------------------|-------------------------------------------------|
-| LogCache                 | Provides log caching                            |
-| AccessibleExecutor       | An Executor that can be accessed and interacted with via RPC requests |
-| NodeHealthyInfoManager   | Manages the health information of the Executor  |
-| NodeHeartbeatMsgManager  | Manages the heartbeat information of the Executor |
-| NodeOverLoadInfoManager  | Manages the load information of the Executor    |
-| Listener                 | Provides the events related to the Executor and the corresponding listener definitions |
-| EngineConnTimedLock      | Defines the Executor-level lock                 |
-| AccessibleService        | Provides start/stop and status retrieval of the Executor |
-| ExecutorHeartbeatService | Provides heartbeat-related functions of the Executor |
-| LockService              | Provides lock management                        |
-| LogService               | Provides log management                         |
diff --git "a/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/EngineConnManager/Images/ECM\346\236\266\346\236\204\345\233\276.png" "b/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/EngineConnManager/Images/ECM\346\236\266\346\236\204\345\233\276.png"
deleted file mode 100644
index cc83842..0000000
Binary files "a/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/EngineConnManager/Images/ECM\346\236\266\346\236\204\345\233\276.png" and /dev/null differ
diff --git "a/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/EngineConnManager/Images/\345\210\233\345\273\272EngineConn\350\257\267\346\261\202\346\265\201\347\250\213.png" "b/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/EngineConnManager/Images/\345\210\233\345\273\272EngineConn\350\257\267\346\261\202\346\265\201\347\250\213.png"
deleted file mode 100644
index 303f37a..0000000
Binary files "a/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/EngineConnManager/Images/\345\210\233\345\273\272EngineConn\350\257\267\346\261\202\346\265\201\347\250\213.png" and /dev/null differ
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/EngineConnManager/README.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/EngineConnManager/README.md
deleted file mode 100644
index 2fa0aef..0000000
--- a/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/EngineConnManager/README.md
+++ /dev/null
@@ -1,49 +0,0 @@
-EngineConnManager Architecture Design
--------------------------
-
-EngineConnManager (ECM): the manager of EngineConns, providing lifecycle management of engines while reporting load information and its own health status to RM.
-
-### 1. ECM architecture
-
-![](Images/ECM架构图.png)
-
-### 2. Introduction to the second-level modules
-
-**Linkis-engineconn-linux-launch**
-
-The engine launcher, whose core class is LinuxProcessEngineConnLauch, used to provide the instructions for executing commands.
-
-**Linkis-engineconn-manager-core**
-
-The core module of the ECM, containing the top-level interfaces of ECM health reporting and EngineConn health reporting, defining the relevant metrics of the ECM service and the core methods for constructing an EngineConn process.
-
-| Core top-level interface/class | Core function                            |
-|---------------------|------------------------------------------|
-| EngineConn          | Defines the properties of an EngineConn, and the methods and parameters it contains |
-| EngineConnLaunch    | Defines the start and stop methods of an EngineConn |
-| ECMEvent            | Defines the ECM-related events           |
-| ECMEventListener    | Defines the ECM-related event listeners  |
-| ECMEventListenerBus | Defines the listener bus of the ECM      |
-| ECMMetrics          | Defines the metrics information of the ECM |
-| ECMHealthReport     | Defines the health report information of the ECM |
-| NodeHealthReport    | Defines the health report information of a node |
-
-**Linkis-engineconn-manager-server**
-
-The server side of the ECM, defining top-level interfaces and implementation classes such as the ECM health information processing service, ECM metrics processing service, ECM registration service, EngineConn start service, EngineConn stop service and EngineConn callback service. It is mainly used by the ECM to manage its own lifecycle and those of its EngineConns, report health information, send heartbeats, and so on.
-
-The core services and functions of this module are briefly described below:
-
-| Core service                    | Core function                                   |
-|---------------------------------|-------------------------------------------------|
-| EngineConnLaunchService         | Contains the core methods to generate an EngineConn and start the process |
-| BmlResourceLocallizationService | Used to download the engine-related BML resources and generate the localized file directory |
-| ECMHealthService                | Reports its own health heartbeat to AM periodically |
-| ECMMetricsService               | Reports its own metrics status to AM periodically |
-| EngineConnKillSerivce           | Provides the functions related to stopping an engine |
-| EngineConnListService           | Provides the functions related to caching and managing engines |
-| EngineConnCallBackService       | Provides the function of calling back an engine |
-
-The process by which the ECM builds and starts an EngineConn:
-
-![](Images/创建EngineConn请求流程.png)
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/EngineConnPlugin/README.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/EngineConnPlugin/README.md
deleted file mode 100644
index 798f535..0000000
--- a/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/EngineConnPlugin/README.md
+++ /dev/null
@@ -1,71 +0,0 @@
-EngineConnPlugin (ECP) Architecture Design
-===============================
-
-The engine connector plugin is an implementation that can dynamically load engine connectors and reduce the occurrence of version conflicts; it is easy to extend, fast to refresh and selectively loadable. To let developers freely extend Linkis's engines and dynamically load engine dependencies without version conflicts, EngineConnPlugin was designed and developed, allowing new engines to be introduced into the execution lifecycle of the computation middleware by implementing an established plugin interface.
-The plugin interface decomposes the definition of an engine, covering parameter initialization, engine resource allocation, engine connection construction and setting the engine's default labels.
-
-1. ECP architecture diagram
-
-![](../../../Images/Architecture/linkis-engineConnPlugin-01.png)
-
-Introduction to the second-level modules:
-==============
-
-EngineConn-Plugin-Server
-------------------------
-
-The engine connector plugin server is the entry service that externally provides plugin registration, plugin management and plugin resource construction. A successfully registered and loaded engine plugin contains the logic of resource allocation and startup parameter configuration. During engine initialization, other services such as EngineConn Manager invoke the logic of the corresponding plugin in the Plugin Server through RPC requests.
-
-| Core class                       | Core function                         |
-|----------------------------------|---------------------------------------|
-| EngineConnLaunchService          | Responsible for building the engine connector launch request |
-| EngineConnResourceFactoryService | Responsible for generating engine resources |
-| EngineConnResourceService        | Responsible for downloading from BML the resource files used by the engine connector |
-
-
-EngineConn-Plugin-Loader: the engine connector plugin loader
----------------------------------------
-
-The engine connector plugin loader dynamically loads engine connector plugins according to the request parameters, and has caching capability. The loading process consists of two main parts: 1) loading plugin resources such as the main program package and program dependency packages locally (not yet opened); 2) dynamically loading the plugin resources from the local environment into the service process environment, for example loading them into the JVM via a class loader.
-
-| Core class                      | Core function                                |
-|---------------------------------|----------------------------------------------|
-| EngineConnPluginsResourceLoader | Loads engine connector plugin resources      |
-| EngineConnPluginsLoader         | Loads engine connector plugin instances, or loads existing ones from the cache |
-| EngineConnPluginClassLoader     | Dynamically instantiates an engine connector instance from a jar |
-
-EngineConn-Plugin-Cache: the engine plugin cache module
-----------------------------------------
-
-The engine connector plugin cache is a caching service dedicated to caching loaded engine connectors, supporting read, update and removal. A plugin that has been loaded into the service process is cached together with its class loader to avoid the efficiency cost of repeated loading; meanwhile, the cache module periodically notifies the loader to update plugin resources, and if changes are found, it reloads them and refreshes the cache automatically.
-
-| Core class                  | Core function                |
-|-----------------------------|------------------------------|
-| EngineConnPluginCache       | Caches loaded engine connector instances |
-| RefreshPluginCacheContainer | Periodically refreshes the cached engine connectors |
-
-EngineConn-Plugin-Core: the engine connector plugin core module
----------------------------------------------
-
-The engine connector plugin core module is the core module of the engine connector plugin. It contains the implementation of the basic plugin functions, such as building the engine connector launch command, building the engine resource factory, and the implementations of the core plugin interfaces.
-
-| Core class              | Core function                                            |
-|-------------------------|----------------------------------------------------------|
-| EngineConnLaunchBuilder | Builds the engine connector launch request               |
-| EngineConnFactory       | Creates the engine connector                             |
-| EngineConnPlugin        | The engine connector plugin interface, including the construction methods of resources, commands and instances |
-| EngineResourceFactory   | The creation factory of engine resources                 |
-
-EngineConn-Plugins: the collection of engine connector plugins
------------------------------------
-
-The engine connector plugin collection is the library of default engine connector plugins implemented against the plugin interface we defined. It provides default engine connector implementations such as jdbc, spark, python and shell. Users can implement more engine connectors for their own needs by referring to the examples already implemented.
-
-| Core module         | Core function    |
-|---------------------|------------------|
-| engineplugin-jdbc   | jdbc engine connector   |
-| engineplugin-shell  | shell engine connector  |
-| engineplugin-spark  | spark engine connector  |
-| engineplugin-python | python engine connector |
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/Entrance/Entrance.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/Entrance/Entrance.md
deleted file mode 100644
index 38d3e56..0000000
--- a/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/Entrance/Entrance.md
+++ /dev/null
@@ -1,26 +0,0 @@
-Entrance Architecture Design
-================
-
-The Linkis task submission entrance is the service responsible for receiving, scheduling and forwarding execution requests of computation tasks and for their lifecycle management, and it can return the computation results, logs and progress to the caller. It is a native capability split out of Linkis0.X's Entrance.
-
-1. Entrance architecture diagram
-
-![](../../../Images/Architecture/linkis-entrance-01.png)
-
-**Introduction to the second-level modules:**
-
-EntranceServer
---------------
-
-EntranceServer, the computation task submission entry service, is the core service of Entrance, responsible for receiving, scheduling, tracking the execution status of and managing the lifecycle of the jobs that Linkis executes. It mainly turns a task execution request into a schedulable Job, schedules it, applies for an Executor to execute it, and manages the Job status, result sets and logs.
-
-| Core class              | Core function                                                                                                                                        |
-|-------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------|
-| EntranceInterceptor     | The Entrance interceptor, which supplements the information of the incoming task so that its content is more complete; the supplemented information includes database information, custom variable substitution, code checks, limit restrictions, etc. |
-| EntranceParser          | The Entrance parser, which parses the request parameter Map into a Task; it can also convert a Task into a schedulable Job, or a Job into a storable Task. |
-| EntranceExecutorManager | The Entrance executor manager, which creates Executors for the execution of EntranceJobs, maintains the relationship between Jobs and Executors, and supports the label capabilities of Job requests |
-| PersistenceManager      | The persistence manager, responsible for job-related persistence operations, such as storing result set paths, job status changes and progress into the database. |
-| ResultSetEngine         | The result set engine, responsible for storing the result sets after job execution, saved as files to HDFS or a local storage directory.            |
-| LogManager              | The log manager, responsible for storing job logs and connecting to the log error-code management.                                                  |
-| Scheduler               | The job scheduler, responsible for scheduling and executing all Jobs, mainly implemented via a scheduling job queue.                                 |
-|                         |                                                                                                                                                      |
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/LinkisClient/README.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/LinkisClient/README.md
deleted file mode 100644
index 7d36f0e..0000000
--- a/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/LinkisClient/README.md
+++ /dev/null
@@ -1,35 +0,0 @@
-## Linkis-Client Architecture Design
-
-Provides users with a lightweight client for submitting tasks to Linkis for execution.
-
-#### Linkis-Client architecture diagram
-
-![img](./../../../Images/Architecture/linkis-client-01.png)
-
-
-
-#### Introduction to the second-level modules
-
-##### Linkis-Computation-Client
-
-Provides, in the form of an SDK, the interfaces for users to submit execution tasks to Linkis.
-
-| Core class | Core function                                    |
-| ---------- | ------------------------------------------------ |
-| Action     | Defines the properties of a request, and the methods and parameters it contains |
-| Result     | Defines the properties of a returned result, and the methods and parameters it contains |
-| UJESClient | Responsible for submitting and executing requests, and for obtaining the status, results and related parameters |
-
-
-
-#####  Linkis-Cli
-
-Provides, in the form of a shell command line, a way for users to submit execution tasks to Linkis.
-
-| Core class  | Core function                                                |
-| ----------- | ------------------------------------------------------------ |
-| Common      | Defines the parent classes of instruction templates, the instruction-parsing entity classes, and the parent classes and interfaces of each phase of task submission and execution |
-| Core        | Responsible for parsing input, executing tasks and defining the output format |
-| Application | Calls linkis-computation-client to execute tasks, and pulls logs and the final result in real time |
-
-
\ No newline at end of file
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/LinkisManager/AppManager.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/LinkisManager/AppManager.md
deleted file mode 100644
index c8fba23..0000000
--- a/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/LinkisManager/AppManager.md
+++ /dev/null
@@ -1,45 +0,0 @@
-## Background
-The Entrance module of older Linkis versions carried too many responsibilities, had weak Engine management capability, and was hard to extend. A new AppManager module was therefore carved out to fulfill the following responsibilities:
-1. The new AM module takes over the Engine management functions previously handled by Entrance;
-2. AM needs to support operating Engines, including: creation, reuse, recycling, pre-warming, switching and other functions;
-3. It needs to connect to the Manager module to provide Engine management functions externally, including Engine status maintenance, engine list maintenance, engine information, etc.;
-4. AM needs to manage EM services; it needs to complete EM registration and forward the resource registration to RM for the EM's resource registration;
-5. AM needs to connect to the Label module: adding or removing an EM/Engine requires notifying the label manager to update the labels;
-6. AM also needs to connect to the Label module for label parsing, and needs to obtain a list of scored serverInstances through a series of labels (how are EM and Engine distinguished? 1. their labels are completely different);
-7. It needs to provide basic external interfaces: adding, deleting and modifying engines and engine managers, providing metric queries, etc.
-
-## Architecture diagram
-
-![](../../../Images/Architecture/AppManager-03.png)
-
-As shown above: AM belongs to the AppManager module within LinkisMaster and provides its capabilities as a Service.
-
-Flow chart of a new engine application:
-![](../../../Images/Architecture/AppManager-02.png)
-
-
-As can be seen from the engine lifecycle flow above, Entrance no longer does any Engine management work; the startup and management of engines are controlled by AM.
-
-## Architecture description:
-
-AppManager mainly contains the engine service and the EM service:
-The engine service covers all operations related to engines (EngineConn), such as engine creation, engine reuse, engine switching, engine recycling, engine stopping and engine destruction.
-The EM service is responsible for all EngineConnManager information management. It can manage ECMs online, including label modification, suspending ECM services, obtaining ECM instance information, obtaining information about the engines an ECM runs, and killing an ECM. It can also query all EngineNodes by EM Node information, supports lookup by user, and stores the EM Node's load information, node health information, resource usage information, etc.
-The new EngineConnManager and EngineConn both support label management, and the engine types now also include offline, streaming and interactive support.
-
-Engine creation: the function of the LinkisManager service dedicated to creating new engines. The engine startup module is fully responsible for creating a new engine, including obtaining the ECM label collection, applying for resources, obtaining the engine startup command, notifying the ECM to create the engine, updating the engine list, etc.
-CreateEngienRequest->RPC/Rest -> MasterEventHandler ->CreateEngineService ->
-->LabelContext/EnginePlugin/RMResourcevice->(RcycleEngineService)EngineNodeManager->EMNodeManager->sender.ask(EngineLaunchRequest)->EngineManager service->EngineNodeManager->EngineLocker->Engine->EngineNodeManager->EngineFactory=&gt;EngineService=&gt;ServerInstance
-When creating an engine there is a part that interacts with RM: EnginePlugin should return the concrete resource type through Labels, and then AM sends the resource request to RM.
-
-Engine reuse: to reduce the time and resources consumed by engine startup, reuse must take priority when using engines. Reuse generally means reusing engines the user has already created. The engine reuse module is responsible for providing the collection of reusable engines, electing and locking an engine before use, or reporting that no engine can be reused.
-ReuseEngienRequest->RPC/Rest -> MasterEventHandler ->ReuseEngineService ->
-->LabelContext->EngineNodeManager->EngineSelector->EngineLocker->Engine->EngineNodeManager->EngineReuser->EngineService=&gt;ServerInstance
-
-Engine switching: mainly refers to switching the labels of an existing engine. For example, an engine created by Creator1 can be changed to Creator2 via engine switching; the engine is then allowed to accept tasks labeled Creator2.
-SwitchEngienRequest->RPC/Rest -> MasterEventHandler ->SwitchEngineService ->LabelContext/EnginePlugin/RMResourcevice->EngineNodeManager->EngineLocker->Engine->EngineNodeManager->EngineReuser->EngineService=&gt;ServerInstance
-
-Engine manager: engine management is responsible for managing the basic information and metadata of all engines.
-
-
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/LinkisManager/LabelManager.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/LinkisManager/LabelManager.md
deleted file mode 100644
index 7c21f08..0000000
--- a/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/LinkisManager/LabelManager.md
+++ /dev/null
@@ -1,40 +0,0 @@
-## LabelManager Architecture Design
-
-#### Brief description
-LabelManager is the functional module in Linkis that provides label services to upper-layer applications. It uses label technology to manage cluster resource allocation, service node election, user permission matching and gateway routing/forwarding; it includes generalized parsing/processing tools supporting various custom Label types, as well as a generic label matching scorer.
-
-### Overall architecture diagram
-
-![Overall architecture diagram](../../../Images/Architecture/LabelManager/label_manager_global.png)
-
-#### Architecture description
-- LabelBuilder: undertakes the work of label parsing; it parses a concrete label entity from the input label type, keyword or character value. There is a default generalized implementation class, and custom extensions are possible.
-- LabelEntities: refers to the collection of label entities, including cluster labels, configuration labels, engine labels, node labels, routing labels, search labels, etc.
-- NodeLabelService: the association service interface class between instances/nodes and labels, defining the interface methods for adding, deleting, modifying and querying the associations between the two and for matching instances/nodes by label.
-- UserLabelService: declares the association operations between users and labels.
-- ResourceLabelService: declares the association operations between cluster resources and labels, covering resource management for combined labels, and clearing or setting the resource values associated with a label.
-- NodeLabelScorer: the node label scorer, corresponding to the implementations of different label matching algorithms, using a score to represent how well a node's labels match.
-
-### 1. LabelBuilder parsing flow
-Taking the generalized label parsing class GenericLabelBuilder as an example, the overall flow is:
-![Generalized label parsing flow](../../../Images/Architecture/LabelManager/label_manager_builder.png)
-The flow of label parsing/building includes the following steps:
-1. According to the input, select the appropriate label class to build and parse.
-2. According to the definition information of the label class, recursively parse the generic structure to obtain the concrete label value type.
-3. Convert the input value object to the label value type, using implicit conversion or a forward/backward parsing framework.
-4. Based on the returns of steps 1-3, instantiate the label and perform some post-processing depending on the label class.
-
-### 2. NodeLabelScorer scoring flow
-In order to pick a suitable engine node according to the label list attached to a Linkis user's execution request, the matching engine list needs to be ranked; this is quantified as the label matching degree of an engine node, i.e. its score.
-In the label definitions, each label has a feature value: CORE, SUITABLE, PRIORITIZED or OPTIONAL, and each feature value has a boost value, equivalent to a weight and incentive.
-Some features, such as CORE and SUITABLE, are mandatory and unique: they are strongly filtered during matching, and a node can only be associated with one CORE/SUITABLE label each.
-Based on the relationships between the existing labels, the nodes and the labels attached to the request, the following diagram can be drawn:
-![Label scoring](../../../Images/Architecture/LabelManager/label_manager_scorer.png)
-
-The built-in default scoring logic roughly includes the following steps (a numeric sketch follows the list):
-1. The input of the method should be two groups of relationship lists, namely `Label -> Node` and `Node -> Label`, where a Node in the `Node -> Label` relationships must have all the labels with CORE and SUITABLE features involved in the request. These nodes are also called candidate nodes.
-2. The first step iterates over the `Node -> Label` relationship list, traversing the labels associated with each node and scoring the labels first: if a label is not one attached to the request, it scores 0;
-otherwise its score is: (base score / the number of occurrences of the label's feature value in the request) * the boost value of that feature, where the base score defaults to 1, and a node's initial score is the sum of the scores of its associated labels. Since CORE/SUITABLE labels are mandatory unique labels, their occurrence count is always 1.
-3. After obtaining the initial node scores, the second step iterates over the `Label -> Node` relationships. Step one ignored the effect of labels not attached to the request, but the weight of such irrelevant labels does affect the score; these labels are uniformly tagged with the UNKNOWN feature, which also has a corresponding boost value.
-We stipulate that the higher the proportion of candidate nodes among all nodes associated with an irrelevant label, the more significant its effect on the score; on this basis the initial node scores from step one are further accumulated.
-4. Normalize the scores of the candidate nodes by the standard deviation, and sort them.
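-
-A hedged numeric sketch of the default scoring logic above; the feature boost values are made-up placeholders, as the real values live in the Linkis label definitions:
-
-```java
-import java.util.List;
-import java.util.Map;
-
-public class LabelScoreSketch {
-    // Placeholder boost values per feature; the real ones come from the label definitions.
-    static final Map<String, Double> BOOST = Map.of(
-            "CORE", 2.0, "SUITABLE", 1.5, "PRIORITIZED", 1.2, "OPTIONAL", 1.0);
-
-    /** Step 2: initial score = sum over matched labels of (base / occurrences(feature)) * boost. */
-    static double initialScore(List<String> nodeLabelFeatures, Map<String, Integer> requestFeatureCounts) {
-        double score = 0.0;
-        for (String feature : nodeLabelFeatures) {
-            Integer occurrences = requestFeatureCounts.get(feature);
-            if (occurrences == null) continue; // label not attached to the request: scores 0
-            score += (1.0 / occurrences) * BOOST.get(feature);
-        }
-        return score;
-    }
-
-    /** Step 4: standard-deviation normalization of the candidate node scores. */
-    static double[] normalize(double[] scores) {
-        double mean = 0, sq = 0;
-        for (double s : scores) mean += s / scores.length;
-        for (double s : scores) sq += (s - mean) * (s - mean) / scores.length;
-        double std = Math.sqrt(sq);
-        double[] out = new double[scores.length];
-        for (int i = 0; i < scores.length; i++) out[i] = std == 0 ? 0 : (scores[i] - mean) / std;
-        return out;
-    }
-}
-```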
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/LinkisManager/README.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/LinkisManager/README.md
deleted file mode 100644
index 8670a45..0000000
--- a/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/LinkisManager/README.md
+++ /dev/null
@@ -1,74 +0,0 @@
-LinkisManager Architecture Design
-====================
-
-As an independent microservice of Linkis, LinkisManager externally provides the capabilities of AppManager (application management), ResourceManager (resource management) and LabelManager (label management). It supports multi-active deployment and has the characteristics of high availability and easy extensibility.
-
-## 1. Architecture diagram
-
-![01](../../../Images/Architecture/LinkisManager/LinkisManager-01.png)
-
-### Glossary
-- EngineConnManager (ECM): the engine manager, used to start and manage engines
-- EngineConn (EC): the engine connector, used to connect to the underlying computing engines
-- ResourceManager (RM): the resource manager, used to manage node resources
-
-## 2. Introduction to the second-level modules
-
-### 1. Application management module: linkis-application-manager
-
-AppManager is used for the unified scheduling and management of engines
-
-| Core interface/class | Main function |
-|------------|--------|
-|EMInfoService | Defines EngineConnManager information query and modification |
-|EMRegisterService| Defines EngineConnManager registration |
-|EMEngineService | Defines EngineConnManager's creation, query and shutdown of EngineConns |
-|EngineAskEngineService | Defines the logic of asking for an EngineConn |
-|EngineConnStatusCallbackService | Defines the handling of EngineConn status callbacks |
-|EngineCreateService | Defines the creation of EngineConns |
-|EngineInfoService | Defines EngineConn queries |
-|EngineKillService | Defines the stopping of EngineConns |
-|EngineRecycleService | Defines the recycling of EngineConns |
-|EngineReuseService | Defines the reuse of EngineConns |
-|EngineStopService | Defines the self-destruction of EngineConns |
-|EngineSwitchService | Defines engine switching |
-|AMHeartbeatService | Provides heartbeat processing for EngineConnManager and EngineConn nodes |
-
-
-The flow of applying for an engine through AppManager is as follows:
-![](../../../Images/Architecture/LinkisManager/AppManager-01.png)
-
-
-### 2. Label management module: linkis-label-manager
-
-LabelManager provides label management and parsing capabilities
-
-| Core interface/class | Main function |
-|------------|--------|
-|LabelService | Provides adding, deleting, modifying and querying of labels |
-|ResourceLabelService | Provides resource label management |
-|UserLabelService | Provides user label management |
-
-The LabelManager architecture diagram is as follows:
-![](../../../Images/Architecture/LinkisManager/LabelManager-01.png)
-
-
-
-### 3. Resource management module: linkis-resource-manager
-
-ResourceManager is used to manage all resource allocation of engines and queues
-
-| Core interface/class | Main function |
-|------------|--------|
-|RequestResourceService | Provides EngineConn resource application |
-|ResourceManagerService | Provides EngineConn resource release |
-|LabelResourceService | Provides management of the resources corresponding to labels |
-
-
-The ResourceManager architecture diagram is as follows:
-
-![](../../../Images/Architecture/LinkisManager/ResourceManager-01.png)
-
-### 4. Monitoring module: linkis-manager-monitor
-
-Monitor provides node status monitoring functions
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/LinkisManager/ResourceManager.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/LinkisManager/ResourceManager.md
deleted file mode 100644
index 1c7bb99..0000000
--- a/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/LinkisManager/ResourceManager.md
+++ /dev/null
@@ -1,145 +0,0 @@
-ResourceManager (RM for short) is Linkis's computing resource management module. All EngineConns (EC for short), EngineConnManagers (ECM for short), and even external resources including Yarn, are managed in a unified way by RM. RM can control resources at the granularity of users, ECMs, or anything else defined through complex labels.
-
-### The role of RM in Linkis
-![01](../../../Images/Architecture/rm-01.png)
-![02](../../../Images/Architecture/rm-02.png)
-As part of Linkis
-Manager, RM's main roles are: maintaining the available resource information reported by the ECMs, handling the resource applications raised by the ECMs, recording the actual resource usage reported in real time by ECs during their lifecycle after a successful application, and providing the interfaces for querying the current resource usage.
-
-In Linkis, the other services that interact with RM are mainly:
-
-1.  The engine manager, ECM for short: the microservice that handles requests to start engine connectors. As a resource provider, the ECM is responsible for registering resources (register) and unregistering resources (unregister) with RM. At the same time, as the manager of engines, the ECM is responsible for applying to RM for resources on behalf of new engine connectors about to start. Every ECM instance has a corresponding resource record in RM, containing information such as the total resources and protected resources it provides, with its used resources updated dynamically.
-![03](../../../Images/Architecture/rm-03.png)
-2.  The engine connector, EC for short, the actual execution unit of user jobs. As the actual user of resources, the EC is responsible for reporting its actual resource usage to RM. Every EC has a corresponding resource record in RM: during startup it appears as locked resources; while running, as used resources; and after it ends, the resource record is deleted.
-![04](../../../Images/Architecture/rm-04.png)
-### Resource types and formats
-![05](../../../Images/Architecture/rm-05.png)
-As shown above, all resource classes implement a top-level Resource interface, which defines the calculation and comparison methods that all resource classes need to support, and overloads the corresponding mathematical operators, so that resources can be calculated and compared directly like numbers.
-
-| Operator | Method      | Operator | Method      |
-|--------|-------------|--------|-------------|
-| \+     | add         | \>     | moreThan    |
-| \-     | minus       | \<     | lessThan    |
-| \*     | multiply    | =      | equals      |
-| /      | divide      | \>=    | notLessThan |
-| \<=    | notMoreThan |        |             |
-
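-A hedged Java-flavored sketch of the operator-to-method mapping above; the real Resource classes in Linkis are Scala, and the concrete field sets differ per resource type:
-
-```java
-// Minimal Resource sketch: the method names mirror the operator table above;
-// only memory and CPU are modeled here, as in LoadResource.
-public class LoadResourceSketch {
-    final long memory; // bytes
-    final int cores;
-
-    LoadResourceSketch(long memory, int cores) { this.memory = memory; this.cores = cores; }
-
-    LoadResourceSketch add(LoadResourceSketch r)   { return new LoadResourceSketch(memory + r.memory, cores + r.cores); }
-    LoadResourceSketch minus(LoadResourceSketch r) { return new LoadResourceSketch(memory - r.memory, cores - r.cores); }
-    boolean moreThan(LoadResourceSketch r)    { return memory > r.memory && cores > r.cores; }
-    boolean notLessThan(LoadResourceSketch r) { return memory >= r.memory && cores >= r.cores; }
-
-    public static void main(String[] args) {
-        LoadResourceSketch total = new LoadResourceSketch(8L << 30, 8);
-        LoadResourceSketch used  = new LoadResourceSketch(2L << 30, 2);
-        // Remaining available = maximum available - used (- locked - protected, omitted here)
-        System.out.println(total.minus(used).notLessThan(used)); // true
-    }
-}
-```
-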
-The currently supported resource types are shown in the table below. All resources have corresponding json serialization and deserialization methods, so they can be stored in json format and passed across the network:
-
-| Resource type         | Description                                            |
-|-----------------------|--------------------------------------------------------|
-| MemoryResource        | Memory resource                                        |
-| CPUResource           | CPU resource                                           |
-| LoadResource          | Resource with both memory and CPU                      |
-| YarnResource          | Yarn queue resource (queue, queue memory, queue CPU, number of queue instances) |
-| LoadInstanceResource  | Server resource (memory, CPU, number of instances)     |
-| DriverAndYarnResource | Driver and executor resource (both server resource and Yarn queue resource) |
-| SpecialResource       | Other custom resources                                 |
-
-### Available resource management
-
-The available resources in RM come mainly from two sources: the available resources reported by the ECMs, and the resource limits configured by label in the Configuration module.
-**ECM resource reporting**:
-
-1.  When an ECM starts, it broadcasts an ECM registration message. After receiving the message, RM registers the resources according to the content of the message. The resource-related content includes:
-
-    1.  Total resources: the total amount of resources the ECM can provide.
-
-    2.  Protected resources: when the remaining resources fall below these, no further resource allocation is allowed.
-
-    3.  Resource type: type names such as LoadResource, DriverAndYarnResource, etc.
-
-    4.  Instance information: machine name plus port name.
-
-2.  After receiving the resource registration request, RM adds a record to the resource table, with content consistent with the parameter information of the interface, finds the label representing the ECM through the instance information, and adds an association record in the resource-label association table.
-
-3.  When an ECM shuts down, it broadcasts an ECM shutdown message. After receiving the message, RM takes the resources offline according to the ECM instance information in the message, i.e. deletes the resource and association records corresponding to the ECM instance label.
-
-**Label resource configuration in the Configuration module**:
-
-In the Configuration module, users can configure resource limits by different label combinations, for example limiting the maximum available resources of the User/Creator/EngineType combination.
-
-Via RPC messages, RM queries the Configuration module for resource information with the combined labels as the query condition, converts it into Resource objects, and uses them in subsequent comparisons and records.
-
-
-### Resource usage management
-
-**Receiving users' resource applications.**
-
-1.  When LinkisManager receives a request to start an EngineConn, it calls RM's resource application interface to apply for resources. The resource application interface accepts an optional time parameter; when the waiting time of a resource application exceeds the limit of this time parameter, the application is automatically treated as failed.
-
-**Judging whether there are enough resources**
-
-That is, judging whether the remaining available resources are greater than the requested resources. If greater than or equal, the resources are sufficient; otherwise they are insufficient.
-
-1.  RM preprocesses the label information attached to the resource application, filtering, combining and converting the original labels according to rules (such as combining the User/Creator label with the EngineType label), which makes the granularity of the subsequent resource judgment more flexible.
-
-2.  A lock is taken on each converted label one by one, so that the resource records they correspond to remain unchanged while the resource application is being processed.
-
-3.  For each label:
-
-    1.  Query the corresponding resource record from the database through the Persistence module. If the record contains remaining available resources, use it directly for comparison.
-
-    2.  If there is no direct record of remaining available resources, compute it with the formula [remaining available resources = maximum available resources - used resources - locked resources - protected resources].
-
-    3.  If there is no record of maximum available resources, request the Configuration module to see whether resource information is configured; if so, use it in the formula; if not, skip the resource judgment for this label.
-
-    4.  If there is no resource record at all, skip the resource judgment for this label.
-
-4.  As soon as one label is judged to have insufficient resources, the resource application fails, and each label is unlocked one by one.
-
-5.  Only when all labels are judged to have sufficient resources does the resource application pass, proceeding to the next step.
-
-**Locking the granted resources**
-
-1.  According to the amount of resources granted, a new record is generated in the resource table and associated with each label.
-
-2.  If the corresponding label has a record of remaining available resources, the corresponding amount is deducted.
-
-3.  A timed task is generated to check after a certain time whether this batch of locked resources has actually been used; if unused after the timeout, they are forcibly reclaimed.
-
-4.  Each label is unlocked.
-
-**Reporting actual resource usage**
-
-1.  After an EngineConn starts, it broadcasts a resource usage message. After receiving the message, RM checks whether the label corresponding to the EngineConn has a locked-resource record; if not, an error is reported.
-
-2.  If there are locked resources, all labels associated with the EngineConn are locked.
-
-3.  For each label, the corresponding locked-resource record is converted into a used-resource record.
-
-4.  All labels are unlocked.
-
-**Releasing actually used resources**
-
-1.  After an EngineConn ends its lifecycle, it broadcasts a resource recycling message. After receiving the message, RM checks whether the label corresponding to the EngineConn has a used-resource record.
-
-2.  If so, all labels associated with the EngineConn are locked.
-
-3.  For each label, the corresponding amount is subtracted from the used-resource record.
-
-4.  If the corresponding label has a record of remaining available resources, the corresponding amount is added back.
-
-5.  Each label is unlocked
-
-
-### External resource management
-
-In RM, in order to classify resources more flexibly and extensibly, and to support resource control over multiple clusters while making it easier to access new external resources, the following points were considered in the design:
-
-1.  Resources are managed uniformly through labels. After a resource is registered, it is associated with labels, so that the attributes of resources can be extended indefinitely. Resource applications also carry labels, enabling flexible matching.
-
-2.  A cluster is abstracted into one or more labels, and the environment information corresponding to each cluster label is maintained in the external resource management module, achieving dynamic docking.
-
-3.  A general external resource management module is abstracted out. To access a new type of external resource, it is enough to implement a fixed interface; the different types of resource information are converted into RM's Resource entities, achieving unified management.
-![06](../../../Images/Architecture/rm-06.png)
-The other modules of RM obtain external resource information through the interfaces provided by ExternalResourceService.
-
-ExternalResourceService obtains information about external resources through resource types and labels:
-
-1.  The types, labels, configurations and other attributes of all external resources (such as cluster names, Yarn web
-    urls, Hadoop versions and other information) are maintained in the linkis\_external\_resource\_provider table.
-
-2.  For each resource type, there is an implementation of the ExternalResourceProviderParser interface, which parses the attributes of the external resource, converts the information that can be matched to Labels into the corresponding Labels, and converts the information that can be used as parameters to request the resource interfaces into params. Finally an ExternalResourceProvider instance is built that can serve as the basis for querying external resource information.
-
-3.  According to the resource type and label information in the parameters of the ExternalResourceService methods, the matching ExternalResourceProvider is found, an ExternalResourceRequest is generated from its information, and the API provided by the external resource is formally called, initiating the resource information request.
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/README.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/README.md
deleted file mode 100644
index 76ab242..0000000
--- a/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/README.md
+++ /dev/null
@@ -1,66 +0,0 @@
-## **Background**
-
-**The architecture of Linkis0.X mainly has the following problems**
-
-1. The core processing flow and the boundaries of the hierarchical modules are blurred:
-
--   The functional boundary between Entrance and EngineManager is blurred
-
--   The main flow of task submission and execution is not clear enough
-
--   Extending a new engine is troublesome and requires implementing code in multiple modules
-
--   Only computation request scenarios are supported; storage request scenarios and the resident service mode (Cluster) are hard to support
-
-2. Demand for richer and more powerful computation governance functions:
-
--   Insufficient support for computation task management strategies
-
--   The label capability is not powerful enough, which restricts computation strategies and resource management
-
-The new architecture of Linkis1.0's computation governance services solves these problems well.
-
-## **Architecture diagram**
-![](../../Images/Architecture/linkis-computation-gov-01.png)
-
-**Job flow optimization:**
-Linkis1.0 optimizes the overall execution flow of a Job, fully upgrading Linkis's Job execution architecture across the three phases of submission —\> preparation —\>
-execution, as shown below:
-
-![](../../Images/Architecture/linkis-computation-gov-02.png)
-
-## **Architecture description**
-
-### 1. Entrance
-
- As the submission entrance of computation-type tasks, Entrance provides the capabilities of task reception, scheduling and Job information forwarding. It is a native capability split out of Linkis0.X's Entrance;
-
- [Enter the Entrance architecture design](./Entrance/Entrance.md)
-
-### 2. Orchestrator
-
- As the entrance of the preparation phase, Orchestrator inherits from Linkis0.X's Entrance the capabilities of parsing Jobs, applying for Engines and submitting for execution; at the same time, Orchestrator provides powerful orchestration and computation strategy capabilities, meeting the needs of multiple application scenarios such as multi-active, active-standby, transactions, replay, rate limiting, heterogeneous and mixed computation.
-
- [Enter the Orchestrator architecture design](../Orchestrator/README.md)
-
-### 3. LinkisManager
-
- As the management brain of Linkis, LinkisManager is mainly composed of AppManager, ResourceManager, LabelManager and EngineConnPlugin.
-
- 1. ResourceManager not only has Linkis0.X's resource management capability over Yarn and the Linkis EngineManager, but also provides label-based multi-level resource allocation and recycling, giving ResourceManager full resource management capability across clusters and across computing resource types;
- 2. AppManager coordinates and manages all EngineConnManagers and EngineConns; the application, reuse, creation, switching and destruction of an EngineConn's lifecycle are all handed over to AppManager for management; and LabelManager, based on multi-level combined labels, provides cross-IDC and cross-cluster EngineConn and EngineConnManager routing and control capabilities;
- 3. EngineConnPlugin is mainly used to lower the access cost of new computing storage, so that users can access a brand-new computing storage engine by implementing just one class.
-
- [Enter the LinkisManager architecture design](./LinkisManager/README.md)
-
-### 4. EngineConnManager
-
- EngineConnManager (ECM) is a simplified and upgraded version of Linkis0.X's EngineManager. The ECM under Linkis1.0 removes the engine application capability; the whole microservice is completely stateless and focuses on supporting the startup and destruction of all kinds of EngineConns.
-
- [Enter the EngineConnManager architecture design](./EngineConnManager/README.md)
-
-### 5. EngineConn
-
-EngineConn is an optimized and upgraded version of Linkis0.X's Engine. It provides two major modules, EngineConn and Executor: EngineConn connects the underlying computing storage engines and provides a Session that bridges them; based on this Session, Executor provides full-stack computing support for interactive computation, streaming computation, offline computation and data storage.
-
-[Enter the EngineConn architecture design](./EngineConn/README.md)
diff --git "a/Linkis-Doc-master/zh_CN/Architecture_Documents/EngineConn\346\226\260\345\242\236\346\265\201\347\250\213.md" "b/Linkis-Doc-master/zh_CN/Architecture_Documents/EngineConn\346\226\260\345\242\236\346\265\201\347\250\213.md"
deleted file mode 100644
index 7be886a..0000000
--- "a/Linkis-Doc-master/zh_CN/Architecture_Documents/EngineConn\346\226\260\345\242\236\346\265\201\347\250\213.md"
+++ /dev/null
@@ -1,111 +0,0 @@
-# EngineConn Creation Flow
-
-The creation of an EngineConn is one of the core flows of the computation task preparation phase of Linkis computation governance. It covers the whole flow in which the Client (Entrance or a user client) initiates a request to LinkisManager to create a new EngineConn, LinkisManager asks an EngineConnManager to start the EngineConn for the user on demand and according to label rules, waits for the EngineConn to finish starting, and returns the usable EngineConn to the Client.
-
-As shown in the figure below, let us explain the whole flow in detail:
-
-![EngineConn creation flow](../Images/Architecture/EngineConn新增流程/EngineConn新增流程.png)
-
-## 1. LinkisManager receives the client request
-
-**Glossary**:
-
-- LinkisManager: the management hub of Linkis's computation governance capabilities. Its main responsibilities are:
-  1. Based on multi-level combined labels, provide users with usable EngineConns after complex routing, resource control and load balancing;
-
-  2. Provide full lifecycle management of ECs and ECMs;
-
-  3. Provide users with multi-Yarn-cluster resource management based on multi-level combined labels. It is mainly divided into three modules: AppManager (application manager), ResourceManager (resource manager) and LabelManager (label manager); it supports multi-active deployment and has the characteristics of high availability and easy extensibility.
-
-&nbsp;&nbsp;&nbsp;&nbsp;After the AM module receives the Client's request to create an EngineConn, it first validates the request parameters; secondly it selects the most suitable EngineConnManager (ECM) through complex rules for the later EngineConn startup; next it applies to RM for the resources needed to start the EngineConn; finally it requests the ECM to create the EngineConn.
-
-The four steps are described in detail below.
-
-### 1. Request parameter validation
-
-&nbsp;&nbsp;&nbsp;&nbsp;After receiving the engine creation request, the AM module first checks the parameters: it checks the permissions of the requesting user and the creating user, and then the Labels attached to the request. Since in AM's subsequent creation flow the Labels are used to find the ECM and record resource information, it must be ensured that the necessary Labels are present. At this stage the Labels that must be attached are UserCreatorLabel (e.g. hadoop-IDE) and EngineTypeLabel (e.g. spark-2.4.3).
-
-### 2. EngineConnManager (ECM) selection
-
-&nbsp;&nbsp;&nbsp;&nbsp;ECM selection uses the Labels passed by the client to choose a suitable ECM service to start the EngineConn. In this step, LabelManager first searches among the registered ECMs with the Labels passed by the client, returning them in order of label match degree. After obtaining the list of registered ECMs, rules are applied to select among them; at this stage, rules for availability checking, remaining resources and machine load have been implemented. After rule selection, the ECM with the best-matching labels, the most idle resources and the lowest load is returned.
-
-### 3. EngineConn resource application
-
-1. After obtaining the assigned ECM, AM then calls the EngineConnPluginServer service to find out how many resources this engine creation request will use. It wraps a resource request - mainly containing the Labels, the EngineConn startup parameters passed by the Client, and the user configuration parameters obtained from the Configuration module - and obtains the resource information via an RPC call to the ECP service.
-
-2. After receiving the resource request, the EngineConnPluginServer service finds the corresponding engine label through the labels passed in, and selects the EngineConnPlugin of the corresponding engine through the engine label. Then it uses the EngineConnPlugin's resource factory to calculate, from the engine startup parameters passed in by the client, the resources needed for this new EngineConn, and returns them to LinkisManager.
-
-   **Glossary:**
-- EngineConnPlugin: the interface that must be implemented for Linkis to connect a new computing storage engine. This interface mainly contains the several capabilities this kind of EngineConn must provide during startup, including the EngineConn resource factory, the EngineConn launch command builder and the EngineConn engine connector. For a concrete implementation, see the Spark engine's implementation class: [SparkEngineConnPlugin](https://github.com/WeBankFinTech/Linkis/blob/master/linkis-engineconn-plugins/engineconn-plugins/spark/src/main/scala/com/webank/wedatasphere/linkis/engineplugin/spark/SparkEngineConnPlugin.scala).
-
-- EngineConnPluginServer: the microservice that loads all the EngineConnPlugins and externally provides the capability to generate the resources an EngineConn needs and its launch command.
-
-- EngineConnPlugin resource factory (EngineConnResourceFactory): calculates, from the parameters passed in, the total resources needed for this EngineConn startup.
-
-- EngineConn launch command builder (EngineConnLaunchBuilder): generates, from the parameters passed in, the launch command of this EngineConn, to be given to the ECM to start the engine.
-3. After obtaining the engine resources, AM then calls the RM service to apply for resources. The RM service judges the application based on the Labels, the ECM and the resources requested this time: it first checks whether the resources of the client's corresponding Labels are sufficient, then whether the resources of the ECM service are sufficient; if sufficient, the resource application passes, and the resources are added/subtracted on the corresponding Labels.
-
-### 4. Requesting the ECM to create the engine
-
-1. After completing the engine's resource application, AM wraps the engine startup request and sends it via RPC to the corresponding ECM to start the service, obtaining the EngineConn instance object;
-2. AM then judges from the EngineConn's reported information whether the EngineConn has started successfully and become usable; if so, the result is returned, and the flow of creating an engine ends.
-
-## 2. The ECM starts the EngineConn
-
-Glossary:
-
-- EngineConnManager (ECM): the manager of EngineConns, providing lifecycle management of engines while reporting load information and its own health status to RM.
-
-- EngineConnBuildRequest: the engine startup command that LinkisManager passes to the ECM, wrapping all the label information, required resources and some parameter configuration information of the engine.
-
-- EngineConnLaunchRequest: contains the information needed to start an EngineConn - the BML materials, environment variables, the environment variables required locally on the ECM, the launch command, etc. - so that the ECM can build a complete EngineConn startup script from it.
-
-After receiving the EngineConnBuildRequest command passed by LinkisManager, the ECM starts the EngineConn in three main steps: 1. request EngineConnPluginServer to obtain the EngineConnLaunchRequest wrapped by EngineConnPluginServer; 2. parse the EngineConnLaunchRequest and wrap it into an EngineConn startup script; 3. execute the startup script to start the EngineConn.
-
-### 2.1 EngineConnPluginServer wraps the EngineConnLaunchRequest
-
-Through the label information of the EngineConnBuildRequest, obtain the EngineConn type and corresponding version that actually needs to be started, obtain the EngineConnPlugin of that EngineConn type from the memory of EngineConnPluginServer, and convert the EngineConnBuildRequest into an EngineConnLaunchRequest through that EngineConnPlugin's EngineConnLaunchBuilder.
-
-### 2.2 Wrapping the EngineConn startup script
-
-After obtaining the EngineConnLaunchRequest, the ECM downloads the BML materials in the EngineConnLaunchRequest locally and checks whether the local environment variables required by the EngineConnLaunchRequest exist. After validation passes, the EngineConnLaunchRequest is wrapped into an EngineConn startup script
-
-### 2.3 Executing the startup script
-
-Currently the ECM only supports Bash commands for Unix systems, i.e. only Linux systems can execute the startup script.
-
-Before starting, the sudo command is used to switch to the corresponding requesting user to execute the script, ensuring that the startup user (i.e. the JVM user) is the requesting user on the Client side.
-
-After executing the startup script, the ECM monitors the execution status and execution log of the script in real time. As soon as the execution status returns non-zero, it immediately reports the EngineConn startup failure to LinkisManager, and the whole flow completes; otherwise it keeps monitoring the log and status of the startup script until the script finishes.
-
-## 3. EngineConn initialization
-
-After the ECM has executed the EngineConn's startup script, the EngineConn microservice officially starts.
-
-Glossary:
-
-- EngineConn microservice: the actual microservice, containing one EngineConn and one or more Executors, that provides computing capability for computation tasks. When we say creating an EngineConn, we actually mean creating an EngineConn microservice.
-
-- EngineConn: the engine connector, the actual connection unit to the underlying computing storage engine, containing the session information with the actual engine. Its difference from Executor is that EngineConn only acts as a connection, a client, and does not actually perform computation. For example, for SparkEngineConn, its session information is the SparkSession.
-
-- Executor: the executor, the actual computing storage scenario executor and the actual execution unit of computing storage logic. It is the concrete abstraction of the EngineConn's various capabilities, providing multiple architectural capabilities such as interactive execution, subscription execution and reactive execution.
-
-The initialization of an EngineConn microservice generally has three phases:
-
-1. Initialize the EngineConn of the concrete engine. First use the command-line parameters of the Java main method to wrap an EngineCreationContext containing the relevant label information, startup information and parameter information, and initialize the EngineConn through the EngineCreationContext, establishing the connection between the EngineConn and the underlying Engine. For example: a SparkEngineConn initializes a SparkSession at this stage, used to establish connectivity with a Spark application.
-
-2. Initialize the Executor. After the EngineConn is initialized, the corresponding Executor is initialized according to the actual usage scenario, providing service capabilities for subsequent users. For example: the SparkEngineConn of an interactive computation scenario initializes a series of Executors capable of submitting and executing SQL, PySpark and Scala code, supporting the Client to submit SQL, PySpark, Scala and other code to this SparkEngineConn.
-
-3. Report heartbeats to LinkisManager periodically, and wait for the EngineConn to end and exit. When the underlying engine corresponding to the EngineConn becomes abnormal, or the maximum idle time is exceeded, or the Executor finishes execution, or the user manually kills it, the EngineConn automatically ends and exits.
-
-----
-
-At this point, the EngineConn creation flow is basically finished. Finally, let us summarize the EngineConn creation flow:
-
-- The client initiates an EngineConn creation request to LinkisManager;
-
-- LinkisManager validates the parameters, first selects a suitable ECM through the labels, then confirms the resources needed for this EngineConn creation according to the user's request, applies for resources from LinkisManager's RM module, and after approval requires the ECM to start a new EngineConn as required;
-
-- The ECM first requests EngineConnPluginServer to obtain an EngineConnLaunchRequest containing the BML materials, environment variables, ECM-local required environment variables, launch command and other information needed to start an EngineConn, then wraps the EngineConn's startup script, and finally executes the script to start the EngineConn;
-
-- The EngineConn initializes the EngineConn of the concrete engine, and then initializes the corresponding Executor according to the actual usage scenario, providing service capabilities for subsequent users. Finally, it reports heartbeats to LinkisManager periodically, waiting for a normal exit or termination by the user.
diff --git "a/Linkis-Doc-master/zh_CN/Architecture_Documents/Job\346\217\220\344\272\244\345\207\206\345\244\207\346\211\247\350\241\214\346\265\201\347\250\213.md" "b/Linkis-Doc-master/zh_CN/Architecture_Documents/Job\346\217\220\344\272\244\345\207\206\345\244\207\346\211\247\350\241\214\346\265\201\347\250\213.md"
deleted file mode 100644
index a166df4..0000000
--- "a/Linkis-Doc-master/zh_CN/Architecture_Documents/Job\346\217\220\344\272\244\345\207\206\345\244\207\346\211\247\350\241\214\346\265\201\347\250\213.md"
+++ /dev/null
@@ -1,165 +0,0 @@
-# Job Submission, Preparation and Execution Flow
-
-The submission and execution of a computation task (Job) is the core capability provided by Linkis. It threads through almost all modules of the Linkis computation governance architecture and occupies a central position in Linkis.
-
-We divide the whole flow of a user's computation task, from client submission to the final return of the result, into three phases: submission -> preparation -> execution, as shown below:
-
-![Overall flow chart of a computation task](../Images/Architecture/Job提交准备执行流程/计算任务整体流程图.png)
-
-Where:
-
-- Entrance, as the entrance of the submission phase, provides task reception, scheduling and Job information forwarding, and is the unified entry of all computation-type tasks; it forwards computation tasks to Orchestrator for orchestration and execution;
-
-- Orchestrator, as the entrance of the preparation phase, mainly provides Job parsing, orchestration and execution capabilities.
-
-- Linkis Manager: the management hub of the computation governance capabilities. Its main responsibilities are:
-
-  1. ResourceManager: not only has the resource management capability over Yarn and Linkis EngineConnManager, but also provides label-based multi-level resource allocation and recycling, giving ResourceManager full resource management capability across clusters and across computing resource types;
-
-  2. AppManager: coordinates and manages all EngineConnManagers and EngineConns; the application, reuse, creation, switching and destruction of an EngineConn's lifecycle are all handed over to AppManager for management;
-
-  3. LabelManager: based on multi-level combined labels, provides label support for the routing and control of EngineConns and EngineConnManagers across IDCs and clusters;
-
-  4. EngineConnPluginServer: externally provides the capability to generate the resources needed to start an EngineConn and the EngineConn's launch command.
-
-- EngineConnManager: the manager of EngineConns, providing lifecycle management of engines while reporting load information and its own health status to RM.
-
-- EngineConn: the actual connector between Linkis and the underlying computing storage engines. All the user's computing storage tasks are ultimately handed over by the EngineConn to the underlying computing storage engines. According to the user's different usage scenarios, EngineConn provides full-stack computing framework support for interactive computation, streaming computation, offline computation and data storage tasks.
-
-Next, we will introduce the three phases of a computation task - submission -> preparation -> execution - in detail.
-
-## 1. Submission phase
-
-The submission phase is mainly the interaction Client -> Linkis Gateway -> Entrance, and its flow is as follows:
-
-![Flow chart of the submission phase](../Images/Architecture/Job提交准备执行流程/提交阶段流程图.png)
-
-1. First, the Client (e.g. the front end or a client) initiates a Job request. A simplified Job request is shown below (for concrete usage of Linkis, please refer to [How to use Linkis](https://github.com/WeBankFinTech/Linkis-Doc/blob/master/zh_CN/User_Manual/How_To_Use_Linkis.md)):
-
-```
-POST /api/rest_j/v1/entrance/submit
-```
-
-```json
-{
-    "executionContent": {"code": "show tables", "runType": "sql"},
-    "params": {"variable": {}, "configuration": {}},  // optional
-    "source": {"scriptPath": "file:///1.hql"}, // optional, only used to record the code source
-    "labels": {
-        "engineType": "spark-2.4.3",  // specify the engine
-        "userCreator": "johnnwnag-IDE"  // specify the submitting user and system
-    }
-}
-```
-
-2. After receiving the request, Linkis-Gateway confirms the name of the microservice to route to according to the serviceName in the URI ``/api/rest_j/v1/${serviceName}/.+``. Here Linkis-Gateway parses out the microservice name as entrance and forwards the Job request to the Entrance microservice. Note that if the user specified a routing label, then during forwarding, the Entrance microservice instance carrying the corresponding label is selected, rather than forwarding randomly.
-
-3. After receiving the Job request, Entrance first briefly validates the legality of the request, then persists the Job information through an RPC call to JobHistory, wraps the Job request into a computation task, puts it into the scheduling queue, and waits for it to be consumed by a consumer thread.
-
-4. The scheduling queue opens one consumer queue and one consumer thread for each group. The consumer queue stores the preliminarily wrapped user computation tasks, and the consumer thread keeps taking computation tasks from the consumer queue for consumption in FIFO order. The current default grouping is Creator + User (i.e. submitting system + user); therefore, even for the same user, as long as the computation tasks are submitted by different systems, the actual consumer queues and consumer threads are completely different and fully isolated from each other (a sketch follows below). (Friendly reminder: users can modify the grouping algorithm as needed)
-
-5. After taking a computation task, the consumer thread submits it to Orchestrator, which officially enters the preparation phase.
-
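-A hedged sketch of the grouping described in step 4; the group key format and queue size are illustrative, and Linkis's real scheduler module has richer consumer abstractions:
-
-```java
-import java.util.Map;
-import java.util.concurrent.BlockingQueue;
-import java.util.concurrent.ConcurrentHashMap;
-import java.util.concurrent.LinkedBlockingQueue;
-
-public class GroupedFifoSketch {
-    // One FIFO queue per "creator_user" group, so different submitting systems stay isolated.
-    private final Map<String, BlockingQueue<Runnable>> queues = new ConcurrentHashMap<>();
-
-    public void submit(String creator, String user, Runnable task) throws InterruptedException {
-        String group = creator + "_" + user; // default grouping: Creator + User
-        BlockingQueue<Runnable> q = queues.computeIfAbsent(group, g -> {
-            LinkedBlockingQueue<Runnable> queue = new LinkedBlockingQueue<>(1000);
-            Thread consumer = new Thread(() -> {      // one consumer thread per group
-                try {
-                    while (true) queue.take().run();  // FIFO consumption
-                } catch (InterruptedException e) {
-                    Thread.currentThread().interrupt();
-                }
-            }, "consumer-" + g);
-            consumer.setDaemon(true);
-            consumer.start();
-            return queue;
-        });
-        q.put(task);
-    }
-}
-```
-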
-## 2. Preparation phase
-
-There are two main flows in the preparation phase: one is to apply to LinkisManager for a usable EngineConn for the subsequent submission and execution of the computation task; the other is that Orchestrator orchestrates the computation task submitted by Entrance, converting a user computation request into a physical execution tree, which is then handed to the execution phase of the third stage to actually submit for execution.
-
-#### 2.1 Applying to LinkisManager for a usable EngineConn
-
-If the user has a reusable EngineConn in LinkisManager, that EngineConn is directly locked and returned to Orchestrator, and the whole application flow ends.
-
-How is a reusable EngineConn defined? It is an EngineConn that matches all the label requirements of the computation task and whose own health status is Healthy (low load and actual EngineConn status Idle); all the EngineConns meeting these conditions are then sorted and selected according to rules, and finally the best EngineConn is locked.
-
-If the user has no reusable EngineConn, the EngineConn creation flow is triggered; for that flow, please refer to: [EngineConn Creation Flow](EngineConn新增流程.md).
-
-#### 2.2 Computation task orchestration
-
-Orchestrator is mainly responsible for orchestrating a computation task (JobReq) into a physical execution tree (PhysicalTree) that can actually be executed, and for providing the execution capability of the Physical tree.
-
-Here we first focus on Orchestrator's computation task orchestration capability, as shown below:
-
-![Orchestration flow chart](../Images/Architecture/Job提交准备执行流程/编排流程图.png)
-
-The main flow is as follows:
-
-- Converter (conversion): converts the JobReq (job request) submitted by the user into Orchestrator's ASTJob. This step performs parameter checks and information supplementation on the user's computation task, such as variable substitution;
-
-- Parser (parsing): parses the ASTJob, splitting it into an AST tree composed of ASTJob and ASTStage.
-
-- Validator (validation): performs checks and information supplementation on the ASTJob and ASTStage, such as code checks and supplementation of necessary Label information.
-
-- Planner (planning): converts an AST tree into a Logical tree. The Logical tree at this point is composed of LogicalTasks and contains all the execution logic of the whole computation task.
-
-- Optimizer (optimization phase): converts a Logical tree into a Physical tree and optimizes the Physical tree.
-
-In a Physical tree, many of the nodes are computation strategy logic; only the ExecTask in the middle actually wraps the execution logic that submits the user's computation task to the EngineConn, as shown below:
-
-![Physical tree](../Images/Architecture/Job提交准备执行流程/Physical树.png)
-
-Different computation strategies wrap different execution logic in the JobExecTask and StageExecTask of their Physical trees.
-
-For example, under the multi-active computation strategy, for a computation task submitted by a user, the execution logic of submitting it to the EngineConns of different clusters is wrapped in two ExecTasks, and the related multi-active strategy logic is reflected in the parent node of the two ExecTasks, StageExecTask(End).
-
-Take the multi-read scenario under the multi-active computation strategy as an example.
-
-In multi-read, only one ExecTask is actually required to return a result; the Physical tree can then be marked as executed successfully and the result returned. But the Physical tree only has the capability of executing in sequence according to dependencies and cannot terminate the execution of a node; and once a node is canceled or fails, the whole Physical tree would in fact be marked as failed. So StageExecTask(End) is needed to do some special processing, to ensure both that the other ExecTask can be canceled and that the result set produced by the successfully executed ExecTask is still passed upward, letting the Physical tree continue executing upward. This is the computation strategy execution logic that StageExecTask represents.
-
-The orchestration flow of Linkis Orchestrator has similarities with many SQL parsing engines (such as the SQL parsers of Spark and Hive). But in fact, Linkis Orchestrator is a parsing and orchestration capability implemented for the computation governance domain to meet users' different computation governance needs, while SQL parsing engines parse and orchestrate the SQL language. Here is a brief distinction:
-
-1. What Linkis Orchestrator mainly wants to solve is the orchestration needs that different computation tasks raise for computation strategies. For example: if a user wants multi-active capability, Orchestrator orchestrates, for the computation task the user submitted, a Physical tree based on the needs of the "multi-active" computation strategy, so as to submit and execute this computation task on multiple clusters; and in building the whole Physical tree, all possible abnormal scenarios have been fully considered and are already reflected in the Physical tree.
-
-2. The orchestration capability of Linkis Orchestrator is independent of programming language. In theory, for any engine already connected to Linkis, all the programming languages it supports can be orchestrated; a SQL parsing engine, by contrast, only cares about the parsing and execution of SQL, and is only responsible for parsing one SQL statement into an executable Physical tree and finally computing the result.
-
-3. Linkis Orchestrator also has SQL parsing capability, but SQL parsing is just one of the parsing implementations of Orchestrator Parser for the SQL language. The Parser of Linkis Orchestrator also considers introducing Apache Calcite to parse SQL, supporting splitting one user SQL that spans multiple computation engines (which must be engines already connected to Linkis) into multiple sub-SQLs, submitting them to the corresponding computation engines during the execution phase, and finally choosing a suitable computation engine for the aggregate computation.
-
-For a detailed introduction to Orchestrator's orchestration, please refer to: [Orchestrator Architecture Design](https://github.com/WeBankFinTech/Linkis-Doc/blob/master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_architecture_doc.md)
-
-After the parsing and orchestration of Linkis Orchestrator, the user's computation task has been converted into a Physical tree that can be executed. Orchestrator submits the Physical tree to Orchestrator's Execution module, entering the final execution phase.
-
-## 3. Execution phase
-
-The execution phase is mainly divided into the following two steps, which are the last two phases of capability provided by Linkis Orchestrator:
-
-![Flow chart of the execution phase](../Images/Architecture/Job提交准备执行流程/执行阶段流程图.png)
-
-The main flow is as follows:
-
-- Execution: parses the dependencies of the Physical tree and executes in order from the leaf nodes according to the dependencies.
-
-- Reheater: once a node of the Physical tree finishes executing, a reheat is triggered. Reheating allows the Physical tree to be dynamically adjusted according to its real-time execution situation and execution to continue. For example: if it is detected that a leaf node failed and supports retry (e.g. the failure was a thrown ReTryExecption), the Physical tree is automatically adjusted by adding, above that leaf node, a retry parent node with exactly the same content.
-
-Let us return to the Execution phase and focus on the execution logic of the ExecTask node that wraps the submission of the user's computation task to the EngineConn.
-
-1. As mentioned earlier, the first step of the preparation phase is to obtain a usable EngineConn from LinkisManager. After the ExecTask gets this EngineConn, it submits the user's computation task to the EngineConn through an RPC request.
-
-2. After receiving the computation task, the EngineConn asynchronously submits it to the underlying computing storage engine through a thread pool, and then immediately returns an execution ID.
-
-3. After the ExecTask gets this execution ID, it can later use it to asynchronously pull the execution situation of the computation task (such as status, progress, logs, result sets, etc.).
-
-4. At the same time, the EngineConn listens in real time to the execution of the underlying computing storage engine through multiple registered Listeners. If the computing storage engine does not support registering Listeners, the EngineConn starts a daemon thread for the computation task that periodically pulls the execution status from the computing storage engine.
-
-5. The EngineConn passes the pulled execution situation back in real time, through RPC requests, to the microservice where Orchestrator resides.
-
-6. After the Receiver of that microservice receives the execution situation, it broadcasts it through the ListenerBus; Orchestrator's Execution consumes the event and dynamically updates the execution situation of the Physical tree.
-
-7. The result sets produced by the computation task are written, on the EngineConn side, to storage media such as HDFS. What the EngineConn passes back through RPC is only the result set path. Execution consumes the event and broadcasts the obtained result set path through the ListenerBus, so that the Listener registered by Entrance with Orchestrator can consume the result set path and write it persistently to JobHistory.
-
-8. After the computation task on the EngineConn side finishes, through the same logic, Execution is triggered to update the status of that ExecTask node in the Physical tree, so that the Physical tree continues executing upward until the whole tree finishes. At this point Execution broadcasts the completion status of the computation task through the ListenerBus.
-
-9. After the Listener registered by Entrance with Orchestrator consumes the status event, it updates the Job status in JobHistory, and the whole task execution is complete.
-
-----
-
-Finally, let us look at how the Client side learns of the computation task's status change and obtains the computation result in time, as shown below:
-
-![Result retrieval flow](../Images/Architecture/Job提交准备执行流程/结果获取流程.png)
-
-The concrete flow is as follows:
-
-1. The Client side periodically polls Entrance to obtain the status of the computation task.
-
-2. Once it finds that the status has flipped to success, it sends a request for Job information to JobHistory and obtains all the result set paths
-
-3. Through the result set paths, it initiates requests to PublicService to query file contents, obtaining the contents of the result sets.
-
-At this point, the three phases of the whole Job - submission -> preparation -> execution - are complete.
diff --git "a/Linkis-Doc-master/zh_CN/Architecture_Documents/Linkis1.0\344\270\216Linkis0.X\347\232\204\345\214\272\345\210\253\347\256\200\350\277\260.md" "b/Linkis-Doc-master/zh_CN/Architecture_Documents/Linkis1.0\344\270\216Linkis0.X\347\232\204\345\214\272\345\210\253\347\256\200\350\277\260.md"
deleted file mode 100644
index 78d2d9d..0000000
--- "a/Linkis-Doc-master/zh_CN/Architecture_Documents/Linkis1.0\344\270\216Linkis0.X\347\232\204\345\214\272\345\210\253\347\256\200\350\277\260.md"
+++ /dev/null
@@ -1,98 +0,0 @@
-## 1. 简述
-
-&nbsp;&nbsp;&nbsp;&nbsp;  首先,Linkis1.0 架构下的 Entrance 和 EngineConnManager(原EngineManager)服务与 **引擎** 已完全无关,即:
-                             在 Linkis1.0 架构下,每个引擎无需再配套实现并启动对应的 Entrance 和 EngineConnManager,Linkis1.0 的每个 Entrance 和 EngineConnManager 都可以给所有引擎共用。
-                          
-&nbsp;&nbsp;&nbsp;&nbsp;  其次,Linkis1.0 新增了Linkis-Manager服务用于对外提供 AppManager(应用管理)、ResourceManager(资源管理,原ResourceManager服务)和 LabelManager(标签管理)的能力。
-
-&nbsp;&nbsp;&nbsp;&nbsp;  然后,为了降低大家实现和部署一个新引擎的难度,Linkis 1.0 重新架构了一个叫 EngineConnPlugin 的模块,每个新引擎只需要实现 EngineConnPlugin 接口即可,
-Linkis EngineConnPluginServer 支持以插件的形式动态加载 EngineConnPlugin(新引擎),一旦 EngineConnPluginServer 加载成功,EngineConnManager 便可为用户快速启动一个该引擎实例。
-                          
-&nbsp;&nbsp;&nbsp;&nbsp;  最后,对Linkis的所有微服务进行了归纳分类,总体分为了三个大层次:公共增强服务、计算治理服务和微服务治理服务,从代码层级结构、微服务命名和安装目录结构等多个方面来规范Linkis1.0的微服务体系。
-
-
-##  2. 主要特点
-
-1.  **强化计算治理**,Linkis1.0主要从引擎管理、标签管理、ECM管理和资源管理等几个方面,全面强化了计算治理的综合管控能力,基于标签化的强大管控设计理念,使得Linkis1.0向多IDC化、多集群化、多容器化,迈出了坚实的一大步。
-
-2.  **简化用户实现新引擎**,EnginePlugin用于将原本实现一个新引擎,需要实现的相关接口和类,以及需要拆分的Entrance-EngineManager-Engine三层模块体系,融合到了一个接口之中,简化用户实现新引擎的流程和代码,真正做到只要实现一个类,就能接入一个新引擎。
-
-3.  **全栈计算存储引擎支持**,实现对计算请求场景(如Spark)、存储请求场景(如HBase)和常驻集群型服务(如SparkStreaming)的全面覆盖支持。
-
-4.  **高级计算策略能力改进**,新增Orchestrator实现丰富计算任务管理策略,且支持基于标签的解析和编排。
-
-5.  **安装部署改进**  优化一键安装脚本,支持容器化部署,简化用户配置。
-
-## 3. 服务对比
-
-&nbsp;&nbsp;&nbsp;&nbsp;  请参考以下两张图:
-
-&nbsp;&nbsp;&nbsp;&nbsp;  Linkis0.X 微服务列表如下:
-
-![Linkis0.X服务列表](./../../en_US/Images/Architecture/Linkis0.X-services-list.png)
-
-&nbsp;&nbsp;&nbsp;&nbsp;  Linkis1.0 微服务列表如下:
-
-![Linkis1.0服务列表](./../../en_US/Images/Architecture/Linkis1.0-services-list.png)
-
-&nbsp;&nbsp;&nbsp;&nbsp;  从上面两个图中看,Linkis1.0 将服务分为了三类服务:计算治理(英文缩写CG)/微服务治理(MG)/公共增强服务(PS)。其中:
-
-1. 计算治理的一大变化是,Entrance 和 EngineConnManager服务与引擎再不相关,实现一个新引擎只需实现 EngineConnPlugin插件即可,EngineConnPluginServer会动态加载 EngineConnPlugin 插件,做到引擎热插拔式更新;
-
-2. 计算治理的另一大变化是,LinkisManager作为 Linkis 的管理大脑,抽象和定义了 AppManager(应用管理)、ResourceManager(资源管理)和LabelManager(标签管理);
-
-3. 微服务治理服务,将0.X部分的Eureka和Gateway服务进行了归并统一,并对Gateway服务进行了功能增强,支持按照Label进行路由转发;
-
-4. 公共增强服务,主要将0.X部分的BML服务/上下文服务/数据源服务/公共服务进行了优化和归并统一,便于大家管理和查看。
-
-## 4. Introduction to Linkis Manager
-
-&nbsp;&nbsp;&nbsp;&nbsp;  As the management brain of Linkis, Linkis Manager is mainly composed of AppManager, ResourceManager and LabelManager.
-
-&nbsp;&nbsp;&nbsp;&nbsp;  ResourceManager not only retains Linkis0.X's resource management for Yarn and Linkis EngineManager, but also provides label-based multi-level resource allocation and recycling, giving it full resource management across clusters and across computation resource types;
-
-&nbsp;&nbsp;&nbsp;&nbsp;  AppManager centrally manages all EngineConnManagers and EngineConns; the full EngineConn lifecycle, including request, reuse, creation, switching and destruction, is handed over to AppManager;
-
-&nbsp;&nbsp;&nbsp;&nbsp;  LabelManager, based on multi-level combined labels, provides cross-IDC and cross-cluster routing and governance of EngineConns and EngineConnManagers;
-
-## 5. Introduction to Linkis EngineConnPlugin
-
-&nbsp;&nbsp;&nbsp;&nbsp;  EngineConnPlugin mainly lowers the cost of connecting and deploying new computation/storage engines, truly achieving "implement one class to connect a brand-new computation/storage engine; run one script to quickly deploy a brand-new engine".
-
-### 5.1 New engine implementation comparison
-
-&nbsp;&nbsp;&nbsp;&nbsp;  The following are the interfaces and classes a user had to implement for a new engine in Linkis0.X:
-
-![How Linkis0.X implements a brand-new engine](./../../en_US/Images/Architecture/Linkis0.X-NewEngine-architecture.png)
-
-&nbsp;&nbsp;&nbsp;&nbsp;  The following are the interfaces and classes a user needs to implement for a new engine in Linkis1.0.0:
-
-![How Linkis1.0 implements a brand-new engine](./../../en_US/Images/Architecture/Linkis1.0-NewEngine-architecture.png)
-
-&nbsp;&nbsp;&nbsp;&nbsp;  Among them, EngineConnResourceFactory and EngineLaunchBuilder are optional interfaces; only EngineConnFactory is mandatory. A sketch follows.
-
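-The minimal sketch below illustrates the one mandatory interface. The trait shapes are simplified from the real Linkis interfaces, and the "Demo" names are hypothetical.
-
-```scala
-// Simplified stand-ins for the real EngineConnPlugin SPI.
-trait EngineConn { def getEngineConnSession: Any }
-trait EngineConnFactory {
-  def createEngineConn(options: Map[String, String]): EngineConn
-}
-
-class DemoEngineConnFactory extends EngineConnFactory {
-  override def createEngineConn(options: Map[String, String]): EngineConn = {
-    val session = new Object // open the session/connection to the underlying engine here
-    new EngineConn { override def getEngineConnSession: Any = session }
-  }
-}
-```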
-### 5.2 New engine startup flow
-
-&nbsp;&nbsp;&nbsp;&nbsp;  EngineConnPlugin provides a Server service for starting and loading all engine plugins. The full flow of a new engine starting up and accessing EngineConnPlugin-Server is shown below:
-
-![Linkis engine startup flow](./../../en_US/Images/Architecture/Linkis1.0-newEngine-initialization.png)
-
-## 6. Introduction to Linkis EngineConn
-
-&nbsp;&nbsp;&nbsp;&nbsp;  EngineConn, the former Engine module, is the actual unit through which Linkis connects to and interacts with the underlying computation/storage engines, and is the foundation of Linkis's computation and storage capabilities.
-
-&nbsp;&nbsp;&nbsp;&nbsp;  The EngineConn of Linkis1.0 is mainly composed of EngineConn and Executor. Specifically:
-
-a)	EngineConn is the connector, holding the session information between the engine and the concrete cluster. It acts only as a connection, a client; it does not actually execute computation.
-
-b)	Executor is the executor: the real execution unit for computation scenarios and the actual unit of computation logic. It also abstracts the engine's various concrete capabilities, such as locking, status access and log retrieval.
-
-c)	Executors are created from the session information in EngineConn. One engine type can support several kinds of computation tasks, each corresponding to an Executor implementation, and computation tasks are submitted to the corresponding Executor for execution.
-This way, the same engine can provide different services for different computation scenarios; for example, a resident engine does not need locking after startup, and a once-only engine does not need to support Receiver or status access after startup.
-
-d)	The benefit of separating Executor from EngineConn is that it keeps the Receiver free of business logic, retaining only RPC communication. Services are spread across multiple Executor modules and abstracted into several engine categories that may be needed: interactive computation engines, streaming engines, once-only engines and so on, forming a unified engine framework that is easy to extend later.
-Different engine types can then load only the capabilities they need, greatly reducing redundancy in engine implementations.
-
-&nbsp;&nbsp;&nbsp;&nbsp;  As shown in the figure below:
-
-![Linkis EngineConn architecture diagram](./../../en_US/Images/Architecture/Linkis1.0-EngineConn-architecture.png)
\ No newline at end of file
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Microservice_Governance_Services/Gateway.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Microservice_Governance_Services/Gateway.md
deleted file mode 100644
index f84d9dd..0000000
--- a/Linkis-Doc-master/zh_CN/Architecture_Documents/Microservice_Governance_Services/Gateway.md
+++ /dev/null
@@ -1,30 +0,0 @@
-## Gateway Architecture Design
-
-#### Overview
-The Gateway is the primary entry point through which Linkis accepts client and external requests: for example, it receives job execution requests and forwards them to the matching Entrance service.
-The bottom layer of the architecture is an extension of SpringCloudGateway; on top of it sit modules for HTTP request parsing, session authentication, label routing and WebSocket multiplexed forwarding. The overall architecture is shown below.
-
-### Overall architecture diagram
-
-![Gateway overall architecture diagram](../../Images/Architecture/Gateway/gateway_server_global.png)
-
-#### Architecture notes
-- gateway-core: Gateway's core interface definition module. It mainly defines the GatewayParser and GatewayRouter interfaces, for request parsing and request-based route selection respectively; it also provides the SecurityFilter utility class for permission checking.
-- spring-cloud-gateway: this module integrates all SpringCloudGateway-related dependencies and handles and forwards requests of both the HTTP and WebSocket protocol types.
-- gateway-server-support: Gateway's service driver module. It depends on the spring-cloud-gateway module and implements GatewayParser and GatewayRouter; among them, DefaultLabelGatewayRouter provides label-based request routing.
-- gateway-httpclient-support: provides a generic client class for accessing the Gateway service over HTTP; multiple implementations can be built on top of it.
-- instance-label: an external instance-label module that provides the InsLabelService interface, used to create routing labels and associate them with application instances.
-
-The detailed designs involved are as follows:
-
-#### 1. Request routing and forwarding (with label information)
-The request chain is first dispatched by SpringCloudGateway's Dispatcher, then enters the gateway's filter chain and the two main filters, GatewayAuthorizationFilter and SpringCloudGatewayWebsocketFilter. The filters integrate DefaultGatewayParser and DefaultGatewayRouter.
-From Parser to Router, the corresponding parse and route methods are executed. DefaultGatewayParser and DefaultGatewayRouter also contain custom Parsers and Routers internally, executed in priority order. Finally, DefaultGatewayRouter outputs the ServiceInstance selected by routing, which the upper layer forwards to.
-Taking the forwarding of a job execution request with label information as an example, the flow is drawn below (a router sketch follows the figure):  
-![Gateway request routing and forwarding](../../Images/Architecture/Gateway/gateway_server_dispatcher.png)
-
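-The sketch below illustrates how a custom router slots into that chain. The trait shape is simplified from gateway-core's GatewayRouter; the real signatures differ.
-
-```scala
-// Simplified stand-ins for the gateway-core interfaces.
-case class ServiceInstance(applicationName: String, instance: String)
-trait GatewayContext // carries the parsed request and its labels in the real module
-
-trait GatewayRouter {
-  def priority: Int // custom routers run in priority order inside DefaultGatewayRouter
-  def route(ctx: GatewayContext, candidates: Array[ServiceInstance]): ServiceInstance
-}
-
-class SimpleLabelGatewayRouter extends GatewayRouter {
-  override val priority = 100
-  override def route(ctx: GatewayContext,
-                     candidates: Array[ServiceInstance]): ServiceInstance =
-    // a real router would match the request's labels against instance labels here
-    candidates.headOption.orNull
-}
-```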
-
-#### 2. WebSocket connection forwarding management
-By default, SpringCloudGateway performs only a single route-forward for a WebSocket request and cannot switch dynamically. Under the Linkis Gateway architecture, however, each message exchange carries a uri address that guides routing to different backend services.
-Besides the webSocketService responsible for connections with the frontend/clients and the webSocketClient responsible for connections with backend services, a list of GatewayWebSocketSessionConnections is cached in between; one GatewayWebSocketSessionConnection represents the connections between one session and multiple backend ServiceInstances.  
-![Gateway WebSocket forwarding management](../../Images/Architecture/Gateway/gatway_websocket.png)
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Microservice_Governance_Services/README.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Microservice_Governance_Services/README.md
deleted file mode 100644
index a5bbc92..0000000
--- a/Linkis-Doc-master/zh_CN/Architecture_Documents/Microservice_Governance_Services/README.md
+++ /dev/null
@@ -1,23 +0,0 @@
-## **Background**
-
-Microservice governance comprises three main microservices: Gateway, Eureka and Open Feign. It addresses Linkis's service discovery and registration, unified gateway, request forwarding, inter-service communication and load balancing. Linkis1.0 will also provide support for Nacos. Linkis as a whole is a full microservice architecture, and every business flow requires multiple microservices to cooperate.
-
-## **Architecture Diagram**
-
-![](../../Images/Architecture/linkis-microservice-gov-01.png)
-
-## **Architecture Description**
-
-1. Linkis Gateway, as the gateway entry of Linkis, mainly takes on request forwarding, user access authentication and WebSocket communication. The Linkis1.0 Gateway also adds label-based routing and forwarding. Linkis implements a WebSocket routing forwarder in Spring Cloud Gateway to establish WebSocket connections with clients; once a connection is established, it automatically analyzes the client's WebSocket requests, uses rules to decide which backend microservice each request should be forwarded to, and forwards the WebSocket request to the corresponding backend microservice instance.
-
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[Enter Linkis Gateway](Gateway.md)
-
-2. Linkis Eureka
-is mainly responsible for service registration and discovery. Eureka consists of multiple instances (service instances), which fall into two kinds: Eureka Server and Eureka Client. For ease of understanding, Eureka Clients are further divided into Service Providers and Service Consumers. Eureka Server provides service registration and discovery: a Service Provider registers its own service with Eureka so that Service Consumers can find it, and a Service Consumer obtains the list of registered services from Eureka so that it can consume them.
-
-3. Linkis implements its own underlying RPC communication scheme based on Feign. As the underlying communication layer, Linkis RPC provides an SDK that is integrated into any microservice that needs it. A microservice can be both a request caller and a request receiver: as a caller, it requests the target microservice's Receiver through a Sender; as a receiver, it provides a Receiver to handle requests sent by callers' Senders, completing synchronous or asynchronous responses. (A Sender sketch follows.)
-   
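-A hedged sketch of the Sender side is shown below. The Sender entry points (getSender/ask/send) follow Linkis RPC, though the package path varies by version; the service name and request case class are made-up examples.
-
-```scala
-import com.webank.wedatasphere.linkis.rpc.Sender // package is org.apache.linkis.rpc in later versions
-
-case class EchoRequest(msg: String) // hypothetical request protocol class
-
-val sender = Sender.getSender("linkis-demo-service") // locate the receiver microservice by name
-sender.send(EchoRequest("fire-and-forget"))          // asynchronous, no response expected
-val reply = sender.ask(EchoRequest("hello"))         // synchronous request/response
-```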
-   ![](../../Images/Architecture/linkis-microservice-gov-03.png)
\ No newline at end of file
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Computation_Orchestrator_architecture.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Computation_Orchestrator_architecture.md
deleted file mode 100644
index 6787bb4..0000000
--- a/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Computation_Orchestrator_architecture.md
+++ /dev/null
@@ -1,18 +0,0 @@
-## **Computation-Orchestrator Architecture**
-
-### **1. Computation-Orchestrator concept**
-
-Computation-Orchestrator is the standard implementation of Orchestrator and supports task orchestration for interactive engines. It provides common implementations of Converter, Parser, Validator, Planner, Optimizer, Execution and Reheater. Computation-Orchestrator connects with AM and is responsible for interactive task execution; it can connect with Entrance, or directly with other task submission ends such as IOClient. Computation-Orchestrator supports both synchronous and asynchronous task submission, and supports obtaining multiple Sessions for isolation.
-
-### **2. Computation-Orchestrator architecture**
-
-When Entrance submits a task to Computation-Orchestrator for execution, it also registers Listeners for logs, progress and result sets. While the task runs, received task logs and task progress trigger the registered listeners, returning task information to Entrance. When the task finishes, a result-set Response is generated and the result-set Listener is invoked. Orchestrator also supports Entrance submitting tasks bound to a single EngineConn, implemented by adding a BindEngineLabel to the task.
-
-![](../../Images/Architecture/orchestrator/computation-orchestrator/linkis-computation-orchestrator-01.png)
-
-### **3. Computation-Orchestrator execution flow**
-
-The Computation-Orchestrator execution flow is shown in the figure below
-
-![](../../Images/Architecture/orchestrator/computation-orchestrator/linkis-computation-orchestrator-02.png)
-
diff --git "a/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Images/1.0\344\270\255\347\224\250\346\210\267\351\234\200\345\256\236\347\216\260\347\232\204\346\216\245\345\217\243\345\222\214\347\261\273.png" "b/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Images/1.0\344\270\255\347\224\250\346\210\267\351\234\200\345\256\236\347\216\260\347\232\204\346\216\245\345\217\243\345\222\214\347\261\273.png"
deleted file mode 100644
index 4830d0f..0000000
Binary files "a/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Images/1.0\344\270\255\347\224\250\346\210\267\351\234\200\345\256\236\347\216\260\347\232\204\346\216\245\345\217\243\345\222\214\347\261\273.png" and /dev/null differ
diff --git "a/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Images/\344\272\244\344\272\222\346\265\201\347\250\213.png" "b/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Images/\344\272\244\344\272\222\346\265\201\347\250\213.png"
deleted file mode 100644
index 9e76bdd..0000000
Binary files "a/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Images/\344\272\244\344\272\222\346\265\201\347\250\213.png" and /dev/null differ
diff --git "a/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Images/\347\233\270\345\205\263\346\216\245\345\217\243\345\222\214\347\261\273.png" "b/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Images/\347\233\270\345\205\263\346\216\245\345\217\243\345\222\214\347\261\273.png"
deleted file mode 100644
index 0c20d81..0000000
Binary files "a/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Images/\347\233\270\345\205\263\346\216\245\345\217\243\345\222\214\347\261\273.png" and /dev/null differ
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_CheckRuler.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_CheckRuler.md
deleted file mode 100644
index 6c89f13..0000000
--- a/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_CheckRuler.md
+++ /dev/null
@@ -1,27 +0,0 @@
-CheckRuler Architecture Design
-======
-
-CheckRulers are rules applied before Converter and Validator to check the legality and completeness of the passed parameters. Besides the few built-in mandatory rulers, the rest can be implemented according to the user's own needs.
-
-**Convert stage:**
-
-| Class name                               | Parent class         | Purpose                                             |
-|------------------------------------------|----------------------|-----------------------------------------------------|
-| JobReqParamCheckRuler                    | ConverterCheckRulter | Checks the completeness of submitted job parameters |
-| PythonCodeConverterCheckRuler            | ConverterCheckRulter | Python code conformance check                       |
-| ScalaCodeConverterCheckRuler             | ConverterCheckRulter | Scala code conformance check                        |
-| ShellDangerousGrammarConverterCheckRuler | ConverterCheckRulter | Shell script code conformance check                 |
-| SparkCodeCheckConverterCheckRuler        | ConverterCheckRulter | Spark code conformance check                        |
-| SQLCodeCheckConverterCheckRuler          | ConverterCheckRulter | SQL code conformance check                          |
-| SQLLimitConverterCheckRuler              | ConverterCheckRulter | SQL code length check                               |
-| VarSubstitutionConverterCheckRuler       | ConverterCheckRulter | Variable substitution rule check                    |
-
-**Validator stage:**
-
-| Class name                    | Parent class           | Purpose                            |
-|-------------------------------|------------------------|------------------------------------|
-| LabelRegularCheckRuler        | ValidatorCheckRuler    | Legality check of the Job's labels |
-| DefaultLabelRegularCheckRuler | LabelRegularCheckRuler | Implementation class               |
-| RouteLabelRegularCheckRuler   | LabelRegularCheckRuler | Implementation class               |
-
-To define a new validation rule for the validator stage and validate additional label types, simply extend LabelRegularCheckRuler and override the customLabel value, as sketched below.
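-A sketch of such a custom rule, with a stand-in for the real base class and a hypothetical "tenant" label:
-
-```scala
-// Simplified stand-in for the real Orchestrator base class.
-abstract class LabelRegularCheckRuler { val customLabel: List[String] = Nil }
-
-// Require every job to carry a (hypothetical) "tenant" label.
-class TenantLabelCheckRuler extends LabelRegularCheckRuler {
-  override val customLabel: List[String] = List("tenant")
-}
-```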
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_ECMP_architecture.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_ECMP_architecture.md
deleted file mode 100644
index 6ea3abf..0000000
--- a/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_ECMP_architecture.md
+++ /dev/null
@@ -1,32 +0,0 @@
-EngineConnPlugin Architecture Design
-------------------------
-
-EngineConnPlugin merges the interfaces and classes that previously had to be implemented for a new engine, together with the three-layer Entrance-EngineManager-Engine module split, into a single interface, simplifying the process and code of implementing a new engine: implement one class, and a new engine is connected.
-
-### EngineConnPlugin architecture implementation
-
-1. Pain points and reflections on Linkis 0.X
-
-The Linkis 0.X version had no concept of a Plugin: to add an engine, a user had to implement the Entrance, EngineManager and Engine interfaces at the same time, which made development, maintenance and modification costly and complex.
-
-The following are the interfaces and classes a user had to implement for a new engine in Linkis0.X:
-
-![](Images/相关接口和类.png)
-
-2. Improvements in the new version
-
-Linkis 1.0 restructures the whole flow from engine creation to task execution. Entrance is simplified into one service that connects with different Engines via labels, and EngineManager is also reduced to one. An Engine is defined as an EngineConn connector plus an Executor, abstracted into multiple services and modules from which users flexibly pick what they need. This greatly reduces the development and maintenance effort of adding engines. In addition, the plugin dynamically uploads the engine's lib and conf to BML for version management.
-
-The following are the interfaces and classes a user needs to implement for a new engine in Linkis1.0.0:
-
-![](Images/1.0中用户需实现的接口和类.png)
-
-Among them, EngineConnResourceFactory and EngineLaunchBuilder are optional interfaces; only EngineConnFactory is mandatory.
-
-### EngineConnPlugin interaction flow
-
-EngineConnPlugin provides a Server service for starting and loading all engine plugins. The full flow of a new engine starting up and accessing EngineConnPlugin-Server is shown below:
-
-![](Images/交互流程.png)
\ No newline at end of file
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_Execution_architecture_doc.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_Execution_architecture_doc.md
deleted file mode 100644
index 1bf3e5f..0000000
--- a/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_Execution_architecture_doc.md
+++ /dev/null
@@ -1,19 +0,0 @@
-Orchestrator-Execution Architecture Design
-===
-
-
-## 1. Execution concept
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;The Orchestrator-Execution module is Orchestrator's execution module, used to schedule and execute the orchestrated PhysicalTree; execution proceeds by dependency, starting from the JobEndExecTask. Execution is invoked through Orchestration's synchronous and asynchronous execution entry points. Execution then schedules the RootExecTask (the root node of the PhysicalTree), coordinates the run of the tree's ExecTasks, and wraps all ExecTask execution responses into the returned result. Execution runs in an asynchronous producer-consumer model.
-
-## 2. Execution architecture
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;After Execution accepts a RootExecTask for execution, it hands the RootExecTask to the TaskManager for scheduling (production); the TaskConsumer then fetches from the TaskManager the tasks whose dependencies are now satisfied and consumes them, submitting each runnable ExecTask to the TaskScheduler for execution.
-
-![execution](../../Images/Architecture/orchestrator/execution/execution.png)
-
-Both asynchronous and synchronous execution are scheduled asynchronously through the flow above; synchronous execution additionally calls the ExecTask's waitForCompleted method to obtain the synchronous response. Throughout execution, the ExecTask's status, result sets, logs and other information are delivered and notified through the ListenerBus.
-
-## 3. Execution overall flow
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Execution's overall flow is shown below, taking the interactive execution (ComputationExecution) flow as an example:
-
-![execution01](../../Images/Architecture/orchestrator/execution/execution01.png)
-
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_Operation_architecture_doc.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_Operation_architecture_doc.md
deleted file mode 100644
index 94fd889..0000000
--- a/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_Operation_architecture_doc.md
+++ /dev/null
@@ -1,26 +0,0 @@
-Orchestrator-Operation Architecture Design
-===
-
-## 1. Operation concept
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;An Operation extends the extra actions that can be taken on a task during asynchronous execution. After invoking an Orchestration's asynchronous execution, the caller obtains an OrchestrationFuture, which only provides task-manipulation methods such as cancel, waitForCompleted and getResponse. But when we need to fetch task logs or progress, or pause a task, there is no entry point to call. That is the original intent of Operation: to expose additional, extensible capabilities over asynchronously running tasks.
-
-
-## 2. Operation class diagram
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Operation follows a user-extension model: when users need a new operation, they only need to implement the corresponding class against our Operation interface and register it with Orchestrator; no underlying code changes are needed to gain the operation. The overall class diagram is as follows:
-
-![operation_class](../../Images/Architecture/orchestrator/operation/operation_class.png)
-
-
-## 3. Using an Operation
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Using an Operation involves two main steps: first registering the Operation, then invoking it (a sketch follows this list):
-1. Registration: implement the Operation implementation class against the Operation interface from chapter 2, then register it through the `OrchestratorSessionBuilder`; the SessionState inside an OrchestratorSession created by that `OrchestratorSessionBuilder` then holds the Operation;
-2. An Operation can only be used after orchestration is completed through the OrchestratorSession and the Orchestration's asynchronous execution method asyncExecute has been called to obtain an OrchestrationFuture;
-3. Then, using the Operation's name, e.g. "LOG" for logs, call `OrchestrationFuture.operate("LOG")` to perform the operation and obtain the Operation's return object.
-
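-A hedged sketch of these steps (method names follow the doc; the exact signatures and the stub traits are assumptions):
-
-```scala
-// Minimal stand-in shapes so the sketch is self-contained.
-trait LogProcessor { def registerLogNotifier(f: String => Unit): Unit }
-trait OrchestrationFuture { def operate[T](name: String): T }
-trait Orchestration { def asyncExecute(): OrchestrationFuture }
-trait OrchestratorSession { def orchestrate(jobReq: AnyRef): Orchestration }
-
-def runWithLogs(session: OrchestratorSession, jobReq: AnyRef): Unit = {
-  // 1. the Operation is assumed to be registered via OrchestratorSessionBuilder beforehand
-  val orchestration = session.orchestrate(jobReq)
-  val future = orchestration.asyncExecute()       // 2. async execution -> OrchestrationFuture
-  val logs = future.operate[LogProcessor]("LOG")  // 3. obtain the Operation handle by name
-  logs.registerLogNotifier(line => println(line)) // consume logs as they are pushed
-}
-```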
-## 4. Operation example
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;The log operation serves as the example here; LogOperation's definition is covered in chapter 2, and it implements both the Operation and TaskLogListener interfaces. The overall log retrieval flow is:
-1. When Orchestrator receives a task log, it pushes an event through the listenerBus to LogOperation for consumption;
-2. When LogOperation obtains the log, it calls the log processor LogProcessor to write the log (writeLog); the caller obtains that LogProcessor by calling `OrchestrationFuture.operate("LOG")`;
-3. LogProcessor exposes logs to the outside in two ways. The first is notification mode: external callers can register log listener methods with the log processor, and when the processor's writeLog method is called, all listeners are notified;
-4. The second is active pull mode: logs are fetched by calling the LogProcessor's getLog method.
-
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_Reheater_architecture.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_Reheater_architecture.md
deleted file mode 100644
index 0eba15a..0000000
--- a/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_Reheater_architecture.md
+++ /dev/null
@@ -1,12 +0,0 @@
-## **Orchestrator Reheater Architecture**
-
-### **1. Reheater concept**
-
-The Orchestrator-Reheater module is Orchestrator's replay module, used during execution to dynamically adjust a JobGroup's execution plan by dynamically adding Jobs, Stages and Tasks to the JobGroup, thereby avoiding subtask failures caused by network and other issues. Currently the main one is the task-related TaskReheater, which includes the retry-type RetryTaskReheater.
-
-### **2. Reheater architecture diagram**
-
-![](../../Images/Architecture/orchestrator/reheater/linkis-orchestrator-reheater-01.png)
-
-During task execution the Reheater receives ReheaterEvents and adjusts the orchestrated PhysicalTree accordingly, dynamically adding Jobs, Stages and Tasks. Commonly used today is the TaskReheater, which includes the retry-type RetryTaskReheater, the switch-type SwitchTaskReheater, and the PlaybackWrittenTaskReheater, which writes the task information of failed tasks into the PlaybackService.
-
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_Transform_architecture.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_Transform_architecture.md
deleted file mode 100644
index bbf0ef3..0000000
--- a/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_Transform_architecture.md
+++ /dev/null
@@ -1,12 +0,0 @@
-## **Orchestrator-Transform Architecture**
-
-### **1. Transform concept**
-
-Orchestrator defines the structures of the different stages of task scheduling and orchestration, from the ASTTree to the LogicalTree and then to the PhysicalTree. Converting between these structures requires the Transform module. The Transform module defines the conversion process; Convert invokes the various Transforms to convert and generate the task structures.
-
-## **2. Transform architecture**
-
-Transform is embedded throughout the conversion process. From Parser to Execution, Transform implementation classes sit between the stages, converting the initial JobReq into the ASTTree, LogicalTree and PhysicalTree in turn; the PhysicalTree is then submitted to Execution for execution.
-
-![](../../Images/Architecture/orchestrator/transform/linkis-orchestrator-transform-01.png)
-
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_architecture_doc.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_architecture_doc.md
deleted file mode 100644
index c4b14ad..0000000
--- a/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_architecture_doc.md
+++ /dev/null
@@ -1,113 +0,0 @@
-Orchestrator Overall Architecture Design
-===
-
-## 1. Orchestrator concept
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Orchestrator, the computation orchestration module, is where the core value of Linkis1.0 is realized. Built on Orchestrator, Linkis can support the full engine stack plus rich computation strategies; by orchestrating the tasks users submit, it can support strategy types such as dual-read, dual-write and AB. Combined with labels, it can support many task scenarios:
-- When the Orchestrator module is combined with Entrance, it supports the 0.X interactive computation scenario;
-- When the Orchestrator module is combined with the EngineConn engine connector, it supports resident and once-only job scenarios;
-- When the Orchestrator module is integrated with Linkis-Client, it acts as a RichClient and supports storage-type job scenarios, such as dual-read/dual-write for HBase;
-
-![Orchestrator01](../../Images/Architecture/orchestrator/overall/Orchestrator01.png)
-
-## 2. Orchestrator overall architecture:
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Orchestrator's overall orchestration architecture follows the architecture of Apache Calcite and divides the orchestration of a task into the following steps:
-- Converter: converts the JobReq (task request) submitted by the user into an orchestration Job; this step performs parameter checks on the submitted Job and supplements information, e.g. variable substitution
-- Parser: parses the Job, splitting out and wrapping the Job's Stage information, forming the AST tree
-- Validator: checks the Job and Stage information, e.g. mandatory Label information checks
-- Planner: converts the AST-stage Job and Stage objects into a Logical plan, forming the Logical tree; Job and Stage are each converted into LogicalTasks, and the execution units are wrapped as LogicalTasks, e.g. the interactive CodeLogicalUnit is converted into a CodeLogicalUnitTask
-- Optimizer: converts the Logical Tree into the Physical Tree and optimizes the tree, e.g. cache-hit optimization
-- Execution: schedules and executes the physical plan's Physical Tree, executing by dependency
-- Reheater: detects retryable failed Tasks during the execution stage (e.g. ReTryExecption) and adjusts the physical plan for re-execution
-- Plugins: the plugin module, mainly used by Orchestrator to connect to external modules, e.g. EngineConnManagerPlugin connects LinkisManager and EngineConn to request engines and execute tasks
-
-![Orchestrator_arc](../../Images/Architecture/orchestrator/overall/Orchestrator_arc.png)
-
-## 3. Orchestrator entity flow:
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;During orchestration, Orchestrator mainly converts the input JobReq through three stages: AST, Logical and Physical; what ultimately executes are the Physical-stage ExecTasks. The whole process is as follows:
-
-![orchestrator_entity](../../Images/Architecture/orchestrator/overall/orchestrator_entity.png)
-
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;The interactive scenario serves as a brief illustration below. Taking an interactive Job with codeLogicalUnit `select * from test` as an example, the tree at each stage is visualized:
-1. AST stage: the structure after the Parser parses the ASTJob. Job and Stage are associated by attributes: the Job has getStage information and the Stage has Job information; the relation is not determined by parents and children (both parents and children are null):
-
-![Orchestrator_ast](../../Images/Architecture/orchestrator/overall/Orchestrator_ast.png)
-
-2. Logical stage: the structure after Plan converts the ASTJob, containing Job/Stage/CodeTask; a tree structure exists, relations are determined by parents and children, and start and end are determined by Desc:
-
-![Orchestrator_Logical](../../Images/Architecture/orchestrator/overall/Orchestrator_Logical.png)
-
-3. Physical stage: the structure after the Optimizer's conversion, containing Job/Stage/Code ExecTask; a tree structure exists, relations are determined by parents and children, and start and end are determined by Desc:
-
-![Orchestrator_Physical](../../Images/Architecture/orchestrator/overall/Orchestrator_Physical.png)
-
-## 4. Orchestrator Core modules in detail
-
-### 4.1 Converter module:
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; The Converter mainly converts a JobReq into a Job and performs checks and supplementation on the JobReq, including parameter checks and variable supplementation. A JobReq is a job actually submitted by a user. It can be an interactive job (Orchestrator then integrates with Entrance to provide interactive access), a resident/once-only job (Orchestrator then integrates with EngineConn to provide execution capability directly), or a storage job, in which case Orchestrator integrates with the Client and connects directly with EngineConn. Correspondingly, JobReq has many implementation classes, divided by scenario type into ComputationJobReq (interactive), ClusteredJobReq (resident) and StorageJobReq (storage).
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; The responsibilities of Orchestrator and Entrance should be distinguished here: in general, Orchestrator is a required unit for RichClient, Entrance and EngineConn, whereas Entrance is not required, so the Converter provides a series of check/intercept units for custom variable substitution and for supplementing CS-related files and custom variables.
-
-### 4.2 Parser module:
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; The Parser mainly parses a Job into multiple Stages. The AstTree generated in the Parser stage differs by computation strategy: for the ordinary interactive computation strategy the Parser parses the Job into a single Stage, while under strategies such as dual-read and dual-write the Job is parsed into multiple Stages, each performing the same operation against a different cluster.
-
-### 4.3 Validator module:
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Before the plan generates executable Tasks, the AstTree must first pass through the Validator. The Validator mainly checks the legality of the AST-stage Job and Stage and supplements some required information, e.g. checking and supplementing required label information.
-
-### 4.4 Planner module
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; The Planner module converts the AST-stage Job and Stage into the corresponding LogicalTasks, forming the LogicalTree. The Planner constructs the LogicalTree, parsing the Job into JobEndTask and JobStartTask, the Stage into StageEndTask and StageStartTask, and converting the actual execution units into concrete LogicalTasks (e.g. the interactive CodeLogicalUnit becomes a CodeLogicalUnitTask). As shown below:
-
-![Orchestrator_Logical](../../Images/Architecture/orchestrator/overall/Orchestrator_Logical.png)
-
-### 4.5 Optimizer module
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; The Optimizer is Orchestrator's optimizer, mainly used to optimize the conversion of the whole LogicalTree into the PhysicalTree's ExecTasks. By optimization type, the Optimizer works in two steps: the first optimizes the LogicalTree, and the second converts the LogicalTree. The optimization strategies already implemented are:
-- CacheTaskOptimizer (TaskOptimizer level): decides whether an ExecTask can use a cached execution result; on a cache hit, the Tree is adjusted.
-- YarnQueueOptimizer (TaskOptimizer level): if the queue the user submitted to is currently short on resources and the user has another idle queue available, the optimization is made for the user automatically.
-- PlaybackOptimizer (TaskOptimizer level): mainly supports playback. That is, in multi-write, if a cluster has tasks pending playback, a batch of tasks is first played back according to the task latency requirement in order to catch up. The task is also analyzed for correlation: if it correlates with historical playback tasks, the task information is instead written into PlaybackService (or, for select-type tasks, not executed); if not correlated, execution continues.
-- ConfigurationOptimizer (StageOptimizer level): optimizes the user's runtime or startup parameters.
-
-
-### 4.6 Execution module
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Execution is Orchestrator's execution module, used to execute the PhysicalTree. It supports synchronous and asynchronous execution, and executes by dependency while parsing the PhysicalTree.
-
-### 4.7 Reheater module
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; The Reheater allows Execution to dynamically adjust the PhysicalTree's execution plan during execution, e.g. re-executing an ExecTask whose engine request failed.
-## 5. Orchestrator orchestration flow
-
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; For callers, the overall orchestration takes three steps (a sketch follows the figure):
-1. First, obtain an OrchestratorSession through Orchestrator; like a SparkSession, this object is usually a process singleton
-2. Second, orchestrate through the OrchestratorSession to obtain an Orchestration object, the single object returned by orchestration
-3. Third, call the Orchestration's execution method; both asynchronous and synchronous execution modes are supported
-The overall flow is shown in the figure below:
-
-![Orchestrator_progress](../../Images/Architecture/orchestrator/overall/Orchestrator_progress.png)
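-A hedged sketch of the three steps (names per the doc; the builder and method signatures are assumptions):
-
-```scala
-// Stand-in shapes for the Orchestrator entry points.
-trait Orchestration { def execute(): AnyRef; def asyncExecute(): AnyRef }
-trait OrchestratorSession { def orchestrate(jobReq: AnyRef): Orchestration }
-trait Orchestrator { def getOrCreateSession(): OrchestratorSession }
-
-def submit(orchestrator: Orchestrator, jobReq: AnyRef): AnyRef = {
-  val session = orchestrator.getOrCreateSession() // 1. process-singleton session
-  val orchestration = session.orchestrate(jobReq) // 2. orchestrate -> Orchestration
-  orchestration.execute()                         // 3. synchronous execution;
-  // orchestration.asyncExecute() would return a future-like handle instead
-}
-```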
-
-## 6. Common Orchestrator physical plan examples
-
-1. Interactive analysis, split into two Stages
-
-![Orchestrator_computation](../../Images/Architecture/orchestrator/overall/Orchestrator_computation.png)
-
-2. Command and other ExecTasks that only have a function class
-
-![Orchestrator_command](../../Images/Architecture/orchestrator/overall/Orchestrator_command.png)
-
-3. Reheat scenario
-
-![Orchestrator_reheat](../../Images/Architecture/orchestrator/overall/Orchestrator_reheat.png)
-
-4. Transactional
-
-![Orchestrator_transication](../../Images/Architecture/orchestrator/overall/Orchestrator_transication.png)
-
-5. Cache-hit
-
-![Orchestrator_cache](../../Images/Architecture/orchestrator/overall/Orchestrator_cache.png)
-
-
-
-
-
-
-
-
-
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/README.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/README.md
deleted file mode 100644
index 4ca01b2..0000000
--- a/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/README.md
+++ /dev/null
@@ -1,55 +0,0 @@
-## Orchestrator Architecture Design
-
-Linkis's computation orchestration module provides full engine-stack support and rich computation strategies; through orchestration it supports strategies such as dual-read, dual-write and AB, and through integration with the label system it supports multiple job scenarios, e.g. interactive computation jobs, resident jobs and storage jobs.
-
-#### Architecture diagram
-
-![Orchestrator architecture diagram](../../Images/Architecture/orchestrator/linkis_orchestrator_architecture.png)  
-
-
-#### Module introduction
-
-##### 1. Orchestrator-Core
-
-The core module splits task orchestration into roughly seven steps, with the corresponding interfaces Converter, Parser, Validator, Planner, Optimizer, Execution and Reheater (retry); the entity flow between them is shown below:  
-![Orchestrator entity flow](../../Images/Architecture/orchestrator/overall/orchestrator_entity.png)
-
-The core interfaces are defined as follows:
-
-| Core top-level interface/class | Core functionality |
-| --- | --- | 
-| `ConverterTransform`| Converts the user-submitted req request into an orchestration Job, with parameter checks and information supplementation |
-| `ParserTransform`| Parses and splits the Job into multiple Stages, forming the AST tree |
-| `ValidatorTransform` | Checks Job and Stage information, e.g. validation of attached Label information |
-| `PlannerTransform` | Converts the AST-stage Job and Stage into a logical plan, generating the Logical tree; Job and Stage are each converted into LogicalTasks |
-| `OptimizerTransform` | Converts the Logical Tree into the Physical Tree, i.e. the physical plan conversion; the AST tree is also optimized before conversion |
-| `Execution` | Schedules and executes the physical plan's Physical Tree, handling dependencies between sub-jobs |
-| `ReheaterTransform` | Re-schedules and re-executes retryable failed jobs during Execution |
-
-##### 2. Computation-Orchestrator
-
-The standard implementation of Orchestrator for interactive computation scenarios; it provides default implementations of all the abstract interfaces, including the conversion rule sets for languages such as SQL and the concrete logic for requesting execution of interactive jobs.
-Typical class definitions are as follows:
-
-| Core top-level interface/class | Core functionality |
-| --- | --- | 
-| `CodeConverterTransform`| Parses and converts the code attached to a request, e.g. Spark Sql, Hive Sql, Shell and Python|
-| `CodeStageParserTransform` | Parses and splits a Job, targeting CodeJob, i.e. a Job carrying code|
-| `EnrichLabelParserTransform` | Fills in label information while parsing and splitting the Job |
-| `TaskPlannerTransform` | In interactive computation scenarios, converts the Stage information split from the Job into the logical plan, i.e. the Logical Tree |
-| `CacheTaskOptimizer` | Adds cache nodes to the AST tree in the logical plan, optimizing subsequent execution |
-| `ComputePhysicalTransform` | In interactive computation scenarios, converts the logical plan into the physical plan |
-| `CodeLogicalUnitExecTask` | The smallest execution unit in the physical plan for interactive computation scenarios|
-| `ComputationTaskExecutionReceiver` | The RPC callback class for Task execution, receiving callbacks such as task status and progress|
-
-##### 3. Code-Orchestrator
-
-The standard implementation of Orchestrator for resident and storage job scenarios
-
-##### 4. Plugins/Orchestrator-ECM-Plugin
-
-Provides the interface methods Orchestrator needs to connect to LinkisManager and EngineConn, briefly:
-
-| Core top-level interface/class | Core functionality |
-| --- | --- | 
-| `EngineConnManager` | Provides methods to request EngineConn resources and submit execution requests to an EngineConn, and proactively caches available EngineConns|
\ No newline at end of file
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/BML.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/BML.md
deleted file mode 100644
index e385cad..0000000
--- a/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/BML.md
+++ /dev/null
@@ -1,94 +0,0 @@
-
-## Background
-
-BML (the material library service) is Linkis's material management system. It mainly stores users' various file data, including user scripts, resource files and third-party Jar packages, and can also store the libraries an engine needs at runtime.
-
-It provides the following features:
-
-1) Support for various file types, both text and binary. Users in the big data field can store their script files and material archives in this system.
-
-2) Stateless service with multi-instance deployment for high availability. The system can be deployed with multiple instances, each serving independently without interference; all information is stored in the database and shared.
-
-3) Multiple access methods: both a REST interface and an SDK are provided; users can choose according to their needs.
-
-4) Append-style file writes, avoiding excessive small HDFS files. Many small HDFS files degrade overall HDFS performance; we adopt file appending to merge multiple versions of a resource file into one large file, effectively reducing the HDFS file count.
-
-5) Precise permission control and secure storage of users' resource file content. Resource files often contain important content that users want readable only by themselves.
-
-6) Lifecycle management of file upload, update, download and other operational tasks.
-
-## Architecture Diagram
-
-![BML architecture diagram](../../Images/Architecture/bml-02.png)
-
-## Architecture Description
-
-1. The Service layer includes resource management, resource upload, resource download, resource sharing and project resource management.
-
-Resource management covers basic operations such as create/read/update/delete on resources, access permission control and resource expiry checks.
-
-2. File version control:
-every BML resource file carries version information; each update of a resource produces a new version, and querying and downloading historical versions is of course supported. BML uses the version information table to record the offset and size of each version's resource file within its HDFS storage, allowing multiple versions of data to be stored in a single HDFS file.
-
-3. Resource file storage:
-HDFS files are used as the actual data storage. HDFS effectively guarantees that material files are not lost; files are written by appending, avoiding excessive small HDFS files.
-
-### Core Flows
-
-**Uploading a file:**
-
-1.  Determine the operation type of the user's upload: a first upload or an update. A first upload needs a new resource record; the system generates a globally unique resource_id and a resource_location for the resource. The first version A1 of resource A is stored at resource_location in the HDFS file system; after storage, the first version is recorded as V00001. An update upload needs to look up the latest previous version.
-
-2.  Upload the file stream to the specified HDFS file; for an update, append it to the end of the previous content.
-
-3.  Insert a new version record. Each upload produces a new version record; besides the version's metadata, the most important part is the version's file storage location, including the file path, start position and end position.
-
-**Downloading a file:**
-
-1.  When downloading a resource, the user specifies two parameters: resource_id and version; if version is omitted, the latest version is downloaded by default.
-
-2.  Given the resource_id and version parameters, the system queries the resource_version table for the corresponding resource_location, start_byte and end\_byte, skips the first (start_byte-1) bytes of resource\_location through the stream's skipByte method, then reads up to byte end_byte and returns the stream content to the user. (A sketch follows.)
-
-3.  Insert a successful-download record into resource_download_history.
-
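-A sketch of that ranged read, using plain java.io in place of the HDFS client; byte positions are 1-indexed as in the version table below.
-
-```scala
-import java.io.InputStream
-
-// Read exactly one version out of the append-style file:
-// skip the first (startByte - 1) bytes, then read (endByte - startByte + 1) bytes.
-def readVersion(in: InputStream, startByte: Long, endByte: Long): Array[Byte] = {
-  var toSkip = startByte - 1
-  while (toSkip > 0) toSkip -= in.skip(toSkip)
-  val buf = new Array[Byte]((endByte - startByte + 1).toInt)
-  var off = 0
-  while (off < buf.length) {
-    val n = in.read(buf, off, buf.length - off)
-    require(n >= 0, "unexpected EOF while reading version range")
-    off += n
-  }
-  buf
-}
-```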
-## Database Design
-
-1. Resource table (resource)
-
-| Field name        | Purpose                                              | Remarks                            |
-|-------------------|------------------------------------------------------|------------------------------------|
-| resource_id       | String that globally uniquely identifies a resource  | A UUID can be used                 |
-| resource_location | Location where the resource is stored                | e.g. hdfs:///tmp/bdp/\${username}/ |
-| owner             | Owner of the resource                                | e.g. zhangsan                      |
-| create_time       | Record creation time                                 |                                    |
-| is_share          | Whether shared                                       | 0 means not shared, 1 means shared |
-| update\_time      | Last update time of the resource                     |                                    |
-| is\_expire        | Whether the resource has expired                     |                                    |
-| expire_time       | Resource expiry time                                 |                                    |
-
-2. Resource version table (resource_version)
-
-| Field name        | Purpose                          | Remarks               |
-|-------------------|----------------------------------|-----------------------|
-| resource_id       | Uniquely identifies a resource   | Composite primary key |
-| version           | Version of the resource file     |                       |
-| start_byte        | Start byte of the resource file  |                       |
-| end\_byte         | End byte of the resource file    |                       |
-| size              | Resource file size               |                       |
-| resource_location | Location of the resource file    |                       |
-| start_time        | Upload start time                |                       |
-| end\_time         | Upload end time                  |                       |
-| updater           | Updating user                    |                       |
-
-3. Resource download history table (resource_download_history)
-
-| Field       | Purpose                                | Remarks                                          |
-|-------------|----------------------------------------|--------------------------------------------------|
-| resource_id | resource_id of the downloaded resource |                                                  |
-| version     | version of the downloaded resource     |                                                  |
-| downloader  | Downloading user                       |                                                  |
-| start\_time | Download start time                    |                                                  |
-| end\_time   | Download end time                      |                                                  |
-| status      | Whether successful                     | 0 means success, 1 means failure                 |
-| err\_msg    | Failure reason                         | null means success, otherwise the failure reason |
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Cache.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Cache.md
deleted file mode 100644
index d28cbe2..0000000
--- a/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Cache.md
+++ /dev/null
@@ -1,95 +0,0 @@
-## **CSCache Architecture**
-### **Problems to solve**
-
-###  1.1. Problems the memory structure must solve:
-
-1. Support splitting by ContextType: to speed up storage and query performance
-
-2. Support splitting by ContextID: metadata isolation between ContextIDs is required
-
-3. Support LRU: recycle according to a specific algorithm
-
-4. Support keyword search: support indexing by keyword
-
-5. Support indexing: support direct indexing by ContextKey
-
-6. Support traversal: traversal by ContextID and ContextType is required
-
-###  1.2 Problems loading and parsing must solve:
-
-1. Support parsing ContextValue into in-memory data structures: the corresponding keywords must be parsed out of the ContextKey and value.
-
-2. Integration with the Persistence module is required to load and parse ContextID content.
-
-###  1.3 Problems the metric and cleanup mechanisms must solve:
-
-1. Cleanup based on memory usage and access frequency when JVM memory is insufficient
-
-2. Support accounting of each ContextID's memory usage
-
-3. Support accounting of each ContextID's access frequency
-
-## **ContextCache Architecture**
-
-The ContextCache architecture is shown in the figure below:
-
-![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-01.png)
-
-1.  ContextService: provides the external interfaces, including create, delete, update and query;
-
-2.  Cache: stores the context information, mapped and stored by ContextKey and ContextValue
-
-3.  Index: the keyword index, storing the mapping from context-information keywords to ContextKeys;
-
-4.  Parser: parses the keywords out of the context information;
-
-5.  LoadModule: loads information from the persistence layer when ContextCache lacks the corresponding ContextID information;
-
-6.  AutoClear: cleans ContextCache on demand when JVM memory is insufficient;
-
-7.  Listener: collects ContextCache metrics, such as memory usage and access counts.
-
-## **ContextCache storage structure design**
-
-![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-02.png)
-
-The ContextCache storage structure is divided into three layers:
-
-**ContextCache:** stores the mapping between ContextID and ContextIDValue, and can recycle ContextIDs with the LRU algorithm;
-
-**ContextIDValue:** holds the CSKeyValueContext that stores all of the ContextID's context information and indexes, and tracks the ContextID's memory and usage records.
-
-**CSKeyValueContext:** contains the CSInvertedIndexSet, an index set stored by type with keyword support, and the CSKeyValueMapSet, the storage set for ContextKeys and ContextValues.
-
-CSInvertedIndexSet: stores keyword indexes categorized by CSType
-
-CSKeyValueMapSet: stores context information categorized by CSType
-
-## **ContextCache UML class diagram design**
-
-![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-03.png)
-
-## **ContextCache sequence diagram**
-
-The figure below draws the overall flow of looking up the corresponding ContextKeyValue in ContextCache by ContextID, KeyWord and ContextType.
-![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-04.png)
-
-Note: the ContextIDValueGenerator pulls the ContextID's Array[ContextKeyValue] from the persistence layer and uses the ContextKeyValueParser to parse the ContextKeyValues' keywords for index and content storage.
-
-The flows of the other interfaces provided by ContextCacheService are similar and are not repeated here.
-
-## **Keyword parsing logic**
-
-The concrete entity bean of a ContextValue must annotate each get method usable as a keyword with \@keywordMethod; for example, Table's getTableName method must carry the \@keywordMethod annotation.
-
-![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-05.png)
-
-When parsing a ContextKeyValue, the ContextKeyValueParser scans all the KeywordMethod-decorated methods of the passed-in object, invokes each such get method, takes toString of the returned object, parses it with user-selectable rules, and stores the results in the keyword set. The rules include delimiters and regular expressions. (A sketch follows the notes below.)
-
-Notes:
-
-1.  The annotation will be defined in the cs core module
-
-2.  The decorated get methods must take no parameters
-
-3.  The toString method of a get method's return object must return the keyword
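-A sketch of the parsing rule above. KeywordMethod is assumed to be the runtime-retained annotation from the cs core module (it would be defined in Java, e.g. `@Retention(RetentionPolicy.RUNTIME) public @interface KeywordMethod {}`); the delimiter-based splitting shown is one of the user-selectable rules.
-
-```scala
-// Scan every annotated zero-arg getter, call toString on its result,
-// and split the string into keywords by a delimiter.
-def extractKeywords(bean: AnyRef, delimiter: String = ","): Set[String] =
-  bean.getClass.getMethods
-    .filter(m => m.isAnnotationPresent(classOf[KeywordMethod]) && m.getParameterCount == 0)
-    .flatMap(m => m.invoke(bean).toString.split(delimiter))
-    .toSet
-```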
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Client.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Client.md
deleted file mode 100644
index d72a37c..0000000
--- a/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Client.md
+++ /dev/null
@@ -1,61 +0,0 @@
-## **CSClient design ideas and implementation**
-
-
-The CSClient is the client through which every microservice interacts with the CSServer group. The CSClient must satisfy the following capabilities:
-
-1.  A microservice can request a context object from the cs-server
-
-2.  A microservice can register context information with the cs-server
-
-3.  A microservice can update context information on the cs-server
-
-4.  A microservice can fetch context information from the cs-server
-
-5.  Certain special microservices can sniff out operations that have modified context information in the cs-server
-
-6.  The CSClient can give a clear indication when the whole csserver cluster fails
-
-7.  The CSClient must provide a way to copy all context information of csid1 into a new csid2, for scheduled execution
-
->   The overall approach is to send HTTP requests via Linkis's own linkis-httpclient, implementing the various Action and Result entity classes to send requests and receive responses.
-
-### 1. Requesting a context object
-
-Requesting a context object: for example, when a user creates a new workflow on the frontend, dss-server needs to request a context object from the cs-server. When requesting it, the workflow's identification information (project name, workflow name) is sent to the CSServer through the CSClient (at this point the gateway should dispatch to a random instance, since no csid information is carried yet). Once the request comes back with a correct result, a csid is returned and bound to the workflow.
-
-### 2. Registering context information
-
->   The registration capability: for example, a user uploads a resource file on the frontend page; the file content is uploaded to dss-server, which stores the content in BML and then needs to register the resourceid and version obtained from BML with the cs-server. This uses the CSClient's registration capability, which registers by passing in the csid together with the cskey
->   and csvalue (resourceid and version).
-
-### 3. Updating registered context
-
->   The update capability: for example, a user uploaded a resource file test.jar, and the csserver already holds the registered information. If the user updates that resource file while editing the workflow, the cs-server must update the content. In this case the CSClient's update interface is called.
-
-### 4. Fetching context
-
-Context information registered with the csserver needs to be read during variable substitution, resource file download, and when downstream nodes use information produced by upstream nodes. For example, when the engine side executes code it needs to download BML resources; it interacts with the csserver through the csclient to get the file's resourceid and version in BML and then downloads them.
-
-### 5. Certain special microservices can sniff out operations that have modified context information in the cs-server
-
-This operation is based on the following example: a widget node is strongly linked with its upstream sql node. Say the user writes a sql in the sql node whose result-set metadata is the three fields a, b and c, and the downstream widget node binds that sql and can edit the three fields on the page. If the user then changes the sql statement so the metadata becomes the four fields a, b, c and d, the user currently has to refresh manually. We want the widget node to update its metadata automatically whenever the script changes. This usually uses the listener pattern; for simplicity, heartbeat-based polling can also be used.
-
-### 6. The CSClient must provide a way to copy all context information of csid1 into a new csid2, for scheduled execution
-
-Once a user publishes a project, they want all the project's information tagged, similar to a git tag. The resource files and custom variables here will no longer change, but some dynamic information, such as generated result sets, will still update the csid's content. So the csclient needs to provide an interface that copies all context information of a csid1, for microservices to call.
-
-## **Implementation of the ClientListener module**
-
-A client sometimes wants to know as soon as possible that a certain csid and cskey have changed in the cs-server. For example, the visualis csclient needs to know when the previous sql node has changed and be notified. The server has a listener module, and the client needs one too. For example, if a client wants to listen to changes of a certain cskey of a certain csid, it registers that cskey with the callbackEngine of the corresponding csserver instance. Later, if another client changes that cskey's content, then when the first client sends a heartbeat, the callbackengine must report this information for all the cskeys the client is listening to; this way, the first client learns that the cskey's content has changed. When the heartbeat returns data, all listeners registered with the ContextClientListenerBus should be notified via their on methods.
-
-![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-01.png)
-
-![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-02.png)
-
-## **Implementation of the GatewayRouter**
-
-
-The Gateway plugin implements Context forwarding. The plugin's forwarding logic goes through the GatewayRouter and splits into two cases. The first is requesting a context object: the information carried by the CSClient does not yet contain a csid, so the decision relies on the eureka registry, and the first request is sent randomly to one microservice instance.  
-The second case carries the ContextID content: we parse the csid by string splitting, obtain each instance's information, then use eureka to check whether that microservice still exists; if it does, the request is sent to that microservice instance
-
-![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-03.png)
\ No newline at end of file
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_HighAvailable.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_HighAvailable.md
deleted file mode 100644
index 05a165f..0000000
--- a/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_HighAvailable.md
+++ /dev/null
@@ -1,86 +0,0 @@
-## **CS HA Architecture Design**
-
-### 1. CS HA architecture overview
-
-#### (1) CS HA architecture diagram
-
-![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-01.png)
-
-#### (2) Problems to solve
-
--   HA of the Context instance object
-
--   Generating the CSID requested when the Client creates a workflow
-
--   The CS Server's alias list
-
--   Unified CSID generation and parsing rules
-
-#### (3) Main design ideas
-
-① Load balancing
-
-When a client creates a new workflow, it requests, with equal probability, the HA module of some Server to generate a new HAID. The HAID information contains that main Server's information (the main instance below) and a backup instance, which is the least-loaded instance among the remaining Servers, plus a corresponding ContextID. The generated HAID is bound to the workflow and persisted to the database, and all subsequent change requests for that workflow are sent to the main instance, achieving even load distribution.
-
-② High availability
-
-In subsequent operations, when the client or the gateway determines that the main instance is unavailable, operation requests are forwarded to the backup instance, achieving high availability of the service. The backup instance's HA module first verifies the request's legality against the HAID information.
-
-③ Alias mechanism
-
-An alias mechanism is used for machines: the instance information contained in the HAID uses custom aliases, and the backend maintains an alias mapping queue. The HAID is used when interacting with the client, while the ContextID is used when interacting with other backend components; a dynamic proxy mechanism converts the HAID into the ContextID when performing concrete operations.
-
-### 2. Module design
-
-#### (1) Module diagram
-
-![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-02.png)
-
-#### (2) Concrete modules
-
-① ContextHAManager module
-
-Provides interfaces for the CS Server to generate CSIDs and HAIDs, and provides a dynamic-proxy-based alias conversion interface;
-
-Calls the persistence module interface to persist CSID information;
-
-② AbstractContextHAManager module
-
-The abstraction of ContextHAManager; multiple ContextHAManagers can be implemented from it;
-
-③ InstanceAliasManager module
-
-An RPC module providing the Instance-to-alias conversion interface, maintaining the alias mapping queue, and providing lookups between aliases and CS Server instances; it also provides an interface for verifying whether a host is valid;
-
-④ HAContextIDGenerator module
-
-Generates a new HAID and wraps it in the client-agreed format to return to the client. The HAID structure is as follows (a sketch follows):
-
-\${length of the first instance}\${length of the second instance}{instance alias 1}{instance alias 2}{actual ID}, where the actual ID is the ContextID
-Key;
-
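-A sketch of encoding and decoding that layout; the two-digit, zero-padded length encoding is an assumption for illustration:
-
-```scala
-// HAID = len(alias1) + len(alias2) + alias1 + alias2 + contextId
-def buildHAID(alias1: String, alias2: String, contextId: String): String =
-  f"${alias1.length}%02d${alias2.length}%02d$alias1$alias2$contextId"
-
-def parseHAID(haid: String): (String, String, String) = {
-  val len1 = haid.substring(0, 2).toInt
-  val len2 = haid.substring(2, 4).toInt
-  val alias1 = haid.substring(4, 4 + len1)                // main instance alias
-  val alias2 = haid.substring(4 + len1, 4 + len1 + len2)  // backup instance alias
-  (alias1, alias2, haid.substring(4 + len1 + len2))       // remainder is the ContextID key
-}
-```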
-⑤ ContextHAChecker module
-
-Provides the HAID validation interface. Every received request is checked for a valid ID format and for whether the current host is the main or backup instance: if it is the main instance, the check passes; if it is the backup instance, the check passes only if the main instance has failed.
-
-⑥ BackupInstanceGenerator module
-
-Generates the backup instance and attaches it to the CSID information;
-
-⑦ MultiTenantBackupInstanceGenerator interface
-
-(Reserved interface, not implemented yet)
-
-### 3. UML class diagram
-
-![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-03.png)
-
-### 4. HA module operation sequence diagram
-
-![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-04.png)
-
-Generating a CSID for the first time:
-The client sends a request, the Gateway forwards it to any Server, and the HA module generates an HAID containing the main instance, the backup instance and the CSID, completing the binding of the workflow to the HAID.
-
-When the client sends a change request and the Gateway determines that the main instance has failed, it forwards the request to the backup instance. After the backup instance verifies the HAID, it loads the Instance and handles the request.
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Listener.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Listener.md
deleted file mode 100644
index 74329c1..0000000
--- a/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Listener.md
+++ /dev/null
@@ -1,33 +0,0 @@
-## **Listener Architecture**
-
-In DSS, when a node changes its metadata, the context of the whole workflow changes, and we expect all nodes to perceive the change and update their metadata automatically. We implement this with the listener pattern and use a heartbeat mechanism for polling, keeping the context metadata consistent.
-
-### **How a client registers itself, registers a CSKey, and updates a CSKey**
-
-![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-01.png)
-
-The main steps are as follows (a client-side sketch follows):
-
-1. Registration: clients client1, client2, client3 and client4 register themselves and the CSKeys they want to listen to with the csserver via HTTP requests; the Service obtains the callback engine instance through the external interface and registers the clients and their corresponding CSKeys.
-
-2. Update: if node ClientX updates a CSKey's content, the Service updates the CSKey cached in ContextCache; ContextCache delivers the update to the ListenerBus, and the ListenerBus notifies the concrete listener to consume it (i.e. the ContextKeyCallbackEngine updates the client's CSKeys); events not consumed before the timeout are removed automatically.
-
-3. Heartbeat mechanism:
-
-All clients use heartbeats to detect whether CSKey values in the ContextKeyCallbackEngine have changed.
-
-The ContextKeyCallbackEngine returns updated CSKey values to all registered clients through the heartbeat mechanism. Clients whose heartbeats time out are removed.
-
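-A hedged sketch of the client side of this protocol; the service interface is a stand-in, not the real CS API.
-
-```scala
-// Stand-in for the csserver-side registration/heartbeat endpoints.
-trait CSListenerService {
-  def register(clientId: String, csId: String, csKeys: Set[String]): Unit
-  def heartbeat(clientId: String): Map[String, String] // changed CSKey -> new value
-}
-
-// Register interest, then poll; changed CSKeys are pushed to the callback.
-def startPolling(svc: CSListenerService, clientId: String, csId: String,
-                 csKeys: Set[String])(onChange: (String, String) => Unit): Thread = {
-  svc.register(clientId, csId, csKeys)
-  val t = new Thread(() => while (true) {
-    svc.heartbeat(clientId).foreach { case (k, v) => onChange(k, v) }
-    Thread.sleep(5000)
-  })
-  t.setDaemon(true); t.start(); t
-}
-```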
-### **Listener UML class diagram**
-
-![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-02.png)
-
-Interface: ListenerManager
-
-External: provides the ListenerBus for event delivery.
-
-Internal: provides the callback engine for concrete event registration, access, updates, heartbeat handling and other logic
-
-## **Listener callbackengine sequence diagram**
-
-![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-03.png)
\ No newline at end of file
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Persistence.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Persistence.md
deleted file mode 100644
index 13fae2f..0000000
--- a/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Persistence.md
+++ /dev/null
@@ -1,8 +0,0 @@
-## **CSPersistence Architecture**
-
-### Persistence UML diagram
-
-![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-persistence-01.png)
-
-
-The Persistence module mainly defines the persistence-related operations of ContextService. The entities mainly include CSID, ContextKeyValue-related, CSResource-related and CSTable-related ones.
\ No newline at end of file
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Search.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Search.md
deleted file mode 100644
index 073cfd7..0000000
--- a/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Search.md
+++ /dev/null
@@ -1,127 +0,0 @@
-## **CSSearch Architecture**
-### **Overall architecture**
-
-As shown below:
-
-![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-01.png)
-
-1.  ContextSearch: the query entry point; it accepts query conditions defined as a Map and returns the corresponding results according to the conditions.
-
-2.  Construction module: each condition type has a corresponding Parser, responsible for converting a Map-form condition into a Condition object, concretely implemented by calling ConditionBuilder logic. Conditions with complex logical relations go through the ConditionOptimizer for cost-based optimization of the query plan.
-
-3.  Execution module: filters the results matching the conditions out of the Cache. Depending on the query target, there are three execution modes, Ruler, Fetcher and Matcher; the concrete logic is described later.
-
-4.  Evaluation module: responsible for computing condition execution costs and keeping statistics of historical executions.
-
-### **Query condition definition (ContextSearchCondition)**
-
-A query condition specifies how to filter the matching subset out of a ContextKeyValue collection. Query conditions can be combined into more complex query conditions through logical operations.
-
-1.  Supports matching on ContextType, ContextScope and KeyWord
-
-    1.  Each corresponds to a Condition type
-
-    2.  In the Cache, each of these should have a corresponding index
-
-2.  Supports contains/regex matching modes on keys
-
-    1.  ContainsContextSearchCondition: contains a string
-
-    2.  RegexContextSearchCondition: matches a regular expression
-
-3.  Supports or, and and not logical operations
-
-    1.  Unary operation UnaryContextSearchCondition:
-
->   supports logical operations on a single parameter, e.g. NotContextSearchCondition
-
-1.  Binary operation BinaryContextSearchCondition:
-
->   supports logical operations on two parameters, defined as LeftCondition and RightCondition, e.g. OrContextSearchCondition and AndContextSearchCondition
-
-1.  Each logical operation corresponds to an implementation class of one of the subclasses above
-
-2.  The UML class diagram of this part is as follows:
-
-![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-02.png)
-
-### **Building query conditions**
-
-1.  Supports building through ContextSearchConditionBuilder: if ContextType, ContextScope, KeyWord and contains/regex matches are declared together during building, they are automatically connected with the And logical operation
-
-2.  Supports logical operations between Conditions that return a new Condition: And, Or and Not (consider the condition1.or(condition2) form, which requires the top-level Condition interface to define the logical-operation methods; see the sketch after this list)
-
-3.  Supports building from a Map through the ContextSearchParser corresponding to each underlying implementation class
-
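-A tiny stand-in for the Condition hierarchy, showing the condition1.or(condition2) style described above (the real classes carry more state):
-
-```scala
-sealed trait Condition {
-  def and(o: Condition): Condition = And(this, o)
-  def or(o: Condition): Condition  = Or(this, o)
-  def not: Condition               = Not(this)
-}
-case class Contains(s: String) extends Condition // key contains a string
-case class Regex(p: String)    extends Condition // key matches a regex
-case class And(l: Condition, r: Condition) extends Condition
-case class Or(l: Condition, r: Condition)  extends Condition
-case class Not(c: Condition)               extends Condition
-
-// e.g. keys containing "table" but not matching "tmp.*"
-val cond = Contains("table").and(Regex("tmp.*").not)
-```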
-### **Executing query conditions**
-
-1.  The three ways a query condition acts:
-
-    1.  Ruler: filters the matching ContextKeyValue sub-Array out of an Array
-
-    2.  Matcher: decides whether a single ContextKeyValue matches the condition
-
-    3.  Fetcher: filters the Array of matching ContextKeyValues out of the ContextCache
-
-2.  Each underlying Condition has a corresponding Execution, responsible for maintaining the corresponding Ruler, Matcher and Fetcher.
-
-### **Query entry ContextSearch**
-
-Provides the search interface, which receives a Map as the parameter and filters the corresponding data out of the Cache.
-
-1.  Use the Parser to convert the Map-form condition into a Condition object
-
-2.  Use the Optimizer to obtain cost information and determine the query order based on it
-
-3.  Execute the corresponding Ruler/Fetcher/Matcher logic through the corresponding Execution to obtain the search result
-
-![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-03.png)
-
-### **Query optimization**
-
-1.  OptimizedContextSearchCondition maintains the condition's Cost and Statistics information:
-
-    1.  Cost information: the CostCalculator decides whether a Condition's cost can be computed; if so, it returns the corresponding Cost object
-
-    2.  Statistics information: start/end/execution time, input rows, output rows
-
-2.  Implement a CostContextSearchOptimizer whose optimize method tunes the Condition based on its cost and converts it into an OptimizedContextSearchCondition object. The concrete logic is described as follows:
-
-    1.  Decompose a complex Condition, according to its combination of logical operations, into a tree structure in which every leaf node is a basic simple Condition and every non-leaf node is a logical operation.
-
->   Tree A shown below is a complex condition combined from the five simple conditions A to E through various logical operations.
-
-![](./../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-04.png)
-<center>(Tree A)</center>
-
-1.  Executing these Conditions is in fact a depth-first, left-to-right traversal of the tree. By the commutativity of the logical operations, the left-right order of a node's children in the Condition tree can be swapped, so all possible trees under all possible execution orders can be enumerated.
-
->   Tree B shown below is another possible order of tree A above; its execution result is exactly the same as tree A's, only the execution order of the parts is adjusted.
-
-![](./../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-05.png)
-<center>(Tree B)</center>
-
-1.  For each tree, compute costs starting from the leaf nodes and aggregate them to the root, giving that tree's final cost; the tree with the smallest cost is the optimal execution order.
-
->   The rules for computing node costs are as follows:
-
-1.  For leaf nodes, each node has two attributes: Cost and Weight. Cost is the cost computed by the CostCalculator; Weight is assigned according to the node's position in the execution order, currently 1 for the left and 0.5 for the right by default, to be tuned later. (The rationale for the weights is that the left condition can in some cases already decide the whole combined logic's match by itself, so the right condition does not always have to execute and its actual cost should be discounted.)
-
-2.  For non-leaf nodes, Cost = the sum of Cost x Weight over all child nodes; the Weight assignment logic is the same as for leaf nodes.
-
->   Taking trees A and B as examples, the costs of the two trees are computed as shown below, with the numbers in the nodes being Cost\|Weight and the Costs of the five simple conditions A-E assumed to be 10, 100, 50, 10 and 100. This shows that tree B's cost is smaller than tree A's, making it the better plan.
-
-
-<center class="half">
-    <img src="./../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-06.png" width="300"> <img src="./../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-07.png" width="300">
-</center>
-
-1.  Ideas for measuring a simple condition's Cost with the CostCalculator:
-
-    1.  Conditions acting on an index: the cost is determined by the distribution of index values. For example, if the Array that condition A gets from the Cache has length 100 and condition B's has length 200, then condition A's cost is smaller than B's.
-
-    2.  Conditions requiring traversal:
-
-        1.  Give an initial Cost based on the condition's own matching mode: e.g. 100 for Regex, 10 for Contains, etc. (the concrete values are to be tuned during implementation)
-
-        2.  Based on the efficiency (throughput per unit time) of historical queries, continually adjust from the initial Cost to obtain the real-time Cost.
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Service.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Service.md
deleted file mode 100644
index 7e66f9c..0000000
--- a/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Service.md
+++ /dev/null
@@ -1,55 +0,0 @@
-## **ContextService Architecture**
-
-### **Horizontal division**
-
-Horizontally it is divided into three modules: Restful, Scheduler and Service
-
-#### Restful responsibilities:
-
-    Wrap requests as httpjobs and submit them to the Scheduler
-
-#### Scheduler responsibilities:
-
-    Find the corresponding service via the httpjob protocol's ServiceName to execute the job
-
-#### Service responsibilities:
-
-    The module that actually executes the request logic; wraps the ResponseProtocol and wakes the waiting thread in Restful
-
-### **Vertical division**
-Vertically it is divided into 4 modules: Listener, History, ContextId and Context:
-
-#### Listener responsibilities:
-
-1.  Handle client registration and binding (writing to the database and registering in the CallbackEngine)
-
-2.  The heartbeat interface, returning Array[ListenerCallback] through the CallbackEngine
-
-#### History responsibilities:
-Create and remove history; operate Persistence for DB persistence
-
-#### ContextId responsibilities:
-Mainly connect to Persistence for ContextId creation, update and removal
-
-#### Context responsibilities:
-
-1.  For removal, reset and similar methods, first operate Persistence for DB persistence and then update the ContextCache
-
-2.  Wrap the query condition and go to the ContextSearch module to obtain the corresponding ContextKeyValue data
-
-The request access steps are shown in the figure below:
-![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-01.png)
-
-## **UML class diagram** 
-![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-02.png)
-
-## **Scheduler thread model**
-
-The Restful thread pool must be kept from filling up
-
-![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-03.png)
-
-The sequence diagram is as follows:
-![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-04.png)
-
-
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/README.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/README.md
deleted file mode 100644
index fc64eb4..0000000
--- a/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/README.md
+++ /dev/null
@@ -1,124 +0,0 @@
-## **Background**
-
-### **What is the Context?**
-
-All the necessary information that keeps some operation going. For example: when reading three books at the same time, the page number you have reached in each book is the context for continuing that book.
-
-### **Why do we need CS (Context Service)?**
-
-CS solves the problem of sharing data and information across multiple systems in a data application development flow.
-
-For example, system B needs to use a piece of data produced by system A. The usual approaches are:
-
-1.  B calls the data access interface developed by A;
-
-2.  B reads the data that A wrote into some shared storage.
-
-With CS, systems A and B only need to interact with CS: they write the data and information to share into CS and read the data and information they need from CS. There is no need for pairwise adaptation between external systems, which greatly reduces the call complexity and coupling of inter-system information sharing and makes the boundaries of each system clearer.
-
-## **Product Scope**
-
-![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-01.png)
... 14671 lines suppressed ...

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@linkis.apache.org
For additional commands, e-mail: commits-help@linkis.apache.org