Posted to commits@linkis.apache.org by ca...@apache.org on 2022/08/25 01:45:10 UTC

[incubator-linkis-website] branch dev updated: add EngineConn Information Recording Architecture and service isolation Architecture docs (#492)

This is an automated email from the ASF dual-hosted git repository.

casion pushed a commit to branch dev
in repository https://gitbox.apache.org/repos/asf/incubator-linkis-website.git


The following commit(s) were added to refs/heads/dev by this push:
     new 7dec79794e add EngineConn Information Recording Architecture and service isolation Architecture docs (#492)
7dec79794e is described below

commit 7dec79794efa9978e874ae36102b998e46258645
Author: peacewong <pe...@apache.org>
AuthorDate: Thu Aug 25 09:45:03 2022 +0800

    add EngineConn Information Recording Architecture and service isolation Architecture docs (#492)
    
    * add ec history arc doc
    
    * modify ec history arc doc
    
    * add Service isolation Design doc
    
    * add cs cleaner architecture
    
    * update job execute doc
    
    * update release doc
---
 .../ec-resource-management-api.md                  | 163 ++++++++++-
 ...submission-preparation-and-execution-process.md | 315 ++++++++++++---------
 .../linkis-manager/app-manager.md                  |   2 +-
 .../linkis-manager/ec-history-arc.md               |  79 ++++++
 .../linkis-manager/resource-manager.md             |   2 +-
 .../microservice-governance-services/gateway.md    |   2 +-
 .../microservice-governance-services/overview.md   |   2 +-
 .../service_isolation.md                           | 197 +++++++++++++
 .../context-service/content-service-cleanup.md     |   2 +-
 .../context-service/context-service-cache.md       |   2 +-
 .../context-service/context-service-search.md      |   2 +-
 docs/release.md                                    |   2 +
 .../error-guide/interface.md                       |  48 ++--
 docs/tuning-and-troubleshooting/overview.md        |   5 +-
 docs/user-guide/sdk-manual.md                      | 143 +++-------
 .../ec-resource-management-api.md                  | 163 +++++++++++
 ...submission-preparation-and-execution-process.md | 307 ++++++++++----------
 .../linkis-manager/ec-history-arc.md               |  79 ++++++
 .../linkis-manager/label-manager.md                |   2 +-
 .../linkis-manager/resource-manager.md             |   2 +-
 .../service_isolation.md                           | 197 +++++++++++++
 .../context-service/content-service-cleanup.md     |   2 +-
 .../context-service/context-service-cache.md       |   2 +-
 .../context-service/context-service.md             |   2 +-
 .../current/release.md                             |   2 +
 .../error-guide/interface.md                       |  48 ++--
 .../current/tuning-and-troubleshooting/overview.md |   6 +-
 .../current/user-guide/sdk-manual.md               | 149 +++-------
 .../Architecture/Gateway/service_isolation_arc.png | Bin 0 -> 104076 bytes
 .../Gateway/service_isolation_console.png          | Bin 0 -> 210481 bytes
 .../Gateway/service_isolation_time.png             | Bin 0 -> 36324 bytes
 .../linkis_job_arc.png                             | Bin 0 -> 59783 bytes
 .../linkis_job_flow.png                            | Bin 0 -> 35370 bytes
 .../linkis_job_time.png                            | Bin 0 -> 62675 bytes
 .../Architecture/LinkisManager/ecHistoryArc.png    | Bin 0 -> 20293 bytes
 .../Architecture/LinkisManager/ecHistoryTime.png   | Bin 0 -> 50871 bytes
 36 files changed, 1357 insertions(+), 570 deletions(-)

diff --git a/docs/api/http/linkis-cg-linkismanager-api/ec-resource-management-api.md b/docs/api/http/linkis-cg-linkismanager-api/ec-resource-management-api.md
index 8c37e97cd5..05cda8bfd2 100644
--- a/docs/api/http/linkis-cg-linkismanager-api/ec-resource-management-api.md
+++ b/docs/api/http/linkis-cg-linkismanager-api/ec-resource-management-api.md
@@ -124,4 +124,165 @@ sidebar_position: 10
     "method": "",
     "status": 0
 }
-````
\ No newline at end of file
+````
+
+## Search EC information
+
+
+**Interface address**: `/api/rest_j/v1/linkisManager/ecinfo/ecrHistoryList`
+
+
+**Request method**: `GET`
+
+
+**Request data type**: `application/x-www-form-urlencoded`
+
+
+**Response data type**: `application/json`
+
+
+**Interface description**:<p>Get EC information</p>
+
+
+
+**Request Parameters**:
+
+
+| Parameter name | Parameter description | Request type | Required | Data type | schema |
+| -------- | -------- | ----- | -------- | -------- | ------ |
+|instance|instance|query|false|string|
+|creator|creator|query|false|string|
+|startDate|startDate|query|false|string|
+|endDate|endDate|query|false|string|
+|engineType|engineType|query|false|string|
+|pageNow|pageNow|query|false|Int|
+|pageSize|pageSize|query|false|Int|
+
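+**Request example** (a sketch, assuming the gateway is deployed on localhost:9001 and a valid login cookie has been saved to cookies.txt):
+
+````shell
+# Hypothetical call: list historical ECs created by user hadoop for engine type spark
+curl -b cookies.txt -X GET \
+  "http://localhost:9001/api/rest_j/v1/linkisManager/ecinfo/ecrHistoryList?creator=hadoop&engineType=spark&pageNow=1&pageSize=20"
+````
+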
+**Response Status**:
+
+
+| Status code | Description | schema |
+| -------- | -------- | ----- |
+|200|OK|Message|
+|401|Unauthorized|
+|403|Forbidden|
+|404|Not Found|
+
+
+**Response parameters**:
+
+
+| parameter name | parameter description | type | schema |
+| -------- | -------- | ----- |----- |
+|data|Dataset|object|
+|message|Description|string|
+|method|request url|string|
+|status|Status|integer(int32)|integer(int32)|
+
+
+**Sample Response**:
+````javascript
+{
+    "message": "",
+        "status": 0,
+        "data": {
+        "engineList": [
+            {
+                "id": -94209540.07806732,
+                "requestResource": "consectetur dolor eiusmod ipsum",
+                "releasedResource": "est in id Ut",
+                "usedTimes": -51038117.02855969,
+                "ticketId": "adipisicing in nostrud do",
+                "ecmInstance": "id magna Lorem eiusmod",
+                "engineType": "dolor",
+                "usedTime": -38764910.74278392,
+                "logDirSuffix": "sunt eiusmod aute et",
+                "releaseTime": -33417043.232267484,
+                "usedResource": "in amet veniam velit",
+                "requestTimes": -15643696.319572791,
+                "labelValue": "veniam incididunt magna",
+                "releaseTimes": 96384811.3484546,
+                "createTime": 38434279.49900183,
+                "serviceInstance": "consequat aliqua in enim",
+                "createUser": "Lorem Ut occaecat amet"
+            },
+            {
+                "labelValue": "adipisicing deserunt do",
+                "usedTimes": 49828407.223826766,
+                "usedResource": "mollit laboris cupidatat enim",
+                "releaseTimes": -73400915.22672182,
+                "releasedResource": "est qui id ipsum mollit",
+                "requestResource": "deserunt reprehenderit ut",
+                "serviceInstance": "do laborum",
+                "requestTimes": -33074164.700212136,
+                "ecmInstance": "dolore",
+                "logDirSuffix": "ea incididunt",
+                "createUser": "Ut exercitation officia dolore ipsum",
+                "usedTime": 25412456.522457644,
+                "createTime": -93227549.70578489,
+                "id": -84032815.0589972,
+                "ticketId": "eu in mollit do",
+                "engineType": "non Ut eu",
+                "releaseTime": 34923394.9602966
+            },
+            {
+                "releaseTime": -91057731.93164417,
+                "usedTime": 99226623.97199354,
+                "id": 59680041.603043556,
+                "requestResource": "officia Ut enim nulla",
+                "usedTimes": -14680356.219609797,
+                "logDirSuffix": "proident amet reprehenderit tempor",
+                "ticketId": "minim esse",
+                "releaseTimes": 37270921.94107443,
+                "serviceInstance": "enim adipisicing cupidatat",
+                "createUser": "culpa",
+                "requestTimes": -33504917.797325186,
+                "releasedResource": "et dolore quis",
+                "ecmInstance": "elit dolor adipisicing id",
+                "createTime": -38827280.78902944,
+                "engineType": "ullamco in eiusmod reprehenderit aute",
+                "labelValue": "dolore qui labore nulla laboris",
+                "usedResource": "irure sint nostrud Excepteur sunt"
+            },
+            {
+                "requestResource": "deserunt incididunt enim",
+                "releaseTimes": -16708903.732444778,
+                "id": 80588551.12419662,
+                "ecmInstance": "et veniam",
+                "releaseTime": -50240956.63233949,
+                "usedTimes": -5348294.728038415,
+                "labelValue": "incididunt tempor reprehenderit quis eu",
+                "createUser": "in in",
+                "serviceInstance": "minim sit",
+                "ticketId": "in dolore",
+                "usedTime": -30138563.761232898,
+                "logDirSuffix": "quis laborum ea",
+                "createTime": 65920455.93896958,
+                "requestTimes": 38810152.0160971,
+                "engineType": "est in Ut proident",
+                "usedResource": "nulla laboris Ut",
+                "releasedResource": "cupidatat irure"
+            },
+            {
+                "usedResource": "Lorem adipisicing dolor",
+                "createTime": -11906770.11266476,
+                "labelValue": "et id magna",
+                "releaseTimes": 32546901.20497243,
+                "id": -90442428.4679744,
+                "logDirSuffix": "aute ut eu commodo",
+                "ticketId": "cillum sint non deserunt",
+                "requestResource": "non velit sunt consequat culpa",
+                "requestTimes": -75282512.0022062,
+                "usedTime": 6378131.554130778,
+                "releasedResource": "Duis in",
+                "serviceInstance": "dolore ut officia",
+                "usedTimes": 70810503.51038182,
+                "createUser": "voluptate sed",
+                "ecmInstance": "laboris do sit dolore ipsum",
+                "engineType": "id",
+                "releaseTime": 37544575.30154848
+            }
+        ]
+    }
+}
+````
\ No newline at end of file
diff --git a/docs/architecture/computation-governance-services/job-submission-preparation-and-execution-process.md b/docs/architecture/computation-governance-services/job-submission-preparation-and-execution-process.md
index b0bfecf4d5..7fdd3f61e8 100644
--- a/docs/architecture/computation-governance-services/job-submission-preparation-and-execution-process.md
+++ b/docs/architecture/computation-governance-services/job-submission-preparation-and-execution-process.md
@@ -3,144 +3,179 @@ title: Job Submission
 sidebar_position: 2
 ---
 
-# Job submission, preparation and execution process
-
-The submission and execution of computing tasks (Job) is the core capability provided by Linkis. It almost colludes with all modules in the Linkis computing governance architecture and occupies a core position in Linkis.
-
-The whole process, starting at submitting user's computing tasks from the client and ending with returning final results, is divided into three stages: submission -> preparation -> executing. The details are shown in the following figure.
-
-![The overall flow chart of computing tasks](/Images/Architecture/Job_submission_preparation_and_execution_process/overall.png)
-
-Among them:
-
-- Entrance, as the entrance to the submission stage, provides task reception, scheduling and job information forwarding capabilities. It is the unified entrance for all computing tasks. It will forward computing tasks to Orchestrator for scheduling and execution.
-- Orchestrator, as the entrance to the preparation phase, mainly provides job analysis, orchestration and execution capabilities.
-- Linkis Manager: The management center of computing governance capabilities. Its main responsibilities are as follows:
-
-  1. ResourceManager:Not only has the resource management capabilities of Yarn and Linkis EngineConnManager, but also provides tag-based multi-level resource allocation and recovery capabilities, allowing ResourceManager to have full resource management capabilities across clusters and across computing resource types;
-  2. AppManager:  Coordinate and manage all EngineConnManager and EngineConn, including the life cycle of EngineConn application, reuse, creation, switching, and destruction to AppManager for management;
-  3. LabelManager: Based on multi-level combined labels, it will provide label support for the routing and management capabilities of EngineConn and EngineConnManager across IDC and across clusters;
-  4. EngineConnPluginServer: Externally provides the resource generation capabilities required to start an EngineConn and EngineConn startup command generation capabilities.
-- EngineConnManager: It is the manager of EngineConn, which provides engine life-cycle management, and at the same time reports load information and its own health status to RM.
-- EngineConn: It is the actual connector between Linkis and the underlying computing storage engines. All user computing and storage tasks will eventually be submitted to the underlying computing storage engine by EngineConn. According to different user scenarios, EngineConn provides full-stack computing capability framework support for interactive computing, streaming computing, off-line computing, and data storage tasks.
-
-## 1. Submission Stage
-
-The submission phase is mainly the interaction of Client -> Linkis Gateway -> Entrance, and the process is as follows:
-
-![Flow chart of submission phase](/Images/Architecture/Job_submission_preparation_and_execution_process/submission.png)
-
-1. First, the Client (such as the front end or the client) initiates a Job request, and the job request information is simplified as follows (for the specific usage of Linkis, please refer to [How to use Linkis](../../user-guide/how-to-use.md)):
-```
-POST /api/rest_j/v1/entrance/submit
-```
-
-```json
-{
-     "executionContent": {"code": "show tables", "runType": "sql"},
-     "params": {"variable": {}, "configuration": {}}, //not required
-     "source": {"scriptPath": "file:///1.hql"}, //not required, only used to record code source
-     "labels": {
-         "engineType": "spark-2.4.3", //Specify engine
-         "userCreator": "username-IDE" // Specify the submission user and submission system
-     }
+> Task execution is the core capability of Linkis. It invokes Linkis' three service layers: the computing governance services, the public enhancement services and the microservice governance services. It currently supports OLAP, OLTP, streaming and other engine task types. This article introduces the submission, preparation, execution and result-return process of OLAP-type engine tasks.
+
+## Keywords:
+- LinkisMaster: the management service in Linkis' computing governance layer, mainly consisting of control services such as AppManager, ResourceManager and LabelManager; formerly known as the LinkisManager service
+- Entrance: the entry service in the computing governance layer, responsible for task scheduling, status control, task information push, etc.
+- Orchestrator: Linkis' orchestration service, which provides powerful orchestration and computing-strategy capabilities for scenarios such as multi-active, active-standby, transaction, replay, rate limiting, and heterogeneous or mixed computing; at this stage Orchestrator is used by the Entrance service
+- EngineConn (EC): the engine connector, responsible for accepting tasks and submitting them to underlying engines such as Spark, Hive, Flink, Presto and Trino for execution
+- EngineConnManager (ECM): Linkis' EC process management service, responsible for the life cycle (start, stop) of EngineConn
+- LinkisEnginePluginServer: the service that manages the startup materials and configuration of each engine; it also provides the startup command for each EngineConn and the resources each EngineConn requires
+- PublicEnhancementService (PES): the public enhancement services, providing unified configuration management, context service, material library, data source management, microservice management and historical task query to the other microservice modules
+
+## 1. Linkis interactive task execution architecture
+### 1.1 Task execution considerations
+&nbsp;&nbsp;&nbsp;&nbsp;The task execution architecture of Linkis 1.0 went through many evolutions: from the early days, when frequent Full GC crashed the service under heavy user load, to making the scripts developed by users support multiple platforms, multiple tenants, strong control and highly concurrent execution. Along the way the following problems had to be solved:
+1. How to support tens of thousands of concurrent tenants while keeping them isolated from each other?
+2. How to support a unified context, so that user-defined UDFs, custom variables, etc. can be used across multiple systems?
+3. How to support high availability so that the tasks submitted by users run normally?
+4. How to push the task's underlying engine logs, progress and status to the front end in real time?
+5. How to support submitting multiple types of tasks: SQL, Python, Shell, Scala, Java, etc.?
+
+### 1.2 Linkis task execution design
+&nbsp;&nbsp;&nbsp;&nbsp;Based on the above five problems, Linkis divides the execution of an OLAP task into four stages:
+1. Submission stage: from the application submitting the task to Linkis' CG-Entrance service, through task persistence (PS-JobHistory) and the various task interceptors (dangerous syntax checking, variable substitution, parameter checking), to putting the task under producer-consumer concurrency control;
+2. Preparation stage: the task is scheduled by the Scheduler in Entrance to the Orchestrator module for orchestration, and an EngineConn is requested from LinkisMaster; during this process the tenant's resources are managed and controlled;
+3. Execution stage: the task is submitted from Orchestrator to EngineConn, and EngineConn submits it to the underlying engine for execution while pushing the task information to the caller in real time;
+4. Result return stage: results are returned to the caller; both JSON and IO streams are supported for returning result sets.
+   The overall task execution architecture of Linkis is shown in the following figure:
+   ![arc](/Images/Architecture/Job_submission_preparation_and_execution_process/linkis_job_arc.png)
+
+## 2. Introduction to the task execution process
+&nbsp;&nbsp;&nbsp;&nbsp;First, here is a brief introduction to the processing flow of an OLAP task. The overall execution flow of a task is shown in the following figure:
+![flow](/Images/Architecture/Job_submission_preparation_and_execution_process/linkis_job_flow.png)
+
+&nbsp;&nbsp;&nbsp;&nbsp;The whole task involves all of the computing governance services. After the Gateway forwards the task to Linkis' entry service Entrance, it goes through multi-level scheduling based on the task's labels (producer-consumer mode) and is scheduled and executed in FIFO order. Entrance then submits the task to Orchestrator for orchestration and submission, and Orchestrator requests the EC from LinkisMaster. During this p [...]
+
+
+### 2.1 Job submission stage
+&nbsp;&nbsp;&nbsp;&nbsp;In the job submission stage, Linkis supports multiple types of tasks (SQL, Python, Shell, Scala, Java, etc.) and multiple submission interfaces (Restful/JDBC/Python/Shell, etc.). A submitted task mainly contains the task code, labels, parameters and other information. The following is a Restful example:
+Initiate a Spark SQL task through the Restful interface:
+````JSON
+"method": "/api/rest_j/v1/entrance/submit",
+"data": {
+  "executionContent": {
+    "code": "select * from table01",
+    "runType": "sql"
+  },
+  "params": {
+    "variable": {// task variable
+      "testvar": "hello"
+    },
+    "configuration": {
+      "runtime": {// task runtime params
+        "jdbc.url": "XX"
+      },
+      "startup": { // ec start up params
+        "spark.executor.cores": "4"
+      }
+    }
+  },
+  "source": { //task source information
+    "scriptPath": "file:///tmp/hadoop/test.sql"
+  },
+  "labels": {
+    "engineType": "spark-2.4.3",
+    "userCreator": "hadoop-IDE"
+  }
 }
-```
-
-2. After Linkis-Gateway receives the request, according to the serviceName in the URI ``/api/rest_j/v1/${serviceName}/.+``, it will confirm the microservice name for routing and forwarding. Here Linkis-Gateway will parse out the  name as entrance and  Job is forwarded to the Entrance microservice. It should be noted that if the user specifies a routing label, the Entrance microservice instance with the corresponding label will be selected for forwarding according to the routing label ins [...]
-3. After Entrance receives the Job request, it will first simply verify the legitimacy of the request, then use RPC to call JobHistory to persist the job information, and then encapsulate the Job request as a computing task, put it in the scheduling queue, and wait for it to be consumed by consumption thread.
-4. The scheduling queue will open up a consumption queue and a consumption thread for each group. The consumption queue is used to store the user computing tasks that have been preliminarily encapsulated. The consumption thread will continue to take computing tasks from the consumption queue for consumption in a FIFO manner. The current default grouping method is Creator + User (that is, submission system + user). Therefore, even if it is the same user, as long as it is a computing task  [...]
-5. After the consuming thread takes out the calculation task, it will submit the calculation task to Orchestrator, which officially enters the preparation phase.
-
-## 2. Preparation Stage
-
-There are two main processes in the preparation phase. One is to apply for an available EngineConn from LinkisManager to submit and execute the following computing tasks. The other is Orchestrator to orchestrate the computing tasks submitted by Entrance, and to convert a user's computing request into a physical execution tree and handed over to the execution phase where a computing task actually being executed. 
-
-#### 2.1 Apply to LinkisManager for available EngineConn
-
-If the user has a reusable EngineConn in LinkisManager, the EngineConn is directly locked and returned to Orchestrator, and the entire application process ends.
-
-How to define a reusable EngineConn? It refers to those that can match all the label requirements of the computing task, and the EngineConn's own health status is Healthy (the load is low and the actual status is Idle). Then, all the EngineConn that meets the conditions are sorted and selected according to the rules, and finally the best one is locked.
-
-If the user does not have a reusable EngineConn, a process to request a new EngineConn will be triggered at this time. Regarding the process, please refer to: [How to add an EngineConn](engine/add-an-engine-conn.md).
-
-#### 2.2 Orchestrate a computing task
-
-Orchestrator is mainly responsible for arranging a computing task (JobReq) into a physical execution tree (PhysicalTree) that can be actually executed, and providing the execution capabilities of the Physical tree.
-
-Here we first focus on Orchestrator's computing task scheduling capabilities. A flow chart is shown below:
-
-![Orchestration flow chart](/Images/Architecture/Job_submission_preparation_and_execution_process/orchestrate.png)
-
-The main process is as follows:
-
-- Converter: Complete the conversion of the JobReq (task request) submitted by the user to Orchestrator's ASTJob. This step will perform parameter check and information supplementation on the calculation task submitted by the user, such as variable replacement, etc.
-- Parser: Complete the analysis of ASTJob. Split ASTJob into an AST tree composed of ASTJob and ASTStage.
-- Validator: Complete the inspection and information supplement of ASTJob and ASTStage, such as code inspection, necessary Label information supplement, etc.
-- Planner: Convert an AST tree into a Logical tree. The Logical tree at this time has been composed of LogicalTask, which contains all the execution logic of the entire computing task.
-- Optimizer: Convert a Logical tree to a Physical tree and optimize the Physical tree.
-
-In a physical tree, the majority of nodes are computing strategy logic. Only the middle ExecTask truly encapsulates the execution logic which will be further submitted to and executed at EngineConn. As shown below:
-
-![Physical Tree](/Images/Architecture/Job_submission_preparation_and_execution_process/physical_tree.png)
-
-Different computing strategies have different execution logics encapsulated by JobExecTask and StageExecTask in the Physical tree.
-
-The execution logic encapsulated by JobExecTask and StageExecTask in the Physical tree depends on the  specific type of computing strategy.
-
-For example, under the multi-active computing strategy, for a computing task submitted by a user, the execution logic submitted to EngineConn of different clusters for execution is encapsulated in two ExecTasks, and the related strategy logic is reflected in the parent node (StageExecTask(End)) of the two ExecTasks.
-
-Here, we take the multi-reading scenario under the multi-active computing strategy as an example.
-
-In multi-reading scenario, only one result of ExecTask is required to return. Once the result is returned , the Physical tree can be marked as successful. However, the Physical tree only has the ability to execute sequentially according to dependencies, and cannot terminate the execution of each node. Once a node is canceled or fails to execute, the entire Physical tree will be marked as failure. At this time, StageExecTask (End) is needed to ensure that the Physical tree can not only ca [...]
-
-The orchestration process of Linkis Orchestrator is similar to many SQL parsing engines (such as Spark, Hive's SQL parser). But in fact, the orchestration capability of Linkis Orchestrator is realized based on the computing governance field for the different computing governance needs of users. The SQL parsing engine is a parsing orchestration oriented to the SQL language. Here is a simple distinction:
-
-1. What Linkis Orchestrator mainly wants to solve is the orchestration requirements caused by different computing tasks for computing strategies. For example, in order to be multi-active, Orchestrator will submit a calculation task for the user, based on the "multi-active" computing strategy requirements, compile a physical tree, so as to submit to multiple clusters to perform this calculation task. And in the process of constructing the entire Physical tree, various possible abnormal sc [...]
-2. The orchestration ability of Linkis Orchestrator has nothing to do with the programming language. In theory, as long as an engine has adapted to Linkis, all the programming languages it supports can be orchestrated, while the SQL parsing engine only cares about the analysis and execution of SQL, and is only responsible for parsing a piece of SQL into one executable Physical tree, and finally calculate the result.
-3. Linkis Orchestrator also has the ability to parse SQL, but SQL parsing is just one of Orchestrator Parser's analytic implementations for the SQL programming language. The Parser of Linkis Orchestrator also considers introducing Apache Calcite to parse SQL. It supports splitting a user SQL that spans multiple computing engines (must be a computing engine that Linkis has docked) into multiple sub SQLs and submitting them to each corresponding engine during the execution phase. Finally,  [...]
-
-<!--
-#todo  Orchestrator documentation is not ready yet 
-Please refer to [Orchestrator Architecture Design]() for more details. 
--->
-
-After the analysis and arrangement of Linkis Orchestrator, the  computing task has been transformed into a executable physical tree. Orchestrator will submit the Physical tree to Orchestrator's Execution module and enter the final execution stage.
-
-## 3. Execution Stage
-
-The execution stage is mainly divided into the following two steps, these two steps are the last two phases of capabilities provided by Linkis Orchestrator:
-
-![Flow chart of the execution stage](/Images/Architecture/Job_submission_preparation_and_execution_process/execution.png)
-
-The main process is as follows:
-
-- Execution: Analyze the dependencies of the Physical tree, and execute them sequentially from the leaf nodes according to the dependencies.
-- Reheater: Once the execution of a node in the Physical tree is completed, it will trigger a reheat. Reheating allows the physical tree to be dynamically adjusted according to the real-time execution.For example: it is detected that a leaf node fails to execute, and it supports retry (if it is caused by throwing ReTryExecption), the Physical tree will be automatically adjusted, and a retry parent node with exactly the same content is added to the leaf node .
-
-Let us go back to the Execution stage, where we focus on the execution logic of the ExecTask node that encapsulates the user computing task submitted to EngineConn.
-
-1. As mentioned earlier, the first step in the preparation phase is to obtain a usable EngineConn from LinkisManager. After ExecTask gets this EngineConn, it will submit the user's computing task to EngineConn through an RPC request.
-2. After EngineConn receives the computing task, it will asynchronously submit it to the underlying computing storage engine through the thread pool, and then immediately return an execution ID.
-3. After ExecTask gets this execution ID, it can then use the ID to asynchronously pull the execution status of the computing task (such as: status, progress, log, result set, etc.).
-4. At the same time, EngineConn will monitor the execution of the underlying computing storage engine in real time through multiple registered Listeners. If the computing storage engine does not support registering Listeners, EngineConn will start a daemon thread for the computing task and periodically pull the execution status from the computing storage engine.
-5. EngineConn will pull the execution status back to the microservice where Orchestrator is located in real time through RCP request.
-6. After the Receiver of the microservice receives the execution status, it will broadcast it through the ListenerBus, and the Orchestrator Execution will consume the event and dynamically update the execution status of the Physical tree.
-7. The result set generated by the calculation task will be written to storage media such as HDFS at the EngineConn side. EngineConn returns only the result set path through RPC, Execution consumes the event, and broadcasts the obtained result set path through ListenerBus, so that the Listener registered by Entrance with Orchestrator can consume the result set path and write the result set path Persist to JobHistory.
-8. After the execution of the computing task on the EngineConn side is completed, through the same logic, the Execution will be triggered to update the state of the ExecTask node of the Physical tree, so that the Physical tree will continue to execute until the entire tree is completely executed. At this time, Execution will broadcast the completion status of the calculation task through ListenerBus.
-9. After the Entrance registered Listener with the Orchestrator consumes the state event, it updates the job state to JobHistory, and the entire task execution is completed.
-
-----
-
-Finally, let's take a look at how the client side knows the state of the calculation task and obtains the calculation result in time, as shown in the following figure:
-
-![Results acquisition process](/Images/Architecture/Job_submission_preparation_and_execution_process/result_acquisition.png)
-
-The specific process is as follows:
-
-1. The client periodically polls to request Entrance to obtain the status of the computing task.
-2. Once the status is flipped to success, it sends a request for job information to JobHistory, and gets all the result set paths.
-3. Initiate a query file content request to PublicService through the result set path, and obtain the content of the result set.
-
-Since then, the entire process of  job submission -> preparation -> execution have been completed.
-
+````
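+
+Besides the Restful interface above, the same task can be submitted through the other entrances mentioned earlier; as a sketch, a roughly equivalent linkis-cli call (the gateway address and submit user are placeholders) might be:
+
+````shell
+# Hypothetical linkis-cli submission of the same Spark SQL task
+sh bin/linkis-cli -submitUser hadoop -engineType spark-2.4.3 -codeType sql \
+  -code "select * from table01" --gatewayUrl http://127.0.0.1:9001
+````
+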
+1. The task is first submitted to Linkis' gateway service linkis-mg-gateway. The Gateway forwards it to the corresponding Entrance service according to whether the task carries a routeLabel; if there is no routeLabel, it is forwarded to a random Entrance service.
+2. After Entrance receives the job, it calls the JobHistory module in the PES via RPC to persist the job information, parses the parameters and code to replace custom variables, and submits the job to the scheduler (FIFO scheduling by default). The scheduler groups tasks by label, and tasks with different labels do not affect each other's scheduling.
+3. After the task is consumed by the FIFO scheduler, it is submitted to the Orchestrator for orchestration and execution, and the submission stage of the task is completed.
+   A brief description of the main classes involved:
+````
+EntranceRestfulApi: Controller class of the Entrance service, providing operations such as task submission, status, logs, results, job information, task kill, etc.
+EntranceServer: task submission entry that completes task persistence, task interception and parsing (EntranceInterceptors), and submits the task to the scheduler
+EntranceContext: Entrance's context-holding class, providing methods for obtaining the scheduler, task parsing interceptors, logManager, persistence, listenBus, etc.
+FIFOScheduler: FIFO scheduler for scheduling tasks
+EntranceExecutor: the scheduled executor; after a task is scheduled it is submitted to the EntranceExecutor for execution
+EntranceJob: the job scheduled by the scheduler; it is generated from the user-submitted JobRequest by the EntranceParser and corresponds to the JobRequest one to one
+````
+At this point the task status is queued.
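+
+Once queued, the caller can already poll the task status through Entrance's restful interface; a sketch, assuming the standard Entrance status interface (the execID and gateway address are placeholders):
+
+````shell
+# Hypothetical status poll using the execID returned by the submit request
+curl -b cookies.txt "http://localhost:9001/api/rest_j/v1/entrance/${EXEC_ID}/status"
+````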
+
+### 2.2 Job preparation stage
+&nbsp;&nbsp;&nbsp;&nbsp;Entrance's scheduler generates different consumers to consume tasks according to the labels in the job. When a task is consumed and its status is changed to Running, it enters the preparation stage and the corresponding preparation phase of the task begins. This stage mainly involves the following services: Entrance, LinkisMaster, EnginePluginServer, EngineConnManager and EngineConn, which are introduced separately below.
+### 2.2.1 Entrance steps:
+1. The consumer (FIFOUserConsumer) consumes tasks according to the concurrency limit configured for the corresponding label, and schedules the consumed task to the Orchestrator for execution.
+2. Orchestrator first orchestrates the submitted task. For ordinary single-engine tasks such as Hive and Spark, this mainly means task parsing, label checking and verification; for mixed computing across multiple data sources, the task is split into sub-tasks that are submitted to the different data sources for execution.
+3. Another important job of the Orchestrator in the preparation stage is to request from LinkisMaster the EngineConn used to execute the task. If LinkisMaster has a matching EngineConn that can be reused, it is returned directly; otherwise a new EngineConn is created.
+4. Orchestrator submits the task to the obtained EngineConn for execution. The preparation stage ends and the job execution stage begins.
+   A brief description of the main classes involved:
+
+````
+## Entrance
+FIFOUserConsumer: the consumer of the scheduler; a different consumer is generated for each label combination, such as IDE-hadoop and spark-2.4.3. It consumes submitted tasks and controls the number of tasks running at the same time; the concurrency is configured through the label parameter wds.linkis.rm.instance
+DefaultEntranceExecutor: the entry point for task execution, which initiates the call to the orchestrator: callExecute
+JobReq: the task object accepted by the scheduler, converted from EntranceJob and mainly containing code, label information, parameters, etc.
+OrchestratorSession: similar to SparkSession, it is the entry point of the orchestrator; it is normally a singleton
+Orchestration: the object returned when the OrchestratorSession orchestrates a JobReq; it supports execution, printing the execution plan, etc.
+OrchestrationFuture: the return value when an Orchestration is executed asynchronously, providing common methods such as cancel, waitForCompleted and getResponse
+Operation: an interface used to extend operations on tasks; LogOperation for obtaining logs and ProgressOperation for obtaining progress have been implemented so far
+
+## Orchestrator
+CodeLogicalUnitExecTask: the execution entry for code-type tasks. When the task is finally scheduled and run, the execute method of this class is called; it first requests an EngineConn from LinkisMaster and then submits the task for execution
+DefaultCodeExecTaskExecutorManager: responsible for managing the EngineConns of code-type tasks, including requesting and releasing EngineConn
+ComputationEngineConnManager: responsible for interacting with LinkisMaster to request and release EngineConn
+````
+
+### 2.2.2 LinkisMaster steps:
+
+1. LinkisMaster receives the EngineConn request from the Entrance service and processes it
+2. It checks whether there is an EngineConn matching the corresponding labels that can be reused, and returns it directly if so
+3. If not, it enters the EngineConn creation process:
+- First, select the appropriate EngineConnManager service through the labels.
+- Then obtain the resource type and the amount of resources required by the requested EngineConn by calling EnginePluginServer.
+- Based on the resource type and amount, determine whether the corresponding label still has enough resources; if so, proceed with creation, otherwise throw a retry exception.
+- Request the EngineConnManager selected in the first step to start the EngineConn.
+- Wait for the EngineConn to become idle and return the created EngineConn; otherwise determine whether the exception can be retried.
+
+4. Lock the created EngineConn and return it to Entrance. Note that the EC request is asynchronous: Entrance receives a request ID right after sending it, and once LinkisMaster has finished processing, it actively pushes the result back to the corresponding Entrance service.
+
+A brief description of the main classes involved:
+````
+## LinkisMaster
+EngineAskEngineService: the LinkisMaster class responsible for processing engine requests. Its main logic calls EngineReuseService to check whether there is an EngineConn that can be reused, and otherwise calls EngineCreateService to create a new EngineConn
+EngineCreateService: responsible for creating the EngineConn
+
+
+## LinkisEnginePluginServer
+EngineConnLaunchService: provides the ECM with the startup information of the EngineConn for the corresponding engine type
+EngineConnResourceFactoryService: provides LinkisMaster with the resources required to start the EngineConn for this task
+EngineConnResourceService: responsible for managing engine materials, including refreshing a single engine's materials and refreshing all materials
+
+## EngineConnManager
+AbstractEngineConnLaunchService: responsible for accepting the EngineConn start request from LinkisMaster and completing the startup of the EngineConn engine
+ECMHook: used to handle the pre and post operations around EngineConn startup, for example adding the Hive UDF jar to the classpath with which EngineConn is started
+````
+
+
+It should be noted here that if the user already has an available idle engine that can be reused, the four steps 1, 2, 3 and 4 above are skipped.
+
+### 2.3 Job execution phase
+&nbsp;&nbsp;&nbsp;&nbsp;Once the orchestrator in the Entrance service has obtained the EngineConn, the execution stage begins. CodeLogicalUnitExecTask submits the task to the EngineConn, and the EngineConn creates different executors according to the corresponding CodeLanguageLabel to execute it. The main steps are as follows:
+1. CodeLogicalUnitExecTask submits the task to EngineConn via RPC
+2. EngineConn checks whether an executor for the corresponding CodeLanguageLabel exists, and creates one if not
+3. The task is submitted to the Executor, which executes it by connecting to the specific underlying engine; for example, the Spark engine submits SQL, PySpark and Scala tasks through its SparkSession
+4. Task status transitions are pushed to the Entrance service in real time
+5. SendAppender, implemented as a log4j Appender, pushes logs to the Entrance service via RPC
+6. Task progress and resource information are pushed to Entrance in real time through scheduled tasks
+
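+The logs pushed to Entrance in step 5 can then be pulled by the caller from Entrance; a sketch, assuming the standard Entrance log interface (the execID and gateway address are placeholders):
+
+````shell
+# Hypothetical log pull for a running task
+curl -b cookies.txt "http://localhost:9001/api/rest_j/v1/entrance/${EXEC_ID}/log?fromLine=0&size=100"
+````
+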
+A brief description of the main classes involved:
+````
+ComputationTaskExecutionReceiver: the service class on the Entrance (orchestrator) side that receives all RPC requests from EngineConn; the received progress, logs, status and result sets are pushed to the final caller through the ListenerBus
+TaskExecutionServiceImpl: the service class through which EngineConn receives all RPC requests from Entrance, including task execution, status query, task kill, etc.
+ComputationExecutor: the parent class for concrete task execution; for Spark, for example, it is split into SQL/Python/Scala executors
+ComputationExecutorHook: hooks before and after Executor creation, such as initializing UDFs, executing the default UseDB, etc.
+EngineConnSyncListener: ResultSetListener/TaskProgressListener/TaskStatusListener, used to monitor the progress, result sets and status of the Executor while the task is executing
+SendAppender: Responsible for pushing logs from EngineConn to Entrance
+````
+### 2.4 Job result push stage
+&nbsp;&nbsp;&nbsp;&nbsp;This stage is relatively simple: it returns the result sets generated by the task in EngineConn to the client. The main steps are as follows:
+1. When EngineConn executes the task, the result set is written out. By default it is written to the file system, from which the corresponding path is obtained; memory caching is also supported.
+2. EngineConn returns the result set path and the number of result sets to Entrance
+3. Entrance calls JobHistory to update the result set path in the task table
+4. The client obtains the result set path from the task information and reads the result set, as sketched below
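+
+From the client side, step 4 typically comes down to two restful calls; a sketch, assuming the standard JobHistory and filesystem interfaces (the taskID, result path and gateway address are placeholders):
+
+````shell
+# Hypothetical calls: fetch the job info (which contains the result set location), then read one result file
+curl -b cookies.txt "http://localhost:9001/api/rest_j/v1/jobhistory/${TASK_ID}/get"
+curl -b cookies.txt "http://localhost:9001/api/rest_j/v1/filesystem/openFile?path=${RESULT_SET_PATH}"
+````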
+   A brief description of the main classes involved:
+````
+EngineExecutionContext: responsible for creating the result set and pushing the result set to the Entrance service
+ResultSetWriter: responsible for writing result sets to the file systems supported by linkis-storage; local and HDFS are currently supported. Supported result set types include table, text, HTML, image, etc.
+JobHistory: stores all information of the task, including status, result path and metric information, and corresponds to the entity class in the DB
+ResultSetReader: The key class for reading the result set
+````
+
+## 3. Summary
+&nbsp;&nbsp;&nbsp;&nbsp;Above we introduced the entire execution process of an OLAP task in the Linkis Computing Governance Services group (CGS). Following the way a task request is processed, the task is divided into four stages: submission, preparation, execution, and result return. CGS is designed and implemented around these four stages, serving each of them and providing powerful and flexible capabilities for every stage. In the submission stage, it mainly pr [...]
+
+![time](/Images/Architecture/Job_submission_preparation_and_execution_process/linkis_job_time.png)
\ No newline at end of file
diff --git a/docs/architecture/computation-governance-services/linkis-manager/app-manager.md b/docs/architecture/computation-governance-services/linkis-manager/app-manager.md
index b14fa7fe86..04040528e0 100644
--- a/docs/architecture/computation-governance-services/linkis-manager/app-manager.md
+++ b/docs/architecture/computation-governance-services/linkis-manager/app-manager.md
@@ -1,6 +1,6 @@
 ---
 title: App Manager
-sidebar_position: 3
+sidebar_position: 1
 ---
 
 ## 1. Background
diff --git a/docs/architecture/computation-governance-services/linkis-manager/ec-history-arc.md b/docs/architecture/computation-governance-services/linkis-manager/ec-history-arc.md
new file mode 100644
index 0000000000..c7c11a3e0f
--- /dev/null
+++ b/docs/architecture/computation-governance-services/linkis-manager/ec-history-arc.md
@@ -0,0 +1,79 @@
+---
+title: EC History List Architecture Design
+sidebar_position: 4
+---
+
+## 1. General
+### Requirements Background
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Currently LinkisManager only records the information and resource usage of running EngineConns, and this information is lost once the task ends. This makes it inconvenient to gather statistics on historical ECs, view them, or view the logs of ECs that have already finished, so recording historical ECs is important.
+### Target
+1. Persist EC information and resource information to DB storage
+2. Support viewing and searching historical EC information through restful interfaces
+3. Support viewing the logs of ECs that have ended
+
+## 2. Design
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;This feature mainly modifies the RM and AM modules under LinkisManager and adds a new information record table. It is described in detail below.
+
+### 2.1 Technical Architecture
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;This implementation needs to record EC information and resource information, and the resource information covers three concepts that all need to be recorded: requested resources, actually used resources, and released resources. The implementation is therefore based on the life cycle of the EC in the ResourceManager: when the EC completes each of the above three stages, an update of the EC information is performed. The whole is shown [...]
+![arc](/Images/Architecture/LinkisManager/ecHistoryArc.png)
+
+### 2.2 Business Architecture
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;This feature mainly records the information of historical ECs and supports viewing the logs of historical ECs. The modules involved in each function point are as follows:
+
+| Component name | First-level module | Second-level module | Function point |
+|---|---|---|---|
+| Linkis | LinkisManager | ResourceManager| Complete the EC information record when the EC requests resources, reports the use of resources, and releases resources|
+| Linkis | LinkisManager | AppManager| Provides an interface to list and search all historical EC information|
+
+## 3. Module Design
+### Core execution flow
+[Input] The input is mainly the information provided when the engine is created, when resources are requested, when the actual resource usage is reported after the engine starts, and when resources are released as the engine exits; it mainly includes the requested labels, the resource, the EC's unique ticketId, and the resource type.
+[Processing] The information recording service processes the input data, parses the corresponding engine information, user, creator and log path from the labels, determines from the resource type whether it is a resource request, usage or release, and then stores the information in the DB.
+The call sequence diagram is as follows:
+![Time](/Images/Architecture/LinkisManager/ecHistoryTime.png)
+
+
+## 4. DDL:
+```sql
+# EC information resource record table
+DROP TABLE IF EXISTS `linkis_cg_ec_resource_info_record`;
+CREATE TABLE `linkis_cg_ec_resource_info_record` (
+    `id` INT(20) NOT NULL AUTO_INCREMENT,
+    `label_value` VARCHAR(255) NOT NULL COMMENT 'ec labels stringValue',
+    `create_user` VARCHAR(128) NOT NULL COMMENT 'ec create user',
+    `service_instance` varchar(128) COLLATE utf8_bin DEFAULT NULL COMMENT 'ec instance info',
+    `ecm_instance` varchar(128) COLLATE utf8_bin DEFAULT NULL COMMENT 'ecm instance info ',
+    `ticket_id` VARCHAR(100) NOT NULL COMMENT 'ec ticket id',
+    `log_dir_suffix` varchar(128) COLLATE utf8_bin DEFAULT NULL COMMENT 'log path',
+    `request_times` INT(8) COMMENT 'resource request times',
+    `request_resource` VARCHAR(255) COMMENT 'request resource',
+    `used_times` INT(8) COMMENT 'resource used times',
+    `used_resource` VARCHAR(255) COMMENT 'used resource',
+    `release_times` INT(8) COMMENT 'resource released times',
+    `released_resource` VARCHAR(255) COMMENT 'released resource',
+    `release_time` datetime DEFAULT NULL COMMENT 'released time',
+    `used_time` datetime DEFAULT NULL COMMENT 'used time',
+    `create_time` datetime DEFAULT CURRENT_TIMESTAMP COMMENT 'create time',
+    PRIMARY KEY (`id`),
+    KEY (`ticket_id`),
+    UNIQUE KEY `label_value_ticket_id` (`ticket_id`, `label_value`)
+) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin;
+````
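+
+Once the feature is running, the recorded information can be checked directly against this table; a sketch (the database connection parameters are placeholders):
+
+````shell
+# Hypothetical check: the latest historical EC records created by user hadoop
+mysql -h ${DB_HOST} -u ${DB_USER} -p ${LINKIS_DB} \
+  -e "SELECT label_value, create_user, service_instance, used_resource, create_time FROM linkis_cg_ec_resource_info_record WHERE create_user = 'hadoop' ORDER BY create_time DESC LIMIT 10;"
+````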
+## 5. Interface design:
+For the restful API behind the engine history management page (the new history engine page added to the management console), refer to:
+https://linkis.apache.org/docs/latest/api/http/linkis-cg-linkismanager-api/ec-resource-management-api
+
+## 6. Non-functional design:
+
+### 6.1 Security
+No new security issues are involved; the restful interfaces require login authentication
+
+### 6.2 Performance
+Little impact on the performance of the engine life cycle
+
+### 6.3 Capacity
+The record table requires periodic cleanup
+
+### 6.4 High Availability
+Not involved
\ No newline at end of file
diff --git a/docs/architecture/computation-governance-services/linkis-manager/resource-manager.md b/docs/architecture/computation-governance-services/linkis-manager/resource-manager.md
index aee5660902..df3ea73a8d 100644
--- a/docs/architecture/computation-governance-services/linkis-manager/resource-manager.md
+++ b/docs/architecture/computation-governance-services/linkis-manager/resource-manager.md
@@ -1,6 +1,6 @@
 ---
 title: Resource Manager
-sidebar_position: 3
+sidebar_position: 2
 ---
 
 ## 1. Background
diff --git a/docs/architecture/microservice-governance-services/gateway.md b/docs/architecture/microservice-governance-services/gateway.md
index edb97e136c..f5efacc09d 100644
--- a/docs/architecture/microservice-governance-services/gateway.md
+++ b/docs/architecture/microservice-governance-services/gateway.md
@@ -1,6 +1,6 @@
 ---
 title: Gateway Design
-sidebar_position: 3
+sidebar_position: 1
 ---
 
 ## Gateway Architecture Design
diff --git a/docs/architecture/microservice-governance-services/overview.md b/docs/architecture/microservice-governance-services/overview.md
index 783effb30c..70511115a6 100644
--- a/docs/architecture/microservice-governance-services/overview.md
+++ b/docs/architecture/microservice-governance-services/overview.md
@@ -1,6 +1,6 @@
 ---
 title: Overview
-sidebar_position: 1
+sidebar_position: 0
 ---
 
 ## **Background**
diff --git a/docs/architecture/microservice-governance-services/service_isolation.md b/docs/architecture/microservice-governance-services/service_isolation.md
new file mode 100644
index 0000000000..418dd200c6
--- /dev/null
+++ b/docs/architecture/microservice-governance-services/service_isolation.md
@@ -0,0 +1,197 @@
+---
+title: Service isolation Design
+sidebar_position: 2
+---
+
+## 1. General
+### Requirements Background
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Linkis currently load-balances with Ribbon when the Gateway forwards requests to services, but in some cases important business tasks need service-level isolation, which pure Ribbon-based balancing cannot provide. For example, tenant A may want its tasks routed to a specific Linkis-CG-Entrance service, so that when other instances become abnormal, the Entrance serving A is not affected.
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;In addition, supporting tenant- and service-level isolation also makes it possible to quickly isolate an abnormal service and to support scenarios such as grayscale upgrades.
+
+### Target
+1. Support forwarding requests to services according to the route label parsed from the request
+2. Support registering and modifying labels for services
+
+## 2. Design
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;This feature mainly modifies two modules, linkis-mg-gateway and instance-label: the Gateway adds label-based forwarding logic, and instance-label supports registering labels for services.
+
+### 2.1 Technical Architecture
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;The main modification in the overall technical architecture is that the restful request needs to carry label parameters such as the route label; when forwarding, the Gateway parses the corresponding label and completes the route-based forwarding of the interface. The whole is shown in the figure below.
+![arc](/Images/Architecture/Gateway/service_isolation_arc.png)
+
+A few notes:
+1. If multiple services are marked with the same routeLabel, the request is forwarded to one of them randomly
+2. If no service corresponds to the routeLabel, the interface call fails directly
+3. If the request does not carry a routeLabel, the original forwarding logic applies and the request will not be routed to services marked with a specific label
+
+### 2.2 Business Architecture
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;This feature mainly implements restful tenant isolation and forwarding. The modules involved in each function point are as follows:
+
+| Component name | First-level module | Second-level module | Function point |
+|---|---|---|---|
+| Linkis | MG | Gateway| Parse the route label in the restful request parameters, and complete the forwarding function of the interface according to the route label|
+| Linkis | PS | InstanceLabel| InstanceLabel service, completes the association between services and labels|
+
+## 3. Module Design
+### Core execution flow
+[Input] The input is the restful request sent to the Gateway; only requests that carry a route label in their parameters receive special processing.
+[Processing] The Gateway determines whether the request carries a RouteLabel, and if so, the request is forwarded based on that RouteLabel.
+The call sequence diagram is as follows:
+
+![Time](/Images/Architecture/Gateway/service_isolation_time.png)
+
+
+
+## 4. DDL:
+```sql
+DROP TABLE IF EXISTS `linkis_ps_instance_label`;
+CREATE TABLE `linkis_ps_instance_label` (
+  `id` int(20) NOT NULL AUTO_INCREMENT,
+  `label_key` varchar(32) COLLATE utf8_bin NOT NULL COMMENT 'string key',
+  `label_value` varchar(255) COLLATE utf8_bin NOT NULL COMMENT 'string value',
+  `label_feature` varchar(16) COLLATE utf8_bin NOT NULL COMMENT 'store the feature of label, but it may be redundant',
+  `label_value_size` int(20) NOT NULL COMMENT 'size of key -> value map',
+  `update_time` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'update unix timestamp',
+  `create_time` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'update unix timestamp',
+  PRIMARY KEY (`id`),
+  UNIQUE KEY `label_key_value` (`label_key`,`label_value`)
+) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin;
+
+DROP TABLE IF EXISTS `linkis_ps_instance_info`;
+CREATE TABLE `linkis_ps_instance_info` (
+  `id` int(11) NOT NULL AUTO_INCREMENT,
+  `instance` varchar(128) COLLATE utf8_bin DEFAULT NULL COMMENT 'structure like ${host|machine}:${port}',
+  `name` varchar(128) COLLATE utf8_bin DEFAULT NULL COMMENT 'equal application name in registry',
+  `update_time` datetime DEFAULT CURRENT_TIMESTAMP COMMENT 'update unix timestamp',
+  `create_time` datetime DEFAULT CURRENT_TIMESTAMP COMMENT 'create unix timestamp',
+  PRIMARY KEY (`id`),
+  UNIQUE KEY `instance` (`instance`)
+) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin;
+
+DROP TABLE IF EXISTS `linkis_ps_instance_label_relation`;
+CREATE TABLE `linkis_ps_instance_label_relation` (
+  `id` int(20) NOT NULL AUTO_INCREMENT,
+  `label_id` int(20) DEFAULT NULL COMMENT 'id reference linkis_ps_instance_label -> id',
+  `service_instance` varchar(128) NOT NULL COLLATE utf8_bin COMMENT 'structure like ${host|machine}:${port}',
+  `update_time` datetime DEFAULT CURRENT_TIMESTAMP COMMENT 'update unix timestamp',
+  `create_time` datetime DEFAULT CURRENT_TIMESTAMP COMMENT 'create unix timestamp',
+  PRIMARY KEY (`id`)
+) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin;
+
+````
+## 5. How to use:
+
+### add route label for entrance
+
+````
+echo "spring.eureka.instance.metadata-map.route=et1" >> $LINKIS_CONF_DIR/linkis-cg-entrance.properties
+sh $LINKIS_HOME/sbin/linkis-daemon.sh restart cg-entrance
+````
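+
+Whether the route metadata has been registered can be verified on the Eureka console or, as a sketch, through Eureka's standard REST interface (the Eureka address is a placeholder):
+
+````shell
+# Hypothetical check: the returned instance metadata should contain <route>et1</route>
+curl -s "http://${EUREKA_HOST}:${EUREKA_PORT}/eureka/apps/LINKIS-CG-ENTRANCE" | grep -i route
+````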
+
+![Console](/Images/Architecture/Gateway/service_isolation_console.png)
+
+### Use route label
+submit task:
+````
+url:/api/v1/entrance/submit
+{
+    "executionContent": {"code": "echo 1", "runType": "shell"},
+    "params": {"variable": {}, "configuration": {}},
+    "source": {"scriptPath": "ip"},
+    "labels": {
+        "engineType": "shell-1",
+        "userCreator": "peacewong-IDE",
+        "route": "et1"
+    }
+}
+````
+will be routed to a fixed service:
+````
+{
+    "method": "/api/entrance/submit",
+    "status": 0,
+    "message": "OK",
+    "data": {
+        "taskID": 45158,
+        "execID": "exec_id018030linkis-cg-entrancelocalhost:9205IDE_peacewong_shell_0"
+    }
+}
+````
+
+or linkis-cli:
+
+````
+sh bin/linkis-cli -submitUser hadoop -engineType shell-1 -codeType shell -code "whoami" -labelMap route=et1 --gatewayUrl http://127.0.0.1:9101
+````
+
+### Use non-existing label
+submit task:
+````
+url:/api/v1/entrance/submit
+{
+    "executionContent": {"code": "echo 1", "runType": "shell"},
+    "params": {"variable": {}, "configuration": {}},
+    "source": {"scriptPath": "ip"},
+    "labels": {
+        "engineType": "shell-1",
+        "userCreator": "peacewong-IDE",
+        "route": "et1"
+    }
+}
+````
+
+will get the error
+````
+{
+    "method": "/api/rest_j/v1/entrance/submit",
+    "status": 1,
+    "message": "GatewayErrorException: errCode: 11011 ,desc: Cannot route to the corresponding service, URL: /api/rest_j/v1/entrance/submit RouteLabel: [{\"stringValue\":\"et2\",\" labelKey\":\"route\",\"feature\":null,\"modifiable\":true,\"featureKey\":\"feature\",\"empty\":false}] ,ip: localhost ,port: 9101 ,serviceKind: linkis-mg-gateway",
+    "data": {
+        "data": "{\r\n \"executionContent\": {\"code\": \"echo 1\", \"runType\": \"shell\"},\r\n \"params \": {\"variable\": {}, \"configuration\": {}},\r\n \"source\": {\"scriptPath\": \"ip\"},\r\ n \"labels\": {\r\n \"engineType\": \"shell-1\",\r\n \"userCreator\": \"peacewong-IDE\",\r\n \" route\": \"et2\"\r\n }\r\n}"
+    }
+}
+````
+
+### without label
+submit task:
+````
+url:/api/v1/entrance/submit
+{
+    "executionContent": {"code": "echo 1", "runType": "shell"},
+    "params": {"variable": {}, "configuration": {}},
+    "source": {"scriptPath": "ip"},
+    "labels": {
+        "engineType": "shell-1",
+        "userCreator": "peacewong-IDE"
+    }
+}
+````
+
+will be routed to an untagged entrance service:
+````
+{
+    "method": "/api/entrance/submit",
+    "status": 0,
+    "message": "OK",
+    "data": {
+        "taskID": 45159,
+        "execID": "exec_id018018linkis-cg-entrancelocalhost2:9205IDE_peacewong_shell_0"
+    }
+}
+
+````
+
+## 6. Non-functional design:
+
+### 6.1 Security
+No new security issues are introduced; the restful interfaces require login authentication
+
+### 6.2 Performance
+The impact on Gateway forwarding performance is small, since the corresponding label and instance data are cached
+
+### 6.3 Capacity
+Not applicable
+
+### 6.4 High Availability
+Not applicable
\ No newline at end of file
diff --git a/docs/architecture/public-enhancement-services/context-service/content-service-cleanup.md b/docs/architecture/public-enhancement-services/context-service/content-service-cleanup.md
index e12f2af08b..1988abb813 100644
--- a/docs/architecture/public-enhancement-services/context-service/content-service-cleanup.md
+++ b/docs/architecture/public-enhancement-services/context-service/content-service-cleanup.md
@@ -1,6 +1,6 @@
 ---
 title: CS Cleanup Interface Features
-sidebar_position: 7
+sidebar_position: 9
 tags: [Feature]
 ---
 
diff --git a/docs/architecture/public-enhancement-services/context-service/context-service-cache.md b/docs/architecture/public-enhancement-services/context-service/context-service-cache.md
index 164dbc2782..2bca0c8d5e 100644
--- a/docs/architecture/public-enhancement-services/context-service/context-service-cache.md
+++ b/docs/architecture/public-enhancement-services/context-service/context-service-cache.md
@@ -1,6 +1,6 @@
 ---
 title: CS Cache Architecture
-sidebar_position: 1
+sidebar_position: 8
 ---
 
 
diff --git a/docs/architecture/public-enhancement-services/context-service/context-service-search.md b/docs/architecture/public-enhancement-services/context-service/context-service-search.md
index 2d802d0157..9d65ea6012 100644
--- a/docs/architecture/public-enhancement-services/context-service/context-service-search.md
+++ b/docs/architecture/public-enhancement-services/context-service/context-service-search.md
@@ -1,6 +1,6 @@
 ---
 title: CS Search Architecture
-sidebar_position: 5
+sidebar_position: 6
 ---
 
 ## **CSSearch Architecture**
diff --git a/docs/release.md b/docs/release.md
index 8d21584d5b..a77bed96c3 100644
--- a/docs/release.md
+++ b/docs/release.md
@@ -8,6 +8,8 @@ sidebar_position: 0.1
 - [Integrate Knife4j and enable](/deployment/involve-knife4j-into-linkis.md)
 - [Data source function module interface optimization](/api/http/linkis-ps-publicservice-api/metadataquery-api.md)
 - [JDBC engine supports data source schema](/engine-usage/jdbc.md)
+- [EC History List Architecture Design](/architecture/computation-governance-services/linkis-manager/ec-history-arc.md)
+- [Service isolation Design](/architecture/microservice-governance-services/service_isolation.md)
 - [version of Release-Notes](/download/release-notes-1.2.0)
 
 
diff --git a/docs/tuning-and-troubleshooting/error-guide/interface.md b/docs/tuning-and-troubleshooting/error-guide/interface.md
index a7d6532488..91fbcaa979 100644
--- a/docs/tuning-and-troubleshooting/error-guide/interface.md
+++ b/docs/tuning-and-troubleshooting/error-guide/interface.md
@@ -49,60 +49,60 @@ The user service address is different. We need to locate the log address first
 ![](/Images/tuning-and-troubleshooting/error-guide/logs.png)
 
 - cg-linkismanager:
->GC log:` /data/bdp/logs/linkis/linkis-cg-linkismanager-gc.log`
+>GC log:` /${LINKIS_HOME}/logs/linkis/linkis-cg-linkismanager-gc.log`
 >
->Service log:` /data/bdp/logs/linkis/linkis-cg-linkismanager.log`
+>Service log:` /${LINKIS_HOME}/logs/linkis/linkis-cg-linkismanager.log`
 >
->System out log:` /data/bdp/logs/linkis/linkis-cg-linkismanager.out`
+>System out log:` /${LINKIS_HOME}/logs/linkis/linkis-cg-linkismanager.out`
 
 - cg-engineplugin:
->GC log:` /data/bdp/logs/linkis/linkis-cg-engineplugin-gc.log`
+>GC log:` /${LINKIS_HOME}/logs/linkis/linkis-cg-engineplugin-gc.log`
 >
->Service log:` /data/bdp/logs/linkis/linkis-cg-engineplugin.log`
+>Service log:` /${LINKIS_HOME}/logs/linkis/linkis-cg-engineplugin.log`
 >
->System out log:` /data/bdp/logs/linkis/linkis-cg-engineplugin.out`
+>System out log:` /${LINKIS_HOME}/logs/linkis/linkis-cg-engineplugin.out`
 
 - cg-engineconnmanager:
->GC log:` /data/bdp/logs/linkis/linkis-cg-engineconnmanager-gc.log`
+>GC log:` /${LINKIS_HOME}/logs/linkis/linkis-cg-engineconnmanager-gc.log`
 >
->Service log:` /data/bdp/logs/linkis/linkis-cg-engineconnmanager.log`
+>Service log:` /${LINKIS_HOME}/logs/linkis/linkis-cg-engineconnmanager.log`
 >
->System out log:` /data/bdp/logs/linkis/linkis-cg-engineconnmanager.out`
+>System out log:` /${LINKIS_HOME}/logs/linkis/linkis-cg-engineconnmanager.out`
 
 - cg-entrance:
->GC log:` /data/bdp/logs/linkis/linkis-cg-entrance-gc.log`
+>GC log:` /${LINKIS_HOME}/logs/linkis/linkis-cg-entrance-gc.log`
 >
->Service log:` /data/bdp/logs/linkis/linkis-cg-entrance.log`
+>Service log:` /${LINKIS_HOME}/logs/linkis/linkis-cg-entrance.log`
 >
->System out log:` /data/bdp/logs/linkis/linkis-cg-entrance.out`
+>System out log:` /${LINKIS_HOME}/logs/linkis/linkis-cg-entrance.out`
 
 - ps-bml:
->GC log:` /data/bdp/logs/linkis/linkis-ps-bml-gc.log`
+>GC log:` /${LINKIS_HOME}/logs/linkis/linkis-ps-bml-gc.log`
 >
->Service log:` /data/bdp/logs/linkis/linkis-ps-bml.log`
+>Service log:` /${LINKIS_HOME}/logs/linkis/linkis-ps-bml.log`
 >
->System out log:` /data/bdp/logs/linkis/linkis-ps-bml.out`
+>System out log:` /${LINKIS_HOME}/logs/linkis/linkis-ps-bml.out`
 
 - ps-cs:
->GC log:` /data/bdp/logs/linkis/linkis-ps-cs-gc.log`
+>GC log:` /${LINKIS_HOME}/logs/linkis/linkis-ps-cs-gc.log`
 >
->Service log:` /data/bdp/logs/linkis/linkis-ps-cs.log`
+>Service log:` /${LINKIS_HOME}/logs/linkis/linkis-ps-cs.log`
 >
->System out log:` /data/bdp/logs/linkis/linkis-ps-cs.out`
+>System out log:` /${LINKIS_HOME}/logs/linkis/linkis-ps-cs.out`
 
 - ps-datasource:
->GC log:` /data/bdp/logs/linkis/linkis-ps-datasource-gc.log`
+>GC log:` /${LINKIS_HOME}/logs/linkis/linkis-ps-datasource-gc.log`
 >
->Service log:` /data/bdp/logs/linkis/linkis-ps-datasource.log`
+>Service log:` /${LINKIS_HOME}/logs/linkis/linkis-ps-datasource.log`
 >
->System out log:` /data/bdp/logs/linkis/linkis-ps-datasource.out`
+>System out log:` /${LINKIS_HOME}/logs/linkis/linkis-ps-datasource.out`
 
 - ps-publicservice:
->GC log:` /data/bdp/logs/linkis/linkis-ps-publicservice-gc.log`
+>GC log:` /${LINKIS_HOME}/logs/linkis/linkis-ps-publicservice-gc.log`
 >
->Service log:` /data/bdp/logs/linkis/linkis-ps-publicservice.log`
+>Service log:` /${LINKIS_HOME}/logs/linkis/linkis-ps-publicservice.log`
 >
->System out log:` /data/bdp/logs/linkis/linkis-ps-publicservice.out`
+>System out log:` /${LINKIS_HOME}/logs/linkis/linkis-ps-publicservice.out`
 
 ###  4. view log
 Display the error message corresponding to the interface
diff --git a/docs/tuning-and-troubleshooting/overview.md b/docs/tuning-and-troubleshooting/overview.md
index 3d2ba065a5..6df8dac334 100644
--- a/docs/tuning-and-troubleshooting/overview.md
+++ b/docs/tuning-and-troubleshooting/overview.md
@@ -32,7 +32,7 @@ On the homepage of the github community, the issue column retains some of the pr
 
 ### Ⅲ. "Q\&A Question Summary"
 
-"Linkis 1.0 FAQ", this document contains a summary of common problems and solutions during the installation and deployment process.
+[FAQ](/faq/main), this document contains a summary of common problems and solutions during the installation and deployment process.
 
 ### Ⅳ. Locating system log
 
@@ -61,9 +61,6 @@ Generally, errors can be divided into three stages: an error is reported when in
     │ ├── linkis-cg-entrance
     │ └── linkis-cg-linkismanager
     ├── linkis-public-enhancements
-    │ ├── linkis-ps-bml
-    │ ├── linkis-ps-cs
-    │ ├── linkis-ps-datasource
     │ └── linkis-ps-publicservice
     └── linkis-spring-cloud-services
     │ ├── linkis-mg-eureka
diff --git a/docs/user-guide/sdk-manual.md b/docs/user-guide/sdk-manual.md
index 30f36b433c..15130ce520 100644
--- a/docs/user-guide/sdk-manual.md
+++ b/docs/user-guide/sdk-manual.md
@@ -55,18 +55,18 @@ public class LinkisClientTest {
             .discoveryFrequency(1, TimeUnit.MINUTES)  // discovery frequency
             .loadbalancerEnabled(true)  // enable loadbalance
             .maxConnectionSize(5)   // set max Connection
-            .retryEnabled(false) // set retry
+            .retryEnabled(true) // set retry
             .readTimeout(30000)  //set read timeout
             .setAuthenticationStrategy(new StaticAuthenticationStrategy())   //AuthenticationStrategy Linkis authen suppory static and Token
             .setAuthTokenKey("hadoop")  // set submit user
-            .setAuthTokenValue("hadoop")))  // set passwd or token (setAuthTokenValue("BML-AUTH"))
+            .setAuthTokenValue("hadoop")))  // set passwd or token (setAuthTokenValue("test"))
             .setDWSVersion("v1") //linkis rest version v1
             .build();
 
     // 2. new Client(Linkis Client) by clientConfig
     private static UJESClient client = new UJESClientImpl(clientConfig);
 
-    public static void main(String[] args){
+    public static void main(String[] args) {
 
         String user = "hadoop"; // execute user
         String executeCode = "df=spark.sql(\"show tables\")\n" +
@@ -76,14 +76,13 @@ public class LinkisClientTest {
             System.out.println("user : " + user + ", code : [" + executeCode + "]");
             // 3. build job and execute
             JobExecuteResult jobExecuteResult = toSubmit(user, executeCode);
-            //0.x:JobExecuteResult jobExecuteResult = toExecute(user, executeCode);
             System.out.println("execId: " + jobExecuteResult.getExecID() + ", taskId: " + jobExecuteResult.taskID());
             // 4. get job jonfo
             JobInfoResult jobInfoResult = client.getJobInfo(jobExecuteResult);
             int sleepTimeMills = 1000;
             int logFromLen = 0;
             int logSize = 100;
-            while(!jobInfoResult.isCompleted()) {
+            while (!jobInfoResult.isCompleted()) {
                 // 5. get progress and log
                 JobProgressResult progress = client.progress(jobExecuteResult);
                 System.out.println("progress: " + progress.getProgress());
@@ -100,24 +99,23 @@ public class LinkisClientTest {
             // multiple result sets will be generated)
             String resultSet = jobInfo.getResultSetList(client)[0];
             // 7. get resultContent
-            Object fileContents = client.resultSet(ResultSetAction.builder().setPath(resultSet).setUser(jobExecuteResult.getUser()).build()).getFileContent();
-            System.out.println("res: " + fileContents);
+            ResultSetResult resultSetResult = client.resultSet(ResultSetAction.builder().setPath(resultSet).setUser(jobExecuteResult.getUser()).build());
+            System.out.println("metadata: " + resultSetResult.getMetadata()); // column name type
+            System.out.println("res: " + resultSetResult.getFileContent()); //row data
         } catch (Exception e) {
-            e.printStackTrace();
+            e.printStackTrace();// please use log
             IOUtils.closeQuietly(client);
         }
         IOUtils.closeQuietly(client);
     }
 
-    /**
-     * Linkis 1.0 recommends the use of Submit method
-     */
+   
     private static JobExecuteResult toSubmit(String user, String code) {
         // 1. build  params
         // set label map :EngineTypeLabel/UserCreatorLabel/EngineRunTypeLabel/Tenant
         Map<String, Object> labels = new HashMap<String, Object>();
         labels.put(LabelKeyConstant.ENGINE_TYPE_KEY, "spark-2.4.3"); // required engineType Label
-        labels.put(LabelKeyConstant.USER_CREATOR_TYPE_KEY, user + "-APPName");// required execute user and creator
+        labels.put(LabelKeyConstant.USER_CREATOR_TYPE_KEY, user + "-APPName");// required execute user and creator eg:hadoop-IDE
         labels.put(LabelKeyConstant.CODE_TYPE_KEY, "py"); // required codeType
         // set start up map :engineConn start params
         Map<String, Object> startupMap = new HashMap<String, Object>(16);
@@ -137,38 +135,6 @@ public class LinkisClientTest {
         // 3. to execute
         return client.submit(jobSubmitAction);
     }
-
-    /**
-     * Compatible with 0.X execution mode
-     */
-    private static JobExecuteResult toExecute(String user, String code) {
-        // 1. build  params
-        // set label map :EngineTypeLabel/UserCreatorLabel/EngineRunTypeLabel/Tenant
-        Map<String, Object> labels = new HashMap<String, Object>();
-        // labels.put(LabelKeyConstant.TENANT_KEY, "fate");
-        // set start up map :engineConn start params
-        Map<String, Object> startupMap = new HashMap<String, Object>(16);
-        // Support setting engine native parameters,For example: parameters of engines such as spark/hive
-        startupMap.put("spark.executor.instances", 2);
-        // setting linkis params
-        startupMap.put("wds.linkis.rm.yarnqueue", "dws");
-
-        // 2. build JobExecuteAction (0.X old way of using)
-        JobExecuteAction executionAction = JobExecuteAction.builder()
-                .setCreator("APPName")  //creator, the system name of the client requesting linkis, used for system-level isolation
-                .addExecuteCode(code)   //Execution Code
-                .setEngineTypeStr("spark") // engineConn type
-                .setRunTypeStr("py") // code type
-                .setUser(user)   //execute user
-                .setStartupParams(startupMap) // start up params
-                .build();
-        executionAction.addRequestPayload(TaskConstant.LABELS, labels);
-        String body = executionAction.getRequestPayload();
-        System.out.println(body);
-
-        // 3. to execute
-        return client.execute(executionAction);
-    }
 }
 ```
 
@@ -180,34 +146,32 @@ Create the Scala test class LinkisClientTest. Refer to the comments to understan
 ```scala
 package org.apache.linkis.client.test
 
-import java.util
-import java.util.concurrent.TimeUnit
-
+import org.apache.commons.io.IOUtils
+import org.apache.commons.lang3.StringUtils
 import org.apache.linkis.common.utils.Utils
 import org.apache.linkis.httpclient.dws.authentication.StaticAuthenticationStrategy
 import org.apache.linkis.httpclient.dws.config.DWSClientConfigBuilder
 import org.apache.linkis.manager.label.constant.LabelKeyConstant
-import org.apache.linkis.protocol.constants.TaskConstant
-import org.apache.linkis.ujes.client.UJESClient
 import org.apache.linkis.ujes.client.request._
 import org.apache.linkis.ujes.client.response._
-import org.apache.commons.io.IOUtils
-import org.apache.commons.lang.StringUtils
+
+import java.util
+import java.util.concurrent.TimeUnit
 
 object LinkisClientTest {
-// 1. build config: linkis gateway url
+  // 1. build config: linkis gateway url
   val clientConfig = DWSClientConfigBuilder.newBuilder()
-    .addServerUrl("http://127.0.0.1:9001/")   //set linkis-mg-gateway url: http://{ip}:{port}
-    .connectionTimeout(30000)   //connectionTimeOut
+    .addServerUrl("http://127.0.0.1:9001/") //set linkis-mg-gateway url: http://{ip}:{port}
+    .connectionTimeout(30000) //connectionTimeOut
     .discoveryEnabled(false) //disable discovery
-    .discoveryFrequency(1, TimeUnit.MINUTES)  // discovery frequency
-    .loadbalancerEnabled(true)  // enable loadbalance
-    .maxConnectionSize(5)   // set max Connection
+    .discoveryFrequency(1, TimeUnit.MINUTES) // discovery frequency
+    .loadbalancerEnabled(true) // enable loadbalance
+    .maxConnectionSize(5) // set max Connection
     .retryEnabled(false) // set retry
-    .readTimeout(30000)  //set read timeout
-    .setAuthenticationStrategy(new StaticAuthenticationStrategy())   //AuthenticationStrategy Linkis authen suppory static and Token
-    .setAuthTokenKey("hadoop")  // set submit user
-    .setAuthTokenValue("hadoop")  // set passwd or token (setAuthTokenValue("BML-AUTH"))
+    .readTimeout(30000) //set read timeout
+    .setAuthenticationStrategy(new StaticAuthenticationStrategy()) //AuthenticationStrategy Linkis authen suppory static and Token
+    .setAuthTokenKey("hadoop") // set submit user
+    .setAuthTokenValue("hadoop") // set passwd or token (setAuthTokenValue("BML-AUTH"))
     .setDWSVersion("v1") //linkis rest version v1
     .build();
 
@@ -215,25 +179,25 @@ object LinkisClientTest {
   val client = UJESClient(clientConfig)
 
   def main(args: Array[String]): Unit = {
-    val user = "hadoop" // execute user
+    val user = "hadoop" // execute user 用户需要和AuthTokenKey的值保持一致
     val executeCode = "df=spark.sql(\"show tables\")\n" +
       "show(df)"; // code support:sql/hql/py/scala
     try {
       // 3. build job and execute
       println("user : " + user + ", code : [" + executeCode + "]")
+      // submit is recommended, as it supports passing task labels
       val jobExecuteResult = toSubmit(user, executeCode)
-      //0.X: val jobExecuteResult = toExecute(user, executeCode)
       println("execId: " + jobExecuteResult.getExecID + ", taskId: " + jobExecuteResult.taskID)
       // 4. get job jonfo
       var jobInfoResult = client.getJobInfo(jobExecuteResult)
       var logFromLen = 0
       val logSize = 100
-      val sleepTimeMills : Int = 1000
+      val sleepTimeMills: Int = 1000
       while (!jobInfoResult.isCompleted) {
         // 5. get progress and log
         val progress = client.progress(jobExecuteResult)
-       println("progress: " + progress.getProgress)
-        val logObj = client .log(jobExecuteResult, logFromLen, logSize)
+        println("progress: " + progress.getProgress)
+        val logObj = client.log(jobExecuteResult, logFromLen, logSize)
         logFromLen = logObj.fromLine
         val logArray = logObj.getLog
         // 0: info 1: warn 2: error 3: all
@@ -256,27 +220,25 @@ object LinkisClientTest {
       resultSetList.foreach(println)
       val oneResultSet = jobInfo.getResultSetList(client).head
       // 7. get resultContent
-      val fileContents = client.resultSet(ResultSetAction.builder().setPath(oneResultSet).setUser(jobExecuteResult.getUser).build()).getFileContent
-      println("First fileContents: ")
-      println(fileContents)
+      val resultSetResult: ResultSetResult = client.resultSet(ResultSetAction.builder.setPath(oneResultSet).setUser(jobExecuteResult.getUser).build)
+      println("metadata: " + resultSetResult.getMetadata) // column name type
+      println("res: " + resultSetResult.getFileContent) //row data
     } catch {
       case e: Exception => {
-        e.printStackTrace()
+        e.printStackTrace() //please use log
       }
     }
     IOUtils.closeQuietly(client)
   }
 
-  /**
-   * Linkis 1.0 recommends the use of Submit method
-   */
+
   def toSubmit(user: String, code: String): JobExecuteResult = {
     // 1. build  params
     // set label map :EngineTypeLabel/UserCreatorLabel/EngineRunTypeLabel/Tenant
     val labels: util.Map[String, Any] = new util.HashMap[String, Any]
     labels.put(LabelKeyConstant.ENGINE_TYPE_KEY, "spark-2.4.3"); // required engineType Label
-    labels.put(LabelKeyConstant.USER_CREATOR_TYPE_KEY, user + "-APPName");// required execute user and creator
-    labels.put(LabelKeyConstant.CODE_TYPE_KEY, "py"); // required codeType
+    labels.put(LabelKeyConstant.USER_CREATOR_TYPE_KEY, user + "-APPName"); // required execute user and application name; neither can be omitted, and APPName must not contain "-" (replace it with "_")
+    labels.put(LabelKeyConstant.CODE_TYPE_KEY, "py"); // specify the code type
 
     val startupMap = new java.util.HashMap[String, Any]()
     // Support setting engine native parameters,For example: parameters of engines such as spark/hive
@@ -294,36 +256,5 @@ object LinkisClientTest {
     // 3. to execute
     client.submit(jobSubmitAction)
   }
-
-
-  /**
-   * Compatible with 0.X execution mode
-   */
-  def toExecute(user: String, code: String): JobExecuteResult = {
-    // 1. build  params
-    // set label map :EngineTypeLabel/UserCreatorLabel/EngineRunTypeLabel/Tenant
-    val labels = new util.HashMap[String, Any]
-    // labels.put(LabelKeyConstant.TENANT_KEY, "fate");
-
-    val startupMap = new java.util.HashMap[String, Any]()
-    // Support setting engine native parameters,For example: parameters of engines such as spark/hive
-    startupMap.put("spark.executor.instances", 2)
-    // setting linkis params
-    startupMap.put("wds.linkis.rm.yarnqueue", "default")
-    // 2. build JobExecuteAction (0.X old way of using)
-    val  executionAction = JobExecuteAction.builder()
-      .setCreator("APPName")  //creator, the system name of the client requesting linkis, used for system-level isolation
-      .addExecuteCode(code)   //Execution Code
-      .setEngineTypeStr("spark") // engineConn type
-      .setRunTypeStr("py") // code type
-      .setUser(user)   //execute user
-      .setStartupParams(startupMap) // start up params
-      .build();
-    executionAction.addRequestPayload(TaskConstant.LABELS, labels);
-    // 3. to execute
-    client.execute(executionAction)
-  }
-
-
 }
 ```
\ No newline at end of file
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/api/http/linkis-cg-linkismanager-api/ec-resource-management-api.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/api/http/linkis-cg-linkismanager-api/ec-resource-management-api.md
index 56538f15da..c10e93d3f0 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/api/http/linkis-cg-linkismanager-api/ec-resource-management-api.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/api/http/linkis-cg-linkismanager-api/ec-resource-management-api.md
@@ -126,3 +126,166 @@ sidebar_position: 4
 }
 ```
 
+## 搜索历史EC信息
+
+
+**接口地址**:`/api/rest_j/v1/linkisManager/ecinfo/ecrHistoryList`
+
+
+**请求方式**:`GET`
+
+
+**请求数据类型**:`application/x-www-form-urlencoded`
+
+
+**响应数据类型**:`application/json`
+
+
+**接口描述**:<p>获取EC信息</p>
+
+
+
+**请求参数**:
+
+
+| 参数名称 | 参数说明 | 请求类型    | 是否必须 | 数据类型 | schema |
+| -------- | -------- | ----- | -------- | -------- | ------ |
+|instance|instance|query|false|string|
+|creator|creator|query|false|string|
+|startDate|startDate|query|false|string|
+|endDate|endDate|query|false|string|
+|engineType|engineType|query|false|string|
+|pageNow|pageNow|query|false|Int|
+|pageSize|pageSize|query|false|Int|
+
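+**请求示例**(示意):
+
+下面给出一个使用 JDK 11 `java.net.http.HttpClient` 调用该接口的示意代码。其中网关地址、查询参数取值以及 Cookie 认证信息均为假设值,实际认证方式请以部署环境和登录方式为准。
+
+```java
+import java.net.URI;
+import java.net.http.HttpClient;
+import java.net.http.HttpRequest;
+import java.net.http.HttpResponse;
+
+public class EcrHistoryListExample {
+    public static void main(String[] args) throws Exception {
+        // 查询参数均为可选:instance、creator、startDate、endDate、engineType、pageNow、pageSize
+        String url = "http://127.0.0.1:9001/api/rest_j/v1/linkisManager/ecinfo/ecrHistoryList"
+                + "?creator=hadoop-IDE&engineType=spark&pageNow=1&pageSize=10";
+
+        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
+                // 该接口需要登录认证,此处的 Cookie 仅为占位示例
+                .header("Cookie", "<登录后获得的会话Cookie>")
+                .GET()
+                .build();
+
+        HttpResponse<String> response = HttpClient.newHttpClient()
+                .send(request, HttpResponse.BodyHandlers.ofString());
+        // 返回体中的 data.engineList 即历史EC信息,字段含义见下方响应参数和响应示例
+        System.out.println(response.body());
+    }
+}
+```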
+
+**响应状态**:
+
+
+| 状态码 | 说明 | schema |
+| -------- | -------- | ----- | 
+|200|OK|Message|
+|401|Unauthorized|
+|403|Forbidden|
+|404|Not Found|
+
+
+**响应参数**:
+
+
+| 参数名称 | 参数说明 | 类型 | schema |
+| -------- | -------- | ----- |----- | 
+|data|数据集|object|
+|message|描述|string|
+|method|请求url|string|
+|status|状态|integer(int32)|integer(int32)|
+
+
+**响应示例**:
+```javascript
+{
+        "message": "",
+        "status": 0,
+        "data": {
+        "engineList": [
+            {
+                "id": -94209540.07806732,
+                "requestResource": "consectetur dolor eiusmod ipsum",
+                "releasedResource": "est in id Ut",
+                "usedTimes": -51038117.02855969,
+                "ticketId": "adipisicing in nostrud do",
+                "ecmInstance": "id magna Lorem eiusmod",
+                "engineType": "dolor",
+                "usedTime": -38764910.74278392,
+                "logDirSuffix": "sunt eiusmod aute et",
+                "releaseTime": -33417043.232267484,
+                "usedResource": "in amet veniam velit",
+                "requestTimes": -15643696.319572791,
+                "labelValue": "veniam incididunt magna",
+                "releaseTimes": 96384811.3484546,
+                "createTime": 38434279.49900183,
+                "serviceInstance": "consequat aliqua in enim",
+                "createUser": "Lorem Ut occaecat amet"
+            },
+            {
+                "labelValue": "adipisicing deserunt do",
+                "usedTimes": 49828407.223826766,
+                "usedResource": "mollit laboris cupidatat enim",
+                "releaseTimes": -73400915.22672182,
+                "releasedResource": "est qui id ipsum mollit",
+                "requestResource": "deserunt reprehenderit ut",
+                "serviceInstance": "do laborum",
+                "requestTimes": -33074164.700212136,
+                "ecmInstance": "dolore",
+                "logDirSuffix": "ea incididunt",
+                "createUser": "Ut exercitation officia dolore ipsum",
+                "usedTime": 25412456.522457644,
+                "createTime": -93227549.70578489,
+                "id": -84032815.0589972,
+                "ticketId": "eu in mollit do",
+                "engineType": "non Ut eu",
+                "releaseTime": 34923394.9602966
+            },
+            {
+                "releaseTime": -91057731.93164417,
+                "usedTime": 99226623.97199354,
+                "id": 59680041.603043556,
+                "requestResource": "officia Ut enim nulla",
+                "usedTimes": -14680356.219609797,
+                "logDirSuffix": "proident amet reprehenderit tempor",
+                "ticketId": "minim esse",
+                "releaseTimes": 37270921.94107443,
+                "serviceInstance": "enim adipisicing cupidatat",
+                "createUser": "culpa",
+                "requestTimes": -33504917.797325186,
+                "releasedResource": "et dolore quis",
+                "ecmInstance": "elit dolor adipisicing id",
+                "createTime": -38827280.78902944,
+                "engineType": "ullamco in eiusmod reprehenderit aute",
+                "labelValue": "dolore qui labore nulla laboris",
+                "usedResource": "irure sint nostrud Excepteur sunt"
+            },
+            {
+                "requestResource": "deserunt incididunt enim",
+                "releaseTimes": -16708903.732444778,
+                "id": 80588551.12419662,
+                "ecmInstance": "et veniam",
+                "releaseTime": -50240956.63233949,
+                "usedTimes": -5348294.728038415,
+                "labelValue": "incididunt tempor reprehenderit quis eu",
+                "createUser": "in in",
+                "serviceInstance": "minim sit",
+                "ticketId": "in dolore",
+                "usedTime": -30138563.761232898,
+                "logDirSuffix": "quis laborum ea",
+                "createTime": 65920455.93896958,
+                "requestTimes": 38810152.0160971,
+                "engineType": "est in Ut proident",
+                "usedResource": "nulla laboris Ut",
+                "releasedResource": "cupidatat irure"
+            },
+            {
+                "usedResource": "Lorem adipisicing dolor",
+                "createTime": -11906770.11266476,
+                "labelValue": "et id magna",
+                "releaseTimes": 32546901.20497243,
+                "id": -90442428.4679744,
+                "logDirSuffix": "aute ut eu commodo",
+                "ticketId": "cillum sint non deserunt",
+                "requestResource": "non velit sunt consequat culpa",
+                "requestTimes": -75282512.0022062,
+                "usedTime": 6378131.554130778,
+                "releasedResource": "Duis in",
+                "serviceInstance": "dolore ut officia",
+                "usedTimes": 70810503.51038182,
+                "createUser": "voluptate sed",
+                "ecmInstance": "laboris do sit dolore ipsum",
+                "engineType": "id",
+                "releaseTime": 37544575.30154848
+            }
+        ]
+    }
+    }
+```
+
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/architecture/computation-governance-services/job-submission-preparation-and-execution-process.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/architecture/computation-governance-services/job-submission-preparation-and-execution-process.md
index 202930a803..6abc98fcbf 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/architecture/computation-governance-services/job-submission-preparation-and-execution-process.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/architecture/computation-governance-services/job-submission-preparation-and-execution-process.md
@@ -3,170 +3,179 @@ title: Linkis任务执行流程
 sidebar_position: 1
 ---
 
-计算任务(Job)的提交执行是Linkis提供的核心能力,它几乎串通了Linkis计算治理架构中的所有模块,在Linkis之中占据核心地位。
-
-我们将用户的计算任务从客户端提交开始,到最后的返回结果为止,整个流程分为三个阶段:提交 -> 准备 -> 执行,如下图所示:
-
-![计算任务整体流程图](/Images-zh/Architecture/Job提交准备执行流程/计算任务整体流程图.png)
-
-其中:
-
-- Entrance作为提交阶段的入口,提供任务的接收、调度和Job信息的转发能力,是所有计算型任务的统一入口,它将把计算任务转发给Orchestrator进行编排和执行;
-
-- Orchestrator作为准备阶段的入口,主要提供了Job的解析、编排和执行能力。。
-
-- Linkis Manager:是计算治理能力的管理中枢,主要的职责为:
-  
-  1. ResourceManager:不仅具备对Yarn和Linkis EngineConnManager的资源管理能力,还将提供基于标签的多级资源分配和回收能力,让ResourceManager具备跨集群、跨计算资源类型的全资源管理能力;
-  
-  2. AppManager:统筹管理所有的EngineConnManager和EngineConn,包括EngineConn的申请、复用、创建、切换、销毁等生命周期全交予AppManager进行管理;
-  
-  3. LabelManager:将基于多级组合标签,为跨IDC、跨集群的EngineConn和EngineConnManager路由和管控能力提供标签支持;
-  
-  4. EngineConnPluginServer:对外提供启动一个EngineConn的所需资源生成能力和EngineConn的启动命令生成能力。
-
-- EngineConnManager:是EngineConn的管理器,提供引擎的生命周期管理,同时向RM汇报负载信息和自身的健康状况。
-
-- EngineConn:是Linkis与底层计算存储引擎的实际连接器,用户所有的计算存储任务最终都会交由EngineConn提交给底层计算存储引擎。根据用户的不同使用场景,EngineConn提供了交互式计算、流式计算、离线计算、数据存储任务的全栈计算能力框架支持。
-
-接下来,我们将详细介绍计算任务从 提交 -> 准备 -> 执行 的三个阶段。
-
-## 一、提交阶段
-
-提交阶段主要是Client端 -> Linkis Gateway -> Entrance的交互,其流程如下:
-
-![提交阶段流程图](/Images-zh/Architecture/Job提交准备执行流程/提交阶段流程图.png)
-
-1. 首先,Client(如前端或客户端)发起Job请求,Job请求信息精简如下
-(关于Linkis的具体使用方式,请参考 [如何使用Linkis](../../user-guide/how-to-use.md)):
-
-```
-POST /api/rest_j/v1/entrance/submit
-```
-
-```json
-{
-    "executionContent": {"code": "show tables", "runType": "sql"},
-    "params": {"variable": {}, "configuration": {}},  //非必须
-    "source": {"scriptPath": "file:///1.hql"}, //非必须,仅用于记录代码来源
-    "labels": {
-        "engineType": "spark-2.4.3",  //指定引擎
-        "userCreator": "johnnwnag-IDE"  // 指定提交用户和提交系统
+>  Linkis任务执行是Linkis的核心功能,调用到Linkis的计算治理服务、公共增强服务,微服务治理的三层服务,现在已经支持了OLAP、OLTP、Streaming等引擎类型的任务执行,本文将对OLAP类型引擎的任务提交、准备、执行、结果返回等流程进行介绍。
+
+## 关键名词:
+- LinkisMaster:Linkis计算治理服务架构中的管理服务,主要包含了AppManager、ResourceManager、LabelManager等几个管控服务,原名LinkisManager服务
+- Entrance:计算治理服务架构中的入口服务,完成任务的调度、状态管控、任务信息推送等功能
+- Orchestrator:Linkis的编排服务,提供强大的编排和计算策略能力,满足多活、主备、事务、重放、限流、异构和混算等多种应用场景的需求。现阶段Orchestrator被Entrance服务所依赖
+- EngineConn(EC):引擎连接器,负责接收任务并提交给底层引擎如Spark、Hive、Flink、Presto、Trino等进行执行
+- EngineConnManager(ECM):Linkis的EC进程管理服务,负责管控EngineConn的生命周期(启动、停止)
+- LinkisEnginePluginServer:该服务负责管理各个引擎的启动物料和配置,另外提供每个EngineConn的启动命令获取,以及每个EngineConn所需要的资源
+- PublicEnhancementService(PES):公共增强服务,为其他微服务模块提供统一配置管理、上下文服务、物料库、数据源管理、微服务管理和历史任务查询等功能
+
+## 一、Linkis交互式任务执行架构
+### 1.1、任务执行思考
+&nbsp;&nbsp;&nbsp;&nbsp;在形成现有的Linkis1.0任务执行架构之前,也经历了多次演变:从最开始用户一多就因各种FullGC导致服务崩溃,到思考用户开发的脚本如何支持多平台、多租户、强管控、高并发运行,我们遇到了如下几个问题:
+1. 如何支持租户的上万并发并互相隔离?
+2. 如何支持上下文统一,让用户定义的UDF、自定义变量等支持多个系统使用?
+3. 如何支持高可用,做到用户提交的任务能够正常运行完?
+4. 如何支持任务的底层引擎日志、进度、状态能够实时推送给前端?
+5. 如何支持SQL、Python、Shell、Scala、Java等多种类型的任务提交?
+
+### 1.2、Linkis任务执行设计
+&nbsp;&nbsp;&nbsp;&nbsp;基于以上5个问题,Linkis将OLAP任务的执行分成了四个阶段,分别是:
+1. 提交阶段:应用将任务提交到Linkis的CG-Entrance服务,完成任务的持久化(PS-JobHistory)以及各种拦截器处理(危险语法、变量替换、参数检查)等步骤,并做生产者消费者模式的并发控制;
+2. 准备阶段:任务在Entrance中被Scheduler消费,调度给Orchestrator模块进行任务的编排,并向LinkisMaster完成EngineConn的申请,在这个过程中会对租户的资源进行管控;
+3. 执行阶段:任务从Orchestrator提交给EngineConn执行,EngineConn具体提交底层引擎进行执行,并实时将任务的信息推送给调用方;
+4. 结果返回阶段:向调用方返回结果,支持json和io流两种方式返回结果集
+
+Linkis的整体任务执行架构如下图所示:
+
+![arc](/Images/Architecture/Job_submission_preparation_and_execution_process/linkis_job_arc.png)
+
+## 二、任务执行流程介绍
+&nbsp;&nbsp;&nbsp;&nbsp;首先,我们对OLAP型任务的处理流程做一个简要介绍,任务整体的执行流程如下图所示:
+![flow](/Images/Architecture/Job_submission_preparation_and_execution_process/linkis_job_flow.png)
+
+&nbsp;&nbsp;&nbsp;&nbsp;整个任务的执行涉及到计算治理的所有服务。任务通过Gateway转发到Linkis的入口服务Entrance后,会基于任务的标签进行多级调度(生产者消费者模式),以FIFO的方式完成任务的调度执行;接着Entrance将任务提交给Orchestrator进行任务编排和提交,Orchestrator会向LinkisMaster完成EC的申请,在这个过程中会通过任务的Label进行资源管控和引擎版本选择,申请不同的EC;然后Orchestrator将编排后的任务提交给EC进行执行,EC会将Job的日志、进度、资源使用等信息推送给Entrance服务,并推送给调用方。下面我们基于上图,结合任务的四个阶段(提交、准备、执行、返回)对任务的执行流程进行简要介绍。
+
+
+### 2.1 Job提交阶段
+&nbsp;&nbsp;&nbsp;&nbsp;在Job提交阶段,Linkis支持多种类型的任务:SQL、Python、Shell、Scala、Java等,并支持Restful/JDBC/Python/Shell等多种提交接口。提交的任务主要包含任务代码、标签、参数等信息,下面是一个Restful的示例:
+通过Restful接口发起一个Spark SQL任务:
+```JSON
+{
+  "method": "/api/rest_j/v1/entrance/submit",
+  "data": {
+  "executionContent": {
+    "code": "select * from table01",
+    "runType": "sql"
+  },
+  "params": {
+    "variable": {// task variable 
+      "testvar": "hello"
+    },
+    "configuration": {
+      "runtime": {// task runtime params 
+        "jdbc.url": "XX"
+      },
+      "startup": { // ec start up params 
+        "spark.executor.cores": "4"
+      }
     }
+  },
+  "source": { //task source information
+    "scriptPath": "file:///tmp/hadoop/test.sql"
+  },
+  "labels": {
+    "engineType": "spark-2.4.3",
+    "userCreator": "hadoop-IDE"
+  }
  }
+}
 ```
+1. 任务首先会提交给Linkis的网关linkis-mg-gateway服务,Gateway会根据任务中是否带有routeLabel来转发给对应的Entrance服务,如果没有routeLabel则随机转发给一个Entrance服务
+2. Entrance接收到对应的Job后,会调用PES中JobHistory模块的RPC对Job的信息进行持久化,并对参数和代码进行解析、完成自定义变量的替换,然后提交给调度器(默认FIFO调度);调度器会通过任务的标签进行分组,标签不同的任务互相不影响调度。
+3. 任务在Entrance中被FIFO调度器消费后会提交给Orchestrator进行编排执行,至此完成任务的提交阶段
+   主要涉及的类简单说明:
+```
+EntranceRestfulApi: 入口服务的Controller类,任务提交、状态、日志、结果、job信息、任务kill等操作
+EntranceServer:任务的提交入口,完成任务的持久化、任务拦截解析(EntranceInterceptors)、提交给调度器
+EntranceContext:Entrance的上下文持有类,包含获取调度器、任务解析拦截器、logManager、持久化、listenBus等方法
+FIFOScheduler: FIFO调度器,用于调度任务
+EntranceExecutor:调度的执行器,任务调度后会提交给EntranceExecutor进行执行
+EntranceJob:调度器调度的job任务,通过EntranceParser解析用户提交的JobRequest生成,和JobRequest一一对应
+```
+此时任务状态为排队状态
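+
+作为补充,下面给出一个用 JDK 11 `java.net.http.HttpClient` 以 Restful 方式提交上文示例任务的示意代码(也可以使用 Linkis SDK 或 linkis-cli 提交)。其中网关地址与 Cookie 认证信息为假设值,请以实际部署为准:
+
+```java
+import java.net.URI;
+import java.net.http.HttpClient;
+import java.net.http.HttpRequest;
+import java.net.http.HttpResponse;
+
+public class EntranceSubmitExample {
+    public static void main(String[] args) throws Exception {
+        // 请求体与上文的 Restful 示例一致
+        String body = "{\"executionContent\": {\"code\": \"select * from table01\", \"runType\": \"sql\"},"
+                + "\"params\": {\"variable\": {}, \"configuration\": {}},"
+                + "\"source\": {\"scriptPath\": \"file:///tmp/hadoop/test.sql\"},"
+                + "\"labels\": {\"engineType\": \"spark-2.4.3\", \"userCreator\": \"hadoop-IDE\"}}";
+
+        HttpRequest request = HttpRequest.newBuilder(
+                        URI.create("http://127.0.0.1:9001/api/rest_j/v1/entrance/submit"))
+                .header("Content-Type", "application/json")
+                // 接口需要登录认证,此处 Cookie 仅为占位示例
+                .header("Cookie", "<登录后获得的会话Cookie>")
+                .POST(HttpRequest.BodyPublishers.ofString(body))
+                .build();
+
+        HttpResponse<String> response = HttpClient.newHttpClient()
+                .send(request, HttpResponse.BodyHandlers.ofString());
+        // 正常情况下返回 taskID 与 execID,任务随后进入排队调度
+        System.out.println(response.body());
+    }
+}
+```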
 
-2. Linkis-Gateway接收到请求后,根据URI ``/api/rest_j/v1/${serviceName}/.+``中的serviceName,确认路由转发的微服务名,这里Linkis-Gateway会解析出微服务名为entrance,将Job请求转发给Entrance微服务。需要说明的是:如果用户指定了路由标签,则在转发时,会根据路由标签选择打了相应标签的Entrance微服务实例进行转发,而不是随机转发。
-
-3. Entrance接收到Job请求后,会先简单校验请求的合法性,然后通过RPC调用JobHistory对Job的信息进行持久化,然后将Job请求封装为一个计算任务,放入到调度队列之中,等待被消费线程消费。
-
-4. 调度队列会为每个组开辟一个消费队列 和 一个消费线程,消费队列用于存放已经初步封装的用户计算任务,消费线程则按照FIFO的方式,不断从消费队列中取出计算任务进行消费。目前默认的分组方式为 Creator + User(即提交系统 + 用户),因此,即便是同一个用户,只要是不同的系统提交的计算任务,其实际的消费队列和消费线程都完全不同,完全隔离互不影响。(温馨提示:用户可以按需修改分组算法)
-
-5. 消费线程取出计算任务后,会将计算任务提交给Orchestrator,由此正式进入准备阶段。
-
-## 二、 准备阶段
-
-准备阶段主要有两个流程,一是向LinkisManager申请一个可用的EngineConn,用于接下来的计算任务提交执行,二是Orchestrator对Entrance提交过来的计算任务进行编排,将一个用户计算请求,通过编排转换成一个物理执行树,然后交给第三阶段的执行阶段去真正提交执行。
-
-#### 2.1 向LinkisManager申请可用EngineConn
-
-如果在LinkisManager中,该用户存在可复用的EngineConn,则直接锁定该EngineConn,并返回给Orchestrator,整个申请流程结束。
-
-如何定义可复用EngineConn?指能匹配计算任务的所有标签要求的,且EngineConn本身健康状态为Healthy(负载低且实际EngineConn状态为Idle)的,然后再按规则对所有满足条件的EngineConn进行排序选择,最终锁定一个最佳的EngineConn。
-
-如果该用户不存在可复用的EngineConn,则此时会触发EngineConn新增流程,关于EngineConn新增流程,请参阅:[EngineConn新增流程](engine/add-an-engine-conn.md) 。
-
-#### 2.2 计算任务编排
-
-Orchestrator主要负责将一个计算任务(JobReq),编排成一棵可以真正执行的物理执行树(PhysicalTree),并提供Physical树的执行能力。
-
-这里先重点介绍Orchestrator的计算任务编排能力,如下图:
-
-![编排流程图](/Images-zh/Architecture/Job提交准备执行流程/编排流程图.png)
-
-其主要流程如下:
-
-- Converter(转换):完成对用户提交的JobReq(任务请求)转换为Orchestrator的ASTJob,该步骤会对用户提交的计算任务进行参数检查和信息补充,如变量替换等;
-
-- Parser(解析):完成对ASTJob的解析,将ASTJob拆成由ASTJob和ASTStage组成的一棵AST树。
-
-- Validator(校验): 完成对ASTJob和ASTStage的检验和信息补充,如代码检查、必须的Label信息补充等。
-
-- Planner(计划):将一棵AST树转换为一棵Logical树。此时的Logical树已经由LogicalTask组成,包含了整个计算任务的所有执行逻辑。
-
-- Optimizer(优化阶段):将一棵Logical树转换为Physica树,并对Physical树进行优化。
-
-一棵Physical树,其中的很多节点都是计算策略逻辑,只有中间的ExecTask,才真正封装了将用户计算任务提交给EngineConn进行提交执行的执行逻辑。如下图所示:
-
-![Physical树](/Images-zh/Architecture/Job提交准备执行流程/Physical树.png)
-
-不同的计算策略,其Physical树中的JobExecTask 和 StageExecTask所封装的执行逻辑各不相同。
-
-如多活计算策略下,用户提交的一个计算任务,其提交给不同集群的EngineConn进行执行的执行逻辑封装在了两个ExecTask中,而相关的多活策略逻辑则体现在了两个ExecTask的父节点StageExecTask(End)之中。
-
-这里举多活计算策略下的多读场景。
-
-多读时,实际只要求一个ExecTask返回结果,该Physical树就可以标记为执行成功并返回结果了,但Physical树只具备按依赖关系进行依次执行的能力,无法终止某个节点的执行,且一旦某个节点被取消执行或执行失败,则整个Physical树其实会被标记为执行失败,这时就需要StageExecTask(End)来做一些特殊的处理,来保证既可以取消另一个ExecTask,又能把执行成功的ExecTask所产生的结果集继续往上传,让Physical树继续往上执行。这就是StageExecTask所代表的计算策略执行逻辑。
-
-Linkis Orchestrator的编排流程与很多SQL解析引擎(如Spark、Hive的SQL解析器)存在相似的地方,但实际上,Linkis Orchestrator是面向计算治理领域针对用户不同的计算治理需求,而实现的解析编排能力,而SQL解析引擎是面向SQL语言的解析编排。这里做一下简单区分:
-
-1. Linkis Orchestrator主要想解决的,是不同计算任务对计算策略所引发出的编排需求。如:用户想具备多活的能力,则Orchestrator会为用户提交的一个计算任务,基于“多活”的计算策略需求,编排出一棵Physical树,从而做到往多个集群去提交执行这个计算任务,并且在构建整个Physical树的过程中,已经充分考虑了各种可能存在的异常场景,并都已经体现在了Physical树中。
-
-2. Linkis Orchestrator的编排能力与编程语言无关,理论上只要是Linkis已经对接的引擎,其支持的所有编程语言都支持编排;而SQL解析引擎只关心SQL的解析和执行,只负责将一条SQL解析成一颗可执行的Physical树,最终计算出结果。
-
-3. Linkis Orchestrator也具备对SQL的解析能力,但SQL解析只是Orchestrator Parser针对SQL这种编程语言的其中一种解析实现。Linkis Orchestrator的Parser也考虑引入Apache Calcite对SQL进行解析,支持将一条跨多个计算引擎(必须是Linkis已经对接的计算引擎)的用户SQL,拆分成多条子SQL,在执行阶段时分别提交给对应的计算引擎进行执行,最后选择一个合适的计算引擎进行汇总计算。
-
-<!--
-#todo  Orchestrator文档还没准备好!!
-关于Orchestrator的编排详细介绍,请参考:[Orchestrator架构设计]()
--->
-
-经过了Linkis Orchestrator的解析编排后,用户的计算任务已经转换成了一颗可被执行的Physical树。Orchestrator会将该Physical树提交给Orchestrator的Execution模块,进入最后的执行阶段。
-
-## 三、执行阶段
-
-执行阶段主要分为如下两步,这两步是Linkis Orchestrator提供的最后两阶段的能力:
-
-![执行阶段流程图](/Images-zh/Architecture/Job提交准备执行流程/执行阶段流程图.png)
-
-其主要流程如下:
-
-- Execution(执行):解析Physical树的依赖关系,按照依赖从叶子节点开始依次执行。
-
-- Reheater(再热):一旦Physical树有节点执行完成,都会触发一次再热。再热允许依照Physical树的实时执行情况,动态调整Physical树,继续进行执行。如:检测到某个叶子节点执行失败,且该叶子节点支持重试(如失败原因是抛出了ReTryExecption),则自动调整Physical树,在该叶子节点上面添加一个内容完全相同的重试父节点。
-
-我们回到Execution阶段,这里重点介绍封装了将用户计算任务提交给EngineConn的ExecTask节点的执行逻辑。
-
-1. 前面有提到,准备阶段的第一步,就是向LinkisManager获取一个可用的EngineConn,ExecTask拿到这个EngineConn后,会通过RPC请求,将用户的计算任务提交给EngineConn。
-
-2. EngineConn接收到计算任务之后,会通过线程池异步提交给底层的计算存储引擎,然后马上返回一个执行ID。
-
-3. ExecTask拿到这个执行ID后,后续可以通过该执行ID异步去拉取计算任务的执行情况(如:状态、进度、日志、结果集等)。
+### 2.2 Job准备阶段
+&nbsp;&nbsp;&nbsp;&nbsp;Entrance的调度器会按照Job中的Label生成不同的消费器去消费任务,任务被消费、状态被修改为Running后,就进入了任务的准备阶段。准备阶段主要涉及以下几个服务:Entrance、LinkisMaster、EnginePluginServer、EngineConnManager、EngineConn,下面分别进行介绍。
+### 2.2.1 Entrance步骤:
+1. 消费器(FIFOUserConsumer)按照对应标签配置的并发数消费任务,并将任务调度给编排器(Orchestrator)执行
+2. 首先由Orchestrator对提交的任务进行编排:对于普通的Hive和Spark单引擎任务,主要是任务的解析、label检查和校验;对于多数据源混算的场景,则会拆分成不同的任务提交给不同的数据源执行等
+3. 在准备阶段,编排器Orchestrator另外一个重要的事情是通过请求LinkisMaster获取用于执行任务的EngineConn。如果LinkisMaster有对应的EngineConn可以复用则直接返回,如果没有则创建EngineConn。
+4. Orchestrator拿到EngineConn后将任务提交给EngineConn执行,准备阶段结束,进入Job执行阶段
+   主要涉及的类简单说明:
 
-4. 同时,EngineConn会通过注册的多个Listener,实时监听底层计算存储引擎的执行情况。如果该计算存储引擎不支持注册Listener,则EngineConn会为计算任务启动守护线程,定时向计算存储引擎拉取执行情况。
+```
+## Entrance
+FIFOUserConsumer: 调度器的消费器,会根据标签(如IDE-hadoop、spark-2.4.3)生成不同的消费器来消费提交的任务,并通过对应标签配置的并发数(wds.linkis.rm.instance)控制同时运行的任务个数
+DefaultEntranceExecutor:任务执行的入口,发起编排器的调用:callExecute
+JobReq: 编排器接受的任务对象,通过EntranceJob转换而来,主要包括代码、标签信息、参数等
+OrchestratorSession:类似于SparkSession,是编排器的入口。正常单例。
+Orchestration:JobReq被OrchestratorSession编排后的返回对象,支持执行和打印执行计划等
+OrchestrationFuture: Orchestration选择异步执行的返回,包括cancel、waitForCompleted、getResponse等常用方法
+Operation:用于扩展操作任务的接口,现在已经实现了用于获取日志的LogOperation、获取进度的ProgressOperation等
+
+## Orchestrator
+CodeLogicalUnitExecTask: 代码类型任务的执行入口,任务最终编排运行后会调用该类的execute方法,首先会向LinkisMaster请求EngineConn再提交执行
+DefaultCodeExecTaskExecutorManager:负责管控代码类型的EngineConn,包括请求和释放EngineConn
+ComputationEngineConnManager:负责与LinkisMaster进行对接,请求和释放EngineConn
+```
 
-5. EngineConn将拉取到的执行情况,通过RCP请求,实时传回Orchestrator所在的微服务。
+### 2.2.2 LinkisMaster步骤:
 
-6. 该微服务的Receiver接收到执行情况后,会通过ListenerBus进行广播,Orchestrator的Execution消费该事件并动态更新Physical树的执行情况。
+1. LinkisMaster接收到Entrance服务发出的EngineConn请求后进行处理
+2. 判断是否有对应Label可复用的EngineConn,有则直接返回
+3. 如果没有,则进入创建EngineConn流程:
+- 首先通过Label选择合适的EngineConnManager服务。
+- 接着通过调用EnginePluginServer获取本次请求EngineConn的资源类型和所需资源;
+- 通过资源类型和资源,判断对应的Label是否还有资源,如果有则进入创建,否则抛出重试异常
+- 请求第一步的EngineConnManager启动EngineConn
+- 等待EngineConn空闲,返回创建的EngineConn,否则判断异常是否可以重试
 
-7. 计算任务所产生的结果集,会在EngineConn端就写入到HDFS等存储介质之中。EngineConn通过RPC传回的只是结果集路径,Execution消费事件,并将获取到的结果集路径通过ListenerBus进行广播,使Entrance向Orchestrator注册的Listener能消费到该结果集路径,并将结果集路径写入持久化到JobHistory之中。
+4. 锁定创建好的EngineConn并返回给Entrance。注意这里为异步请求:Entrance发出EC请求后会先收到对应的请求ID,LinkisMaster处理完毕后再主动将结果推送给对应的Entrance服务
 
-8. EngineConn端的计算任务执行完成后,通过同样的逻辑,会触发Execution更新Physical树该ExecTask节点的状态,使得Physical树继续往上执行,直到整棵树全部执行完成。这时Execution会通过ListenerBus广播计算任务执行完成的状态。
+主要涉及的类简单说明:
+```
+## LinkisMaster
+EngineAskEngineService: LinkisMaster负责处理引擎请求的处理类,主要逻辑通过调用EngineReuseService判断是否有EngineConn可以复用,否则调用EngineCreateService创建EngineConn
+EngineCreateService:负责创建EngineConn,主要步骤见上文的创建EngineConn流程
 
-9. Entrance向Orchestrator注册的Listener消费到该状态事件后,向JobHistory更新Job的状态,整个任务执行完成。
 
-----
+## LinkisEnginePluginServer
+EngineConnLaunchService:提供给ECM获取对应引擎类型EngineConn的启动信息
+EngineConnResourceFactoryService:提供给LinkisMaster计算本次任务启动EngineConn所需要的资源
+EngineConnResourceService: 负责管理引擎物料,包括刷新单个物料和刷新所有物料
 
-最后,我们再来看下Client端是如何得知计算任务状态变化,并及时获取到计算结果的,具体如下图所示:
+## EngineConnManager
+AbstractEngineConnLaunchService:负责接收LinkisMaster发来的启动EngineConn的请求,并完成EngineConn进程的启动
+ECMHook: 用于处理EngineConn启动前后的前置后置操作,如hive UDF Jar加入到EngineConn启动的classPath中
+```
 
-![结果获取流程](/Images-zh/Architecture/Job提交准备执行流程/结果获取流程.png)
 
-具体流程如下:
+这里需要说明的是,如果用户已存在一个可用的空闲引擎,则会跳过上述1、2、3、4四个步骤。
 
-1. Client端定时轮询请求Entrance,获取计算任务的状态。
+### 2.3 Job执行阶段
+&nbsp;&nbsp;&nbsp;&nbsp;当Entrance服务中的编排器拿到EngineConn后就进入了执行阶段,CodeLogicalUnitExecTask会将任务提交给EngineConn进行执行,EngineConn会通过对应的CodeLanguageLabel创建不同的执行器进行执行。主要步骤如下:
+1. CodeLogicalUnitExecTask通过RPC提交任务给到EngineConn
+2. EngineConn判断是否有对应的CodeLanguageLabel的执行器,如果没有则创建
+3. 提交给Executor进行执行,Executor连接到具体的底层引擎完成执行,如Spark通过SparkSession提交SQL、PySpark、Scala任务
+4. 任务状态流转实时推送给Entrance服务
+5. 通过实现log4j的Appender(SendAppender),以RPC的方式将日志推送给Entrance服务
+6. 通过定时任务将任务进度和资源信息实时推送给Entrance
 
-2. 一旦发现状态翻转为成功,则向JobHistory发送获取Job信息的请求,拿到所有的结果集路径
+主要涉及的类简单说明:
+```
+ComputationTaskExecutionReceiver:Entrance服务端编排器用于接收EngineConn所有RPC请求的服务类,负责接收进度、日志、状态、结果集,再通过ListenerBus的模式推送给上层调用方
+TaskExecutionServiceImpl:EngineConn接收Entrance所有RPC请求的服务类,包括任务执行、状态查询、任务Kill等
+ComputationExecutor:具体的任务执行父类,比如Spark分为SQL/Python/Scala Executor
+ComputationExecutorHook: 用于Executor创建前后的Hook,比如初始化UDF、执行默认的UseDB等
+EngineConnSyncListener: ResultSetListener/TaskProgressListener/TaskStatusListener 用于监听Executor执行任务过程中的结果集、进度和状态等信息
+SendAppender: 负责推送EngineConn端的日志给到Entrance
+```
+### 2.4 Job结果推送阶段
+&nbsp;&nbsp;&nbsp;&nbsp;该阶段比较简单,主要用于将任务在EngineConn产生的结果集返回给Client,主要步骤如下:
+1. 首先,EngineConn在执行任务过程中会将结果集写入到文件系统并获取对应路径(也支持内存缓存,默认写文件)
+2. EngineConn将对应的结果集路径和结果集个数返回给Entrance
+3. Entrance调用JobHistory将结果集路径信息更新到任务表中
+4. Client通过任务信息获取到结果集路径并读取结果集
+   主要涉及的类简单说明:
+```
+EngineExecutionContext:负责创建结果集和推送结果集给到Entrance服务
+ResultSetWriter:负责将结果集写入到linkis-storage支持的文件系统中,现在已经支持本地和HDFS。支持的结果集类型有表格、文本、HTML、图片等
+JobHistory:存储有任务的所有信息包括状态、结果路径、指标信息等对应DB中的实体类
+ResultSetReader:读取结果集的关键类
+```
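+
+结合 user-guide/sdk-manual 中的 UJESClient 用法,下面给出客户端读取结果集的示意代码(假设 client 与 jobExecuteResult 已按照 sdk-manual 的方式构建和获取,import 的包路径请以实际 SDK 版本为准):
+
+```java
+import org.apache.linkis.ujes.client.UJESClient;
+import org.apache.linkis.ujes.client.request.ResultSetAction;
+import org.apache.linkis.ujes.client.response.JobExecuteResult;
+import org.apache.linkis.ujes.client.response.JobInfoResult;
+import org.apache.linkis.ujes.client.response.ResultSetResult;
+
+public class ResultSetReadExample {
+    // client 与 jobExecuteResult 的构建方式见 user-guide/sdk-manual
+    public static void printFirstResultSet(UJESClient client, JobExecuteResult jobExecuteResult) {
+        // 1. 任务完成后,通过 JobInfo 拿到结果集路径列表(一个任务可能产生多个结果集)
+        JobInfoResult jobInfo = client.getJobInfo(jobExecuteResult);
+        String firstResultPath = jobInfo.getResultSetList(client)[0];
+        // 2. 按路径读取结果集的元数据(列名、列类型)和行数据
+        ResultSetResult resultSetResult = client.resultSet(ResultSetAction.builder()
+                .setPath(firstResultPath)
+                .setUser(jobExecuteResult.getUser())
+                .build());
+        System.out.println("metadata: " + resultSetResult.getMetadata());
+        System.out.println("res: " + resultSetResult.getFileContent());
+    }
+}
+```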
 
-3. 通过结果集路径向PublicService发起查询文件内容的请求,获取到结果集的内容。
+## 三、总结
+&nbsp;&nbsp;&nbsp;&nbsp;上面我们主要介绍了Linkis计算治理服务组(CGS)中OLAP任务的整个执行流程,按照任务请求的处理过程将任务拆分成了提交、准备、执行、返回结果四个阶段。CGS主要就是遵循这4个阶段来设计实现的,服务于这4个阶段,并为每个阶段提供了强大和灵活的能力。在提交阶段,主要提供通用的接口,接收上层应用工具提交过来的任务,并提供基础的解析和拦截能力;在准备阶段,主要通过编排器Orchestrator和LinkisMaster完成对任务的解析编排,进行资源管控,并完成EngineConn的创建;在执行阶段,通过引擎连接器EngineConn来实际完成和底层引擎的对接,通常每个用户要连接不同的底层引擎,就得先启动一个对应的底层引擎连接器EC,计算任务通过EC提交给底层引擎做实际的执行,并获取状态、日志、结果等信息;在结果返回阶段,返回任务执行的结果信息,支持按照多种返回模式,如:文件流、JSON、JDBC等。整体的时序图如下:
 
-自此,整个Job的提交 -> 准备 -> 执行 三个阶段全部完成。
+![time](/Images/Architecture/Job_submission_preparation_and_execution_process/linkis_job_time.png)
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/architecture/computation-governance-services/linkis-manager/ec-history-arc.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/architecture/computation-governance-services/linkis-manager/ec-history-arc.md
new file mode 100644
index 0000000000..0a0eb7ba86
--- /dev/null
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/architecture/computation-governance-services/linkis-manager/ec-history-arc.md
@@ -0,0 +1,79 @@
+---
+title: EC历史引擎信息记录架构
+sidebar_position: 3
+---
+
+## 1. 总述
+### 需求背景
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;现在LinkisManager只记录了运行中的EngineConn的信息和资源使用情况,任务结束后这些信息就丢失了,导致对历史EC做统计和查看,或者查看已经结束的EC的日志都不太方便,因此对历史EC信息进行记录显得较为重要。
+### 目标
+1. 完成EC信息和资源信息持久化到DB的存储
+2. 支持通过restful完成历史EC信息的查看和搜索
+3. 支持查看已经结束EC的日志
+
+## 2. 总体设计
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;此次新增特性的主要修改点为LinkisManager下的RM和AM模块,并新增了一张信息记录表。下面将详细介绍。
+
+### 2.1 技术架构
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;因为此次需要记录EC的信息和资源信息,而资源信息分为请求资源、真实使用资源、释放资源三个概念,且都需要进行记录,所以此次基于EC在ResourceManager中的生命周期来实现:在EC完成以上三个阶段时,都加上EC信息的更新操作。整体如下图所示
+![arc](/Images/Architecture/LinkisManager/ecHistoryArc.png)
+
+### 2.2 业务架构
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;此次的特性主要是为了完成历史EC的信息记录,并支持已经结束的EC的日志查看。功能点涉及的模块如下:
+
+| 组件名| 一级模块 | 二级模块 | 功能点 |
+|---|---|---|---|
+| Linkis | LinkisManager | ResourceManager| 在EC请求资源、上报使用资源、释放资源时完成EC信息的记录|
+| Linkis | LinkisManager | AppManager| 提供list和搜索所有历史EC信息的接口|
+
+## 3. 模块设计
+### 核心执行流程
+[输入端] 输入端主要为创建引擎时请求资源、引擎启动后上报真实使用资源、引擎退出时释放资源这三个时机输入的信息,主要包括请求的label、资源、EC唯一的ticketId、资源类型。
+[处理流程] 信息记录service对输入的数据进行处理,通过标签解析出对应的引擎信息、用户、creator以及日志路径,通过资源类型确认是资源请求、使用还是释放,接着将这些信息存储到DB中。
+调用时序图如下:
+![Time](/Images/Architecture/LinkisManager/ecHistoryTime.png)
+
+
+## 4. 数据结构:
+```sql
+# EC信息资源记录表
+DROP TABLE IF EXISTS `linkis_cg_ec_resource_info_record`;
+CREATE TABLE `linkis_cg_ec_resource_info_record` (
+    `id` INT(20) NOT NULL AUTO_INCREMENT,
+    `label_value` VARCHAR(255) NOT NULL COMMENT 'ec labels stringValue',
+    `create_user` VARCHAR(128) NOT NULL COMMENT 'ec create user',
+    `service_instance` varchar(128) COLLATE utf8_bin DEFAULT NULL COMMENT 'ec instance info',
+    `ecm_instance` varchar(128) COLLATE utf8_bin DEFAULT NULL COMMENT 'ecm instance info ',
+    `ticket_id` VARCHAR(100) NOT NULL COMMENT 'ec ticket id',
+    `log_dir_suffix` varchar(128) COLLATE utf8_bin DEFAULT NULL COMMENT 'log path',
+    `request_times` INT(8) COMMENT 'resource request times',
+    `request_resource` VARCHAR(255) COMMENT 'request resource',
+    `used_times` INT(8) COMMENT 'resource used times',
+    `used_resource` VARCHAR(255) COMMENT 'used resource',
+    `release_times` INT(8) COMMENT 'resource released times',
+    `released_resource` VARCHAR(255)  COMMENT 'released resource',
+    `release_time` datetime DEFAULT NULL COMMENT 'released time',
+    `used_time` datetime DEFAULT NULL COMMENT 'used time',
+    `create_time` datetime DEFAULT CURRENT_TIMESTAMP COMMENT 'create time',
+    PRIMARY KEY (`id`),
+    KEY (`ticket_id`),
+    UNIQUE KEY `label_value_ticket_id` (`ticket_id`,`label_value`)
+) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin;
+```
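+
+为了更直观地说明三个阶段分别更新哪些字段,下面给出一个极简的示意代码(并非 Linkis 源码,类名与方法名均为示意),展示以 ticket_id + label_value 为唯一键的同一条记录在请求、使用、释放三个阶段分别更新的内容:
+
+```java
+import java.util.Date;
+
+// 示意代码:并非 Linkis 源码,仅用于说明 linkis_cg_ec_resource_info_record 表的三个更新时机
+public class EcResourceInfoRecordSketch {
+    String ticketId;         // EC 唯一的 ticket id
+    String labelValue;       // 标签 stringValue,与 ticketId 组成唯一键
+    String createUser;       // 创建用户
+    String logDirSuffix;     // 日志路径
+    String requestResource;  // 请求的资源
+    String usedResource;     // 真实使用的资源
+    String releasedResource; // 释放的资源
+    int requestTimes, usedTimes, releaseTimes;
+    Date usedTime, releaseTime;
+
+    // 阶段一:创建引擎、请求资源时记录请求信息
+    void onRequest(String resource) {
+        this.requestResource = resource;
+        this.requestTimes++;
+    }
+
+    // 阶段二:引擎启动后上报真实使用的资源
+    void onUse(String resource) {
+        this.usedResource = resource;
+        this.usedTimes++;
+        this.usedTime = new Date();
+    }
+
+    // 阶段三:引擎退出、释放资源时记录释放信息
+    void onRelease(String resource) {
+        this.releasedResource = resource;
+        this.releaseTimes++;
+        this.releaseTime = new Date();
+    }
+}
+```
+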
+## 5. 接口设计:
+引擎历史管理页面API接口,参考文档:管理台添加历史引擎页面
+https://linkis.apache.org/docs/latest/api/http/linkis-cg-linkismanager-api/ec-resource-management-api
+## 6. 非功能性设计:
+
+### 6.1 安全
+不涉及安全问题,restful需要登录认证
+
+### 6.2 性能
+对引擎生命周期性能影响较小
+
+### 6.3 容量
+需要定期进行清理
+
+### 6.4 高可用
+不涉及
+
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/architecture/computation-governance-services/linkis-manager/label-manager.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/architecture/computation-governance-services/linkis-manager/label-manager.md
index 5b87b48737..92b8392ab6 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/architecture/computation-governance-services/linkis-manager/label-manager.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/architecture/computation-governance-services/linkis-manager/label-manager.md
@@ -1,6 +1,6 @@
 ---
 title: LabelManager 架构
-sidebar_position: 2
+sidebar_position: 3
 ---
 
 #### 简述
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/architecture/computation-governance-services/linkis-manager/resource-manager.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/architecture/computation-governance-services/linkis-manager/resource-manager.md
index 5d38b7fe30..839e47bd8c 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/architecture/computation-governance-services/linkis-manager/resource-manager.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/architecture/computation-governance-services/linkis-manager/resource-manager.md
@@ -1,6 +1,6 @@
 ---
 title: ResourceManager 架构
-sidebar_position: 3
+sidebar_position: 2
 ---
 
 ResourceManager(简称RM),是Linkis的计算资源管理模块,所有的EngineConn(简称EC)、EngineConnManager(简称ECM),甚至包括Yarn在内的外部资源,都由RM负责统筹管理。RM能够基于用户、ECM或其它通过复杂标签定义的粒度对资源进行管控。
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/architecture/microservice-governance-services/service_isolation.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/architecture/microservice-governance-services/service_isolation.md
new file mode 100644
index 0000000000..e64026eaa8
--- /dev/null
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/architecture/microservice-governance-services/service_isolation.md
@@ -0,0 +1,197 @@
+---
+title: 微服务租户隔离架构设计
+sidebar_position: 2
+---
+
+## 1. 总述
+### 需求背景
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;现在Linkis在Gateway进行服务转发时是基于ribbon进行负载均衡的,但是有些情况下,一些重要业务的任务希望做到服务级别的隔离,如果只基于ribbon做负载均衡就会存在问题。比如租户A希望他的任务都路由到特定的Linkis-CG-Entrance服务,这样当其他实例出现异常时,就不会影响到A租户的Entrance服务。
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;另外,支持服务的租户级隔离也可以做到快速隔离某个异常服务,支持灰度升级等场景。
+
+### 目标
+1. 支持通过解析请求的标签按照路由标签对服务进行转发
+2. 支持服务的标签注册和修改
+
+## 2. 总体设计
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;此次新增特性的主要修改点为linkis-mg-gateway和instance-label两个模块,涉及新增Gateway的转发逻辑,以及instance-label支持服务和标签的注册。
+
+### 2.1 技术架构
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;整体技术架构的主要修改点为:Restful请求需要带上路由标签等标签参数信息,然后Gateway在进行转发时会解析对应的标签,完成接口的路由转发。整体如下图所示
+![arc](/Images/Architecture/Gateway/service_isolation_arc.png)
+
+几点说明:
+1. 如果存在多个对应的服务打上了同一个routeLabel则随机转发
+2. 如果对应的routeLabel没有对应的服务,则接口直接失败
+3. 如果接口没有routeLabel则基于原有的转发逻辑,不会路由到打上了特定标签的服务
+
+### 2.2 业务架构
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;此次的特性主要是为了完成Restful租户隔离转发功能。功能点涉及的模块如下:
+
+| 组件名| 一级模块 | 二级模块 | 功能点 |
+|---|---|---|---|
+| Linkis | MG | Gateway| 解析restful请求参数中的路由标签,完成接口按照路由标签的转发功能|
+| Linkis | PS | InstanceLabel| InstanceLabel服务,完成服务和标签的关联|
+
+## 3. 模块设计
+### 核心执行流程
+[输入端] 输入端为请求Gateway的restful请求,只有参数中带有route label的请求才会进行处理
+[处理流程] Gateway会判断请求是否带有对应的RouteLabel,如果存在则基于RouteLabel来进行转发。
+调用时序图如下:
+
+![Time](/Images/Architecture/Gateway/service_isolation_time.png)
+
+
+
+## 4. 数据结构:
+```sql
+DROP TABLE IF EXISTS `linkis_ps_instance_label`;
+CREATE TABLE `linkis_ps_instance_label` (
+  `id` int(20) NOT NULL AUTO_INCREMENT,
+  `label_key` varchar(32) COLLATE utf8_bin NOT NULL COMMENT 'string key',
+  `label_value` varchar(255) COLLATE utf8_bin NOT NULL COMMENT 'string value',
+  `label_feature` varchar(16) COLLATE utf8_bin NOT NULL COMMENT 'store the feature of label, but it may be redundant',
+  `label_value_size` int(20) NOT NULL COMMENT 'size of key -> value map',
+  `update_time` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'update unix timestamp',
+  `create_time` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'update unix timestamp',
+  PRIMARY KEY (`id`),
+  UNIQUE KEY `label_key_value` (`label_key`,`label_value`)
+) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin;
+
+DROP TABLE IF EXISTS `linkis_ps_instance_info`;
+CREATE TABLE `linkis_ps_instance_info` (
+  `id` int(11) NOT NULL AUTO_INCREMENT,
+  `instance` varchar(128) COLLATE utf8_bin DEFAULT NULL COMMENT 'structure like ${host|machine}:${port}',
+  `name` varchar(128) COLLATE utf8_bin DEFAULT NULL COMMENT 'equal application name in registry',
+  `update_time` datetime DEFAULT CURRENT_TIMESTAMP COMMENT 'update unix timestamp',
+  `create_time` datetime DEFAULT CURRENT_TIMESTAMP COMMENT 'create unix timestamp',
+  PRIMARY KEY (`id`),
+  UNIQUE KEY `instance` (`instance`)
+) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin;
+
+DROP TABLE IF EXISTS `linkis_ps_instance_label_relation`;
+CREATE TABLE `linkis_ps_instance_label_relation` (
+  `id` int(20) NOT NULL AUTO_INCREMENT,
+  `label_id` int(20) DEFAULT NULL COMMENT 'id reference linkis_ps_instance_label -> id',
+  `service_instance` varchar(128) NOT NULL COLLATE utf8_bin COMMENT 'structure like ${host|machine}:${port}',
+  `update_time` datetime DEFAULT CURRENT_TIMESTAMP COMMENT 'update unix timestamp',
+  `create_time` datetime DEFAULT CURRENT_TIMESTAMP COMMENT 'create unix timestamp',
+  PRIMARY KEY (`id`)
+) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin;
+
+```
+## 5. 如何使用:
+
+### add route label for entrance
+```
+echo "spring.eureka.instance.metadata-map.route=et1" >> $LINKIS_CONF_DIR/linkis-cg-entrance.properties 
+sh  $LINKIS_HOME/sbin/linkis-daemon.sh restart cg-entrance
+```
+![Time](/Images/Architecture/Gateway/service_isolation_time.png)
+
+### Use route label
+submit task:
+```
+url:/api/v1/entrance/submit
+{
+    "executionContent": {"code": "echo 1", "runType":  "shell"},
+    "params": {"variable": {}, "configuration": {}},
+    "source":  {"scriptPath": "ip"},
+    "labels": {
+        "engineType": "shell-1",
+        "userCreator": "peacewong-IDE",
+        "route": "et1"
+    }
+}
+```
+will be routed to a fixed service:
+```
+{
+    "method": "/api/entrance/submit",
+    "status": 0,
+    "message": "OK",
+    "data": {
+        "taskID": 45158,
+        "execID": "exec_id018030linkis-cg-entrancelocalhost:9205IDE_peacewong_shell_0"
+    }
+}
+```
+
+or linkis-cli:
+
+```
+sh bin/linkis-cli -submitUser  hadoop  -engineType shell-1 -codeType shell  -code "whoami" -labelMap route=et1 --gatewayUrl http://127.0.0.1:9101
+```
+
+### Use non-existing label
+submit task:
+```
+url:/api/v1/entrance/submit
+{
+    "executionContent": {"code": "echo 1", "runType":  "shell"},
+    "params": {"variable": {}, "configuration": {}},
+    "source":  {"scriptPath": "ip"},
+    "labels": {
+        "engineType": "shell-1",
+        "userCreator": "peacewong-IDE",
+        "route": "et1"
+    }
+}
+```
+will get the error
+```
+{
+    "method": "/api/rest_j/v1/entrance/submit",
+    "status": 1,
+    "message": "GatewayErrorException: errCode: 11011 ,desc: Cannot route to the corresponding service, URL: /api/rest_j/v1/entrance/submit RouteLabel: [{\"stringValue\":\"et2\",\"labelKey\":\"route\",\"feature\":null,\"modifiable\":true,\"featureKey\":\"feature\",\"empty\":false}] ,ip: localhost ,port: 9101 ,serviceKind: linkis-mg-gateway",
+    "data": {
+        "data": "{\r\n    \"executionContent\": {\"code\": \"echo 1\", \"runType\":  \"shell\"},\r\n    \"params\": {\"variable\": {}, \"configuration\": {}},\r\n    \"source\":  {\"scriptPath\": \"ip\"},\r\n    \"labels\": {\r\n        \"engineType\": \"shell-1\",\r\n        \"userCreator\": \"peacewong-IDE\",\r\n        \"route\": \"et2\"\r\n    }\r\n}"
+    }
+}
+```
+
+### without label
+submit task:
+```
+url:/api/v1/entrance/submit
+{
+    "executionContent": {"code": "echo 1", "runType":  "shell"},
+    "params": {"variable": {}, "configuration": {}},
+    "source":  {"scriptPath": "ip"},
+    "labels": {
+        "engineType": "shell-1",
+        "userCreator": "peacewong-IDE"
+    }
+}
+```
+
+The task will be routed to an Entrance instance that has no route label:
+```
+{
+    "method": "/api/entrance/submit",
+    "status": 0,
+    "message": "OK",
+    "data": {
+        "taskID": 45159,
+        "execID": "exec_id018018linkis-cg-entrancelocalhost2:9205IDE_peacewong_shell_0"
+    }
+}
+```
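+
+The same holds for linkis-cli: omit `-labelMap route=...` and the job is dispatched to an unlabeled Entrance. A sketch mirroring the command above (the gateway address is an assumption):
+
+```
+sh bin/linkis-cli -submitUser hadoop -engineType shell-1 -codeType shell -code "whoami" --gatewayUrl http://127.0.0.1:9101
+```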
+
+## 6. Non-functional design
+
+### 6.1 Security
+No additional security concerns; the RESTful APIs require login authentication.
+
+### 6.2 Performance
+The impact on Gateway forwarding performance is small, since the relevant label and instance data are cached.
+
+### 6.3 Capacity
+Not applicable.
+
+### 6.4 High availability
+Not applicable.
+
+
+
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/architecture/public-enhancement-services/context-service/content-service-cleanup.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/architecture/public-enhancement-services/context-service/content-service-cleanup.md
index a7927748d9..31e3394b51 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/architecture/public-enhancement-services/context-service/content-service-cleanup.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/architecture/public-enhancement-services/context-service/content-service-cleanup.md
@@ -1,6 +1,6 @@
 ---
 title: CS 清理接口特性
-sidebar_position: 7
+sidebar_position: 8
 tags: [Feature]
 ---
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/architecture/public-enhancement-services/context-service/context-service-cache.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/architecture/public-enhancement-services/context-service/context-service-cache.md
index 97f0ebaae0..b53b01fa24 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/architecture/public-enhancement-services/context-service/context-service-cache.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/architecture/public-enhancement-services/context-service/context-service-cache.md
@@ -1,6 +1,6 @@
 ---
 title: CS Cache 架构
-sidebar_position: 1
+sidebar_position: 7
 ---
 
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/architecture/public-enhancement-services/context-service/context-service.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/architecture/public-enhancement-services/context-service/context-service.md
index 9da7b0dd2d..628b09a294 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/architecture/public-enhancement-services/context-service/context-service.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/architecture/public-enhancement-services/context-service/context-service.md
@@ -1,6 +1,6 @@
 ---
 title: CS 架构
-sidebar_position: 0.5
+sidebar_position: 1
 ---
 
 ## **ContextService架构**
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/release.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/release.md
index d5c6a168fa..42a742b2b2 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/release.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/release.md
@@ -7,6 +7,8 @@ sidebar_position: 0.1
 - [集成 Knife4j 和启用](/deployment/involve-knife4j-into-linkis.md)
 - [数据源功能模块接口优化](/api/http/linkis-ps-publicservice-api/metadataquery-api.md)
 - [JDBC 引擎支持数据源模式](/engine-usage/jdbc.md)
+- [EC历史引擎信息记录架构设计](/architecture/computation-governance-services/linkis-manager/ec-history-arc.md)
+- [微服务租户隔离架构设计](/architecture/microservice-governance-services/service_isolation.md)
 - [版本的 Release-Notes](/download/release-notes-1.2.0)
 
 ## 参数变化 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/tuning-and-troubleshooting/error-guide/interface.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/tuning-and-troubleshooting/error-guide/interface.md
index f2a1f53373..91ae870fbc 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/tuning-and-troubleshooting/error-guide/interface.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/tuning-and-troubleshooting/error-guide/interface.md
@@ -49,60 +49,60 @@ applicationName是应用名,通过应用名查找归属的微服务,去对
 ![](/Images/tuning-and-troubleshooting/error-guide/logs.png)
 
 - cg-linkismanager:
->GC日志:` /data/bdp/logs/linkis/linkis-cg-linkismanager-gc.log`
+>GC日志:` /${LINKIS_HOME}/logs/linkis/linkis-cg-linkismanager-gc.log`
 >
->服务日志:` /data/bdp/logs/linkis/linkis-cg-linkismanager.log`
+>服务日志:` /${LINKIS_HOME}/logs/linkis/linkis-cg-linkismanager.log`
 >
->服务的System.out日志:` /data/bdp/logs/linkis/linkis-cg-linkismanager.out`
+>服务的System.out日志:` /${LINKIS_HOME}/logs/linkis/linkis-cg-linkismanager.out`
 
 - cg-engineplugin:
->GC日志:` /data/bdp/logs/linkis/linkis-cg-engineplugin-gc.log`
+>GC日志:` /${LINKIS_HOME}/logs/linkis/linkis-cg-engineplugin-gc.log`
 >
->服务日志:` /data/bdp/logs/linkis/linkis-cg-engineplugin.log`
+>服务日志:` /${LINKIS_HOME}/logs/linkis/linkis-cg-engineplugin.log`
 >
->服务的System.out日志:` /data/bdp/logs/linkis/linkis-cg-engineplugin.out`
+>服务的System.out日志:` /${LINKIS_HOME}/logs/linkis/linkis-cg-engineplugin.out`
 
 - cg-engineconnmanager:
->GC日志:` /data/bdp/logs/linkis/linkis-cg-engineconnmanager-gc.log`
+>GC日志:` /${LINKIS_HOME}/logs/linkis/linkis-cg-engineconnmanager-gc.log`
 >
->服务日志:` /data/bdp/logs/linkis/linkis-cg-engineconnmanager.log`
+>服务日志:` /${LINKIS_HOME}/logs/linkis/linkis-cg-engineconnmanager.log`
 >
->服务的System.out日志:` /data/bdp/logs/linkis/linkis-cg-engineconnmanager.out`
+>服务的System.out日志:` /${LINKIS_HOME}/logs/linkis/linkis-cg-engineconnmanager.out`
 
 - cg-entrance:
->GC日志:` /data/bdp/logs/linkis/linkis-cg-entrance-gc.log`
+>GC日志:` /${LINKIS_HOME}/logs/linkis/linkis-cg-entrance-gc.log`
 >
->服务日志:` /data/bdp/logs/linkis/linkis-cg-entrance.log`
+>服务日志:` /${LINKIS_HOME}/logs/linkis/linkis-cg-entrance.log`
 >
->服务的System.out日志:` /data/bdp/logs/linkis/linkis-cg-entrance.out`
+>服务的System.out日志:` /${LINKIS_HOME}/logs/linkis/linkis-cg-entrance.out`
 
 - ps-bml:
->GC日志:` /data/bdp/logs/linkis/linkis-ps-bml-gc.log`
+>GC日志:` /${LINKIS_HOME}/logs/linkis/linkis-ps-bml-gc.log`
 >
->服务日志:` /data/bdp/logs/linkis/linkis-ps-bml.log`
+>服务日志:` /${LINKIS_HOME}/logs/linkis/linkis-ps-bml.log`
 >
->服务的System.out日志:` /data/bdp/logs/linkis/linkis-ps-bml.out`
+>服务的System.out日志:` /${LINKIS_HOME}/logs/linkis/linkis-ps-bml.out`
 
 - ps-cs:
->GC日志:` /data/bdp/logs/linkis/linkis-ps-cs-gc.log`
+>GC日志:` /${LINKIS_HOME}/logs/linkis/linkis-ps-cs-gc.log`
 >
->服务日志:` /data/bdp/logs/linkis/linkis-ps-cs.log`
+>服务日志:` /${LINKIS_HOME}/logs/linkis/linkis-ps-cs.log`
 >
->服务的System.out日志:` /data/bdp/logs/linkis/linkis-ps-cs.out`
+>服务的System.out日志:` /${LINKIS_HOME}/logs/linkis/linkis-ps-cs.out`
 
 - ps-datasource:
->GC日志:` /data/bdp/logs/linkis/linkis-ps-datasource-gc.log`
+>GC日志:` /${LINKIS_HOME}/logs/linkis/linkis-ps-datasource-gc.log`
 >
->服务日志:` /data/bdp/logs/linkis/linkis-ps-datasource.log`
+>服务日志:` /${LINKIS_HOME}/logs/linkis/linkis-ps-datasource.log`
 >
->服务的System.out日志:` /data/bdp/logs/linkis/linkis-ps-datasource.out`
+>服务的System.out日志:` /${LINKIS_HOME}/logs/linkis/linkis-ps-datasource.out`
 
 - ps-publicservice:
->GC日志:` /data/bdp/logs/linkis/linkis-ps-publicservice-gc.log`
+>GC日志:` /${LINKIS_HOME}/logs/linkis/linkis-ps-publicservice-gc.log`
 >
->服务日志:` /data/bdp/logs/linkis/linkis-ps-publicservice.log`
+>服务日志:` /${LINKIS_HOME}/logs/linkis/linkis-ps-publicservice.log`
 >
->服务的System.out日志:` /data/bdp/logs/linkis/linkis-ps-publicservice.out`
+>服务的System.out日志:` /${LINKIS_HOME}/logs/linkis/linkis-ps-publicservice.out`
 
 ###  4. 查看日志
 展示接口对应的报错信息
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/tuning-and-troubleshooting/overview.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/tuning-and-troubleshooting/overview.md
index 7b70ee524e..d9ffc57129 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/tuning-and-troubleshooting/overview.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/tuning-and-troubleshooting/overview.md
@@ -124,7 +124,7 @@ less log/* |grep -5n exception(或则less log/* |grep -5n ERROR)
 
 #### 3.4.2 执行引擎任务的异常排查
 
-** step1:找到引擎的启动部署目录 **  
+** step1:找到EngineConn的启动部署目录 **  
 
 - 方式1:如果执行日志中有显示,可以在管理台上查看到 如下图:        
 ![engine-log](https://user-images.githubusercontent.com/29391030/156343802-9d47fa98-dc70-4206-b07f-df439b291028.png)
@@ -133,7 +133,10 @@ less log/* |grep -5n exception(或则less log/* |grep -5n ERROR)
 ```shell script
 # 如果不清楚taskid,可以按时间排序后进行选择 ll -rt /appcom/tmp/${执行的用户}/${日期}/${引擎}/  
 cd /appcom/tmp/${执行的用户}/${日期}/${引擎}/${taskId}  
+# 例如一个Spark 引擎的启动目录如下:
+/appcom/tmp/hadoop/20210824/spark/6a09d5fb-81dd-41af-a58b-9cb5d5d81b5a
 ```
+
 目录大体如下 
 ```shell script
 conf -> /appcom/tmp/engineConnPublickDir/6a09d5fb-81dd-41af-a58b-9cb5d5d81b5a/v000002/conf #引擎的配置文件  
@@ -145,6 +148,7 @@ logs #引擎启动执行的相关日志
 ** step2:查看引擎的日志 **
 ```shell script
 less logs/stdout  
+less logs/stderr
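+# Optionally search the logs for error keywords with surrounding context (an illustrative example; adjust the keyword as needed)
+grep -i -n -5 "error" logs/stderr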
 ```
 
 ** step3:尝试手动执行脚本(如果需要) **  
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/user-guide/sdk-manual.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/user-guide/sdk-manual.md
index 3a7db6550d..4215d7b56d 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/user-guide/sdk-manual.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/user-guide/sdk-manual.md
@@ -53,35 +53,34 @@ public class LinkisClientTest {
             .discoveryFrequency(1, TimeUnit.MINUTES)  // discovery frequency
             .loadbalancerEnabled(true)  // enable loadbalance
             .maxConnectionSize(5)   // set max Connection
-            .retryEnabled(false) // set retry
+            .retryEnabled(true) // set retry
             .readTimeout(30000)  //set read timeout
             .setAuthenticationStrategy(new StaticAuthenticationStrategy())   //AuthenticationStrategy Linkis authen suppory static and Token
             .setAuthTokenKey("hadoop")  // set submit user
-            .setAuthTokenValue("hadoop")))  // set passwd or token (setAuthTokenValue("test"))
+            .setAuthTokenValue("123456")))  // set passwd or token (setAuthTokenValue("test"))
             .setDWSVersion("v1") //linkis rest version v1
             .build();
 
     // 2. new Client(Linkis Client) by clientConfig
     private static UJESClient client = new UJESClientImpl(clientConfig);
 
-    public static void main(String[] args){
+    public static void main(String[] args) {
 
-        String user = "hadoop"; // execute user
+        String user = "hadoop"; // 用户需要和AuthTokenKey的值保持一致
         String executeCode = "df=spark.sql(\"show tables\")\n" +
                 "show(df)"; // code support:sql/hql/py/scala
         try {
 
             System.out.println("user : " + user + ", code : [" + executeCode + "]");
-            // 3.推荐用submit的方式,可以指定任务相关的label支持更多特性
+            // 3. build job and execute
             JobExecuteResult jobExecuteResult = toSubmit(user, executeCode);
-            //0.x兼容的方式,不推荐使用:JobExecuteResult jobExecuteResult = toExecute(user, executeCode);
             System.out.println("execId: " + jobExecuteResult.getExecID() + ", taskId: " + jobExecuteResult.taskID());
             // 4. get job info
             JobInfoResult jobInfoResult = client.getJobInfo(jobExecuteResult);
             int sleepTimeMills = 1000;
             int logFromLen = 0;
             int logSize = 100;
-            while(!jobInfoResult.isCompleted()) {
+            while (!jobInfoResult.isCompleted()) {
                 // 5. get progress and log
                 JobProgressResult progress = client.progress(jobExecuteResult);
                 System.out.println("progress: " + progress.getProgress());
@@ -98,25 +97,24 @@ public class LinkisClientTest {
             // multiple result sets will be generated)
             String resultSet = jobInfo.getResultSetList(client)[0];
             // 7. get resultContent
-            Object fileContents = client.resultSet(ResultSetAction.builder().setPath(resultSet).setUser(jobExecuteResult.getUser()).build()).getFileContent();
-            System.out.println("res: " + fileContents);
+            ResultSetResult resultSetResult = client.resultSet(ResultSetAction.builder().setPath(resultSet).setUser(jobExecuteResult.getUser()).build());
+            System.out.println("metadata: " + resultSetResult.getMetadata()); // column name type
+            System.out.println("res: " + resultSetResult.getFileContent()); //row data
         } catch (Exception e) {
-            e.printStackTrace();
+            e.printStackTrace();// please use log
             IOUtils.closeQuietly(client);
         }
         IOUtils.closeQuietly(client);
     }
 
-    /**
-     * Linkis 1.0 recommends the use of Submit method
-     */
+
     private static JobExecuteResult toSubmit(String user, String code) {
         // 1. build  params
         // set label map :EngineTypeLabel/UserCreatorLabel/EngineRunTypeLabel/Tenant
         Map<String, Object> labels = new HashMap<String, Object>();
         labels.put(LabelKeyConstant.ENGINE_TYPE_KEY, "spark-2.4.3"); // required engineType Label
-        labels.put(LabelKeyConstant.USER_CREATOR_TYPE_KEY, user + "-APPName");// 请求的用户和应用名,两个参数都不能少,其中APPName不能带有"-"建议替换为"_"
-        labels.put(LabelKeyConstant.CODE_TYPE_KEY, "py"); // 指定脚本类型
+        labels.put(LabelKeyConstant.USER_CREATOR_TYPE_KEY, user + "-APPName");// required execute user and creator eg:hadoop-IDE
+        labels.put(LabelKeyConstant.CODE_TYPE_KEY, "py"); // required codeType
         // set start up map :engineConn start params
         Map<String, Object> startupMap = new HashMap<String, Object>(16);
         // Support setting engine native parameters,For example: parameters of engines such as spark/hive
@@ -135,75 +133,42 @@ public class LinkisClientTest {
         // 3. to execute
         return client.submit(jobSubmitAction);
     }
-
-    /**
-     * Compatible with 0.X execution mode
-     */
-    private static JobExecuteResult toExecute(String user, String code) {
-        // 1. build  params
-        // set label map :EngineTypeLabel/UserCreatorLabel/EngineRunTypeLabel/Tenant
-        Map<String, Object> labels = new HashMap<String, Object>();
-        // labels.put(LabelKeyConstant.TENANT_KEY, "fate");
-        // set start up map :engineConn start params
-        Map<String, Object> startupMap = new HashMap<String, Object>(16);
-        // Support setting engine native parameters,For example: parameters of engines such as spark/hive
-        startupMap.put("spark.executor.instances", 2);
-        // setting linkis params
-        startupMap.put("wds.linkis.rm.yarnqueue", "dws");
-
-        // 2. build JobExecuteAction (0.X old way of using)
-        JobExecuteAction executionAction = JobExecuteAction.builder()
-                .setCreator("AppName")  //creator, the system name of the client requesting linkis, used for system-level isolation
-                .addExecuteCode(code)   //Execution Code
-                .setEngineTypeStr("spark") // engineConn type
-                .setRunTypeStr("py") // code type
-                .setUser(user)   //execute user
-                .setStartupParams(startupMap) // start up params
-                .build();
-        executionAction.addRequestPayload(TaskConstant.LABELS, labels);
-        String body = executionAction.getRequestPayload();
-        System.out.println(body);
-
-        // 3. to execute
-        return client.execute(executionAction);
-    }
 }
 ```
 
 运行上述的代码即可以完成任务提交/执行/日志/结果集获取等
 
 ## 3. Scala测试代码:
+
 ```scala
 package org.apache.linkis.client.test
 
-import java.util
-import java.util.concurrent.TimeUnit
-
+import org.apache.commons.io.IOUtils
+import org.apache.commons.lang3.StringUtils
 import org.apache.linkis.common.utils.Utils
 import org.apache.linkis.httpclient.dws.authentication.StaticAuthenticationStrategy
 import org.apache.linkis.httpclient.dws.config.DWSClientConfigBuilder
 import org.apache.linkis.manager.label.constant.LabelKeyConstant
-import org.apache.linkis.protocol.constants.TaskConstant
-import org.apache.linkis.ujes.client.UJESClient
 import org.apache.linkis.ujes.client.request._
 import org.apache.linkis.ujes.client.response._
-import org.apache.commons.io.IOUtils
-import org.apache.commons.lang.StringUtils
+
+import java.util
+import java.util.concurrent.TimeUnit
 
 object LinkisClientTest {
-// 1. build config: linkis gateway url
+  // 1. build config: linkis gateway url
   val clientConfig = DWSClientConfigBuilder.newBuilder()
-    .addServerUrl("http://127.0.0.1:9001/")   //set linkis-mg-gateway url: http://{ip}:{port}
-    .connectionTimeout(30000)   //connectionTimeOut
+    .addServerUrl("http://127.0.0.1:9001/") //set linkis-mg-gateway url: http://{ip}:{port}
+    .connectionTimeout(30000) //connectionTimeOut
     .discoveryEnabled(false) //disable discovery
-    .discoveryFrequency(1, TimeUnit.MINUTES)  // discovery frequency
-    .loadbalancerEnabled(true)  // enable loadbalance
-    .maxConnectionSize(5)   // set max Connection
+    .discoveryFrequency(1, TimeUnit.MINUTES) // discovery frequency
+    .loadbalancerEnabled(true) // enable loadbalance
+    .maxConnectionSize(5) // set max Connection
     .retryEnabled(false) // set retry
-    .readTimeout(30000)  //set read timeout
-    .setAuthenticationStrategy(new StaticAuthenticationStrategy())   //AuthenticationStrategy Linkis authen suppory static and Token
-    .setAuthTokenKey("hadoop")  // set submit user
-    .setAuthTokenValue("hadoop")  // set passwd or token (setAuthTokenValue("BML-AUTH"))
+    .readTimeout(30000) //set read timeout
+    .setAuthenticationStrategy(new StaticAuthenticationStrategy()) // AuthenticationStrategy: Linkis authentication supports Static and Token
+    .setAuthTokenKey("hadoop") // set submit user
+    .setAuthTokenValue("hadoop") // set passwd or token (setAuthTokenValue("BML-AUTH"))
     .setDWSVersion("v1") //linkis rest version v1
     .build();
 
@@ -211,26 +176,25 @@ object LinkisClientTest {
   val client = UJESClient(clientConfig)
 
   def main(args: Array[String]): Unit = {
-    val user = "hadoop" // execute user
+    val user = "hadoop" // execute user 用户需要和AuthTokenKey的值保持一致
     val executeCode = "df=spark.sql(\"show tables\")\n" +
       "show(df)"; // code support:sql/hql/py/scala
     try {
       // 3. build job and execute
       println("user : " + user + ", code : [" + executeCode + "]")
-      //推荐使用submit,支持传递任务label
+      // submit is recommended, as it supports passing job labels
       val jobExecuteResult = toSubmit(user, executeCode)
-      //0.X: val jobExecuteResult = toExecute(user, executeCode)
       println("execId: " + jobExecuteResult.getExecID + ", taskId: " + jobExecuteResult.taskID)
       // 4. get job info
       var jobInfoResult = client.getJobInfo(jobExecuteResult)
       var logFromLen = 0
       val logSize = 100
-      val sleepTimeMills : Int = 1000
+      val sleepTimeMills: Int = 1000
       while (!jobInfoResult.isCompleted) {
         // 5. get progress and log
         val progress = client.progress(jobExecuteResult)
-       println("progress: " + progress.getProgress)
-        val logObj = client .log(jobExecuteResult, logFromLen, logSize)
+        println("progress: " + progress.getProgress)
+        val logObj = client.log(jobExecuteResult, logFromLen, logSize)
         logFromLen = logObj.fromLine
         val logArray = logObj.getLog
         // 0: info 1: warn 2: error 3: all
@@ -253,26 +217,24 @@ object LinkisClientTest {
       resultSetList.foreach(println)
       val oneResultSet = jobInfo.getResultSetList(client).head
       // 7. get resultContent
-      val fileContents = client.resultSet(ResultSetAction.builder().setPath(oneResultSet).setUser(jobExecuteResult.getUser).build()).getFileContent
-      println("First fileContents: ")
-      println(fileContents)
+      val resultSetResult: ResultSetResult = client.resultSet(ResultSetAction.builder.setPath(oneResultSet).setUser(jobExecuteResult.getUser).build)
+      println("metadata: " + resultSetResult.getMetadata) // column name type
+      println("res: " + resultSetResult.getFileContent) //row data
     } catch {
       case e: Exception => {
-        e.printStackTrace()
+        e.printStackTrace() //please use log
       }
     }
     IOUtils.closeQuietly(client)
   }
 
-  /**
-   * Linkis 1.0 recommends the use of Submit method
-   */
+
   def toSubmit(user: String, code: String): JobExecuteResult = {
     // 1. build  params
     // set label map :EngineTypeLabel/UserCreatorLabel/EngineRunTypeLabel/Tenant
     val labels: util.Map[String, Any] = new util.HashMap[String, Any]
     labels.put(LabelKeyConstant.ENGINE_TYPE_KEY, "spark-2.4.3"); // required engineType Label
-    labels.put(LabelKeyConstant.USER_CREATOR_TYPE_KEY, user + "-APPName");// 请求的用户和应用名,两个参数都不能少,其中APPName不能带有"-"建议替换为"_"
+    labels.put(LabelKeyConstant.USER_CREATOR_TYPE_KEY, user + "-APPName"); // required execute user and creator, eg: hadoop-IDE; APPName must not contain "-", replace it with "_"
+    labels.put(LabelKeyConstant.CODE_TYPE_KEY, "py"); // required codeType
 
     val startupMap = new java.util.HashMap[String, Any]()
@@ -291,36 +253,5 @@ object LinkisClientTest {
     // 3. to execute
     client.submit(jobSubmitAction)
   }
-
-
-  /**
-   * Compatible with 0.X execution mode
-   */
-  def toExecute(user: String, code: String): JobExecuteResult = {
-    // 1. build  params
-    // set label map :EngineTypeLabel/UserCreatorLabel/EngineRunTypeLabel/Tenant
-    val labels = new util.HashMap[String, Any]
-    // labels.put(LabelKeyConstant.TENANT_KEY, "fate");
-
-    val startupMap = new java.util.HashMap[String, Any]()
-    // Support setting engine native parameters,For example: parameters of engines such as spark/hive
-    startupMap.put("spark.executor.instances", 2)
-    // setting linkis params
-    startupMap.put("wds.linkis.rm.yarnqueue", "default")
-    // 2. build JobExecuteAction (0.X old way of using)
-    val  executionAction = JobExecuteAction.builder()
-      .setCreator("APPName")  //creator, the system name of the client requesting linkis, used for system-level isolation
-      .addExecuteCode(code)   //Execution Code
-      .setEngineTypeStr("spark") // engineConn type
-      .setRunTypeStr("py") // code type
-      .setUser(user)   //execute user
-      .setStartupParams(startupMap) // start up params
-      .build();
-    executionAction.addRequestPayload(TaskConstant.LABELS, labels);
-    // 3. to execute
-    client.execute(executionAction)
-  }
-
-
 }
 ```
diff --git a/static/Images/Architecture/Gateway/service_isolation_arc.png b/static/Images/Architecture/Gateway/service_isolation_arc.png
new file mode 100644
index 0000000000..4a1c99f759
Binary files /dev/null and b/static/Images/Architecture/Gateway/service_isolation_arc.png differ
diff --git a/static/Images/Architecture/Gateway/service_isolation_console.png b/static/Images/Architecture/Gateway/service_isolation_console.png
new file mode 100644
index 0000000000..8c6a6010b6
Binary files /dev/null and b/static/Images/Architecture/Gateway/service_isolation_console.png differ
diff --git a/static/Images/Architecture/Gateway/service_isolation_time.png b/static/Images/Architecture/Gateway/service_isolation_time.png
new file mode 100644
index 0000000000..6195479752
Binary files /dev/null and b/static/Images/Architecture/Gateway/service_isolation_time.png differ
diff --git a/static/Images/Architecture/Job_submission_preparation_and_execution_process/linkis_job_arc.png b/static/Images/Architecture/Job_submission_preparation_and_execution_process/linkis_job_arc.png
new file mode 100644
index 0000000000..a29877a0ec
Binary files /dev/null and b/static/Images/Architecture/Job_submission_preparation_and_execution_process/linkis_job_arc.png differ
diff --git a/static/Images/Architecture/Job_submission_preparation_and_execution_process/linkis_job_flow.png b/static/Images/Architecture/Job_submission_preparation_and_execution_process/linkis_job_flow.png
new file mode 100644
index 0000000000..1d450795c7
Binary files /dev/null and b/static/Images/Architecture/Job_submission_preparation_and_execution_process/linkis_job_flow.png differ
diff --git a/static/Images/Architecture/Job_submission_preparation_and_execution_process/linkis_job_time.png b/static/Images/Architecture/Job_submission_preparation_and_execution_process/linkis_job_time.png
new file mode 100644
index 0000000000..9e0288ca47
Binary files /dev/null and b/static/Images/Architecture/Job_submission_preparation_and_execution_process/linkis_job_time.png differ
diff --git a/static/Images/Architecture/LinkisManager/ecHistoryArc.png b/static/Images/Architecture/LinkisManager/ecHistoryArc.png
new file mode 100644
index 0000000000..f40e3f928e
Binary files /dev/null and b/static/Images/Architecture/LinkisManager/ecHistoryArc.png differ
diff --git a/static/Images/Architecture/LinkisManager/ecHistoryTime.png b/static/Images/Architecture/LinkisManager/ecHistoryTime.png
new file mode 100644
index 0000000000..5451eb2e21
Binary files /dev/null and b/static/Images/Architecture/LinkisManager/ecHistoryTime.png differ


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@linkis.apache.org
For additional commands, e-mail: commits-help@linkis.apache.org