Posted to commits@linkis.apache.org by ca...@apache.org on 2022/07/07 09:21:21 UTC

[incubator-linkis-website] branch dev updated: update execute api doc (#413)

This is an automated email from the ASF dual-hosted git repository.

casion pushed a commit to branch dev
in repository https://gitbox.apache.org/repos/asf/incubator-linkis-website.git


The following commit(s) were added to refs/heads/dev by this push:
     new 6d5d933f70 update execute api doc (#413)
6d5d933f70 is described below

commit 6d5d933f70150480abcd8afdcef21eacf2635871
Author: peacewong <pe...@apache.org>
AuthorDate: Thu Jul 7 17:21:17 2022 +0800

    update execute api doc (#413)
    
    * optimize task execute interface
    
    * move architecture doc
---
 docs/api/linkis_task_operator.md                   | 345 +++++++++++++++++---
 docs/architecture/commons/message_scheduler.md     |  18 -
 docs/architecture/commons/variable.md              | 104 ++++++
 .../engine}/add_an_engine_conn.md                  |   4 +-
 .../engine/engine_conn.md                          |   2 +-
 .../engine/engine_conn_manager.md                  |   2 +-
 .../computation_governance_services/entrance.md    |   2 +-
 ...submission_preparation_and_execution_process.md |   2 +-
 .../computation_governance_services/linkis-cli.md  |   2 +-
 .../computation_governance_services}/proxy_user.md |   2 +-
 .../current/api/linkis_task_operator.md            | 275 ++++++++++++++--
 .../architecture/commons/message_scheduler.md      |  18 -
 .../current/architecture/commons/variable.md       | 111 +++++++
 .../engine}/add_an_engine_conn.md                  |   2 +-
 ...submission_preparation_and_execution_process.md |   4 +-
 .../computation_governance_services}/proxy_user.md |   2 +-
 .../version-1.1.1/api/linkis_task_operator.md      | 293 +++++++++++++++--
 .../version-1.1.2/api/linkis_task_operator.md      | 293 +++++++++++++++--
 .../engine}/add_an_engine_conn.md                  |  14 +-
 .../engine}/proxy_user.md                          |   2 +-
 ...submission_preparation_and_execution_process.md |  14 +-
 static/Images/Architecture/Commons/var_arc.png     | Bin 0 -> 15050 bytes
 .../version-1.1.1/api/linkis_task_operator.md      | 363 +++++++++++++++++----
 .../version-1.1.2/api/linkis_task_operator.md      | 363 +++++++++++++++++----
 .../engine}/add_an_engine_conn.md                  |   2 +-
 ...submission_preparation_and_execution_process.md |   2 +-
 .../computation_governance_services/linkis-cli.md  |   2 +-
 .../computation_governance_services}/proxy_user.md |   2 +-
 28 files changed, 1932 insertions(+), 313 deletions(-)

diff --git a/docs/api/linkis_task_operator.md b/docs/api/linkis_task_operator.md
index 203fa570ba..0fce9e63e8 100644
--- a/docs/api/linkis_task_operator.md
+++ b/docs/api/linkis_task_operator.md
@@ -25,45 +25,49 @@ sidebar_position: 2
  
 For more information about the Linkis Restful interface specification, please refer to: [Linkis Restful Interface Specification](/community/development_specification/api)
 
-### 1. Submit for Execution
+### 1. Submit task
 
-- Interface `/api/rest_j/v1/entrance/execute`
-
-- Submission method `POST`
-
-```json
-{
-    "executeApplicationName": "hive", //Engine type
-    "requestApplicationName": "dss", //Client service type
-    "executionCode": "show tables",
-    "params": {"variable": {}, "configuration": {}},
-    "runType": "hql", //The type of script to run
-    "source": {"scriptPath":"file:///tmp/hadoop/1.hql"}
-}
-```
 
 - Interface `/api/rest_j/v1/entrance/submit`
 
 - Submission method `POST`
 
+- Request Parameters
+
 ```json
 {
-    "executionContent": {"code": "show tables", "runType": "sql"},
-    "params": {"variable": {}, "configuration": {}},
-    "source": {"scriptPath": "file:///mnt/bdp/hadoop/1.hql"},
-    "labels": {
-        "engineType": "spark-2.4.3",
-        "userCreator": "hadoop-IDE"
+  "executionContent": {
+    "code": "show tables",
+    "runType": "sql"
+  },
+  "params": {
+    "variable": {// task variable 
+      "testvar": "hello" 
+    },
+    "configuration": {
+      "runtime": {// task runtime params 
+        "jdbc.url": "XX"
+      },
+      "startup": { // ec start up params 
+        "spark.executor.cores": "4"
+      }
     }
+  },
+  "source": { //task source information
+    "scriptPath": "file:///tmp/hadoop/test.sql"
+  },
+  "labels": {
+    "engineType": "spark-2.4.3",
+    "userCreator": "hadoop-IDE"
+  }
 }
 ```
 
-
--Return to example
+- Sample Response
 
 ```json
 {
- "method": "/api/rest_j/v1/entrance/execute",
+ "method": "/api/rest_j/v1/entrance/submit",
  "status": 0,
  "message": "Request executed successfully",
  "data": {
@@ -84,7 +88,7 @@ For more information about the Linkis Restful interface specification, please re
 
 - Submission method `GET`
 
-- Return to example
+- Sample Response
 
 ```json
 {
@@ -106,7 +110,7 @@ For more information about the Linkis Restful interface specification, please re
 
 - The request parameter fromLine refers to the number of lines from which to get, and size refers to the number of lines of logs that this request gets
 
-- Return example, where the returned fromLine needs to be used as a parameter for the next request of this interface
+- Sample Response, where the returned fromLine needs to be used as a parameter for the next request of this interface
 
 ```json
 {
@@ -121,38 +125,47 @@ For more information about the Linkis Restful interface specification, please re
 }
 ```
 
-### 4. Get Progress
+### 4. Get Progress and resource
 
-- Interface `/api/rest_j/v1/entrance/${execID}/progress`
+- Interface `/api/rest_j/v1/entrance/${execID}/progressWithResource`
 
 - Submission method `GET`
 
-- Return to example
+- Sample Response
 
 ```json
 {
-  "method": "/api/rest_j/v1/entrance/{execID}/progress",
+  "method": "/api/entrance/exec_id018017linkis-cg-entrance127.0.0.1:9205IDE_hadoop_spark_2/progressWithResource",
   "status": 0,
-  "message": "Return progress information",
+  "message": "OK",
   "data": {
-    "execID": "${execID}",
-    "progress": 0.2,
-    "progressInfo": [
+    "yarnMetrics": {
+      "yarnResource": [
         {
-        "id": "job-1",
-        "succeedTasks": 2,
-        "failedTasks": 0,
-        "runningTasks": 5,
-        "totalTasks": 10
-        },
-        {
-        "id": "job-2",
-        "succeedTasks": 5,
-        "failedTasks": 0,
-        "runningTasks": 5,
-        "totalTasks": 10
+          "queueMemory": 9663676416,
+          "queueCores": 6,
+          "queueInstances": 0,
+          "jobStatus": "COMPLETED",
+          "applicationId": "application_1655364300926_69504",
+          "queue": "default"
         }
-    ]
+      ],
+      "memoryPercent": 0.009,
+      "memoryRGB": "green",
+      "coreRGB": "green",
+      "corePercent": 0.02
+    },
+    "progress": 0.5,
+    "progressInfo": [
+      {
+        "succeedTasks": 4,
+        "failedTasks": 0,
+        "id": "jobId-1(linkis-spark-mix-code-1946915)",
+        "totalTasks": 6,
+        "runningTasks": 0
+      }
+    ],
+    "execID": "exec_id018017linkis-cg-entrance127.0.0.1:9205IDE_hadoop_spark_2"
   }
 }
 ```
@@ -163,6 +176,8 @@ For more information about the Linkis Restful interface specification, please re
 
 - Submission method `POST`
 
+- Sample Response
+
 ```json
 {
  "method": "/api/rest_j/v1/entrance/{execID}/kill",
@@ -172,4 +187,240 @@ For more information about the Linkis Restful interface specification, please re
    "execID":"${execID}"
   }
 }
+```
+
+### 6. Get task info
+
+- Interface `/api/rest_j/v1/jobhistory/{id}/get`
+
+- Submission method `GET`
+
+**Request Parameters**:
+
+| Parameter name | Parameter description | Request type | Required | Data type | schema |
+| -------- | -------- | ----- | -------- | -------- | ------ |
+|id|task id|path|true|string||
+
+
+- Sample Response
+
+````json
+{
+    "method": null,
+    "status": 0,
+    "message": "OK",
+    "data": {
+        "task": {
+                "taskID": 1,
+                "instance": "xxx",
+                "execId": "exec-id-xxx",
+                "umUser": "test",
+                "engineInstance": "xxx",
+                "progress": "10%",
+                "logPath": "hdfs://xxx/xxx/xxx",
+                "resultLocation": "hdfs://xxx/xxx/xxx",
+                "status": "FAILED",
+                "createdTime": "2019-01-01 00:00:00",
+                "updatedTime": "2019-01-01 01:00:00",
+                "engineType": "spark",
+                "errorCode": 100,
+                "errDesc": "Task Failed with error code 100",
+                "executeApplicationName": "hello world",
+                "requestApplicationName": "hello world",
+                "runType": "xxx",
+                "paramJson": "{\"xxx\":\"xxx\"}",
+                "costTime": 10000,
+                "strongerExecId": "execId-xxx",
+                "sourceJson": "{\"xxx\":\"xxx\"}"
+        }
+    }
+}
+````
+
+### 7. Get result set info
+
+Support for multiple result sets
+
+- Interface `/api/rest_j/v1/filesystem/getDirFileTrees`
+
+- Submission method `GET`
+
+**Request Parameters**:
+
+| Parameter name | Parameter description | Request type | Required | Data type | schema |
+| -------- | -------- | ----- | -------- | -------- | ------ |
+|path|result directory |query|true|string||
+
+
+- Sample Response
+
+````json
+{
+  "method": "/api/filesystem/getDirFileTrees",
+  "status": 0,
+  "message": "OK",
+  "data": {
+    "dirFileTrees": {
+      "name": "1946923",
+      "path": "hdfs:///tmp/hadoop/linkis/2022-07-06/211446/IDE/1946923",
+      "properties": null,
+      "children": [
+        {
+          "name": "_0.dolphin",
+          "path": "hdfs:///tmp/hadoop/linkis/2022-07-06/211446/IDE/1946923/_0.dolphin",//result set 1
+          "properties": {
+            "size": "7900",
+            "modifytime": "1657113288360"
+          },
+          "children": null,
+          "isLeaf": true,
+          "parentPath": "hdfs:///tmp/hadoop/linkis/2022-07-06/211446/IDE/1946923"
+        },
+        {
+          "name": "_1.dolphin",
+          "path": "hdfs:///tmp/hadoop/linkis/2022-07-06/211446/IDE/1946923/_1.dolphin",//result set 2
+          "properties": {
+            "size": "7900",
+            "modifytime": "1657113288614"
+          },
+          "children": null,
+          "isLeaf": true,
+          "parentPath": "hdfs:///tmp/hadoop/linkis/2022-07-06/211446/IDE/1946923"
+        }
+      ],
+      "isLeaf": false,
+      "parentPath": null
+    }
+  }
+}
+````
+
+### 8. Get result content
+
+- Interface `/api/rest_j/v1/filesystem/openFile`
+
+- Submission method `GET`
+
+**Request Parameters**:
+
+| Parameter name | Parameter description | Request type | Required | Data type | schema |
+| -------- | -------- | ----- | -------- | -------- | ------ |
+|path|result path|query|true|string||
+|charset|Charset|query|false|string||
+|page|page number|query|false|ref||
+|pageSize|page size|query|false|ref||
+
+
+- Sample Response
+
+````json
+{
+  "method": "/api/filesystem/openFile",
+  "status": 0,
+  "message": "OK",
+  "data": {
+    "metadata": [
+      {
+        "columnName": "count(1)",
+        "comment": "NULL",
+        "dataType": "long"
+      }
+    ],
+    "totalPage": 0,
+    "totalLine": 1,
+    "page": 1,
+    "type": "2",
+    "fileContent": [
+      [
+        "28"
+      ]
+    ]
+  }
+}
+````
+
+
+### 9. Get Result by stream
+
+Get the result as a CSV or Excel file
+
+- Interface `/api/rest_j/v1/filesystem/resultsetToExcel`
+
+- Submission method `GET`
+
+**Request Parameters**:
+
+| Parameter name | Parameter description | Request type | Required | Data type | schema |
+| -------- | -------- | ----- | -------- | -------- | ------ |
+|autoFormat|Whether to format automatically|query|false|boolean||
+|charset|charset|query|false|string||
+|csvSeperator|csv separator|query|false|string||
+|limit|row limit|query|false|ref||
+|nullValue|null value|query|false|string||
+|outputFileName|Output file name|query|false|string||
+|outputFileType|Output file type csv or excel|query|false|string||
+|path|result path|query|false|string||
+|quoteRetouchEnable|Whether to enable quote retouching|query|false|boolean||
+|sheetName|sheet name|query|false|string||
+
+
+- Response
+
+````json
+binary stream
+````
+
+
+### 10. Compatible with 0.x task submission interface
+
+- Interface `/api/rest_j/v1/entrance/execute`
+
+- Submission method `POST`
+
+
+- Request Parameters
+
+
+```json
+{
+    "executeApplicationName": "hive", //Engine type
+    "requestApplicationName": "dss", //Client service type
+    "executionCode": "show tables",
+    "params": {
+      "variable": {// task variable 
+        "testvar": "hello"
+      },
+      "configuration": {
+        "runtime": {// task runtime params 
+          "jdbc.url": "XX"
+        },
+        "startup": { // ec start up params 
+          "spark.executor.cores": "4"
+        }
+      }
+    },
+    "source": { //task source information
+      "scriptPath": "file:///tmp/hadoop/test.sql"
+    },
+    "labels": {
+      "engineType": "spark-2.4.3",
+      "userCreator": "hadoop-IDE"
+    },
+    "runType": "hql", //The type of script to run
+    "source": {"scriptPath":"file:///tmp/hadoop/1.hql"}
+}
+```
+
+- Sample Response
+
+```json
+{
+ "method": "/api/rest_j/v1/entrance/execute",
+ "status": 0,
+ "message": "Request executed successfully",
+ "data": {
+   "execID": "030418IDEhivebdpdwc010004:10087IDE_hadoop_21",
+   "taskID": "123"
+ }
+}
 ```
\ No newline at end of file
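
Taken together, the endpoints documented above form a submit, poll, and fetch-results flow. Below is a minimal sketch of that flow over plain HTTP from Python; the gateway address, the pre-authenticated session, and the terminal status strings are assumptions to adapt to your own deployment.

```python
import time
import requests

GATEWAY = "http://127.0.0.1:9001"   # assumed Linkis gateway address
session = requests.Session()        # assumed to already carry a valid login cookie/token

# 1. Submit the task (section 1)
payload = {
    "executionContent": {"code": "show tables", "runType": "sql"},
    "params": {"variable": {"testvar": "hello"}, "configuration": {"runtime": {}, "startup": {}}},
    "source": {"scriptPath": "file:///tmp/hadoop/test.sql"},
    "labels": {"engineType": "spark-2.4.3", "userCreator": "hadoop-IDE"},
}
data = session.post(f"{GATEWAY}/api/rest_j/v1/entrance/submit", json=payload).json()["data"]
exec_id, task_id = data["execID"], data["taskID"]

# 2. Poll the job record (section 6) until the task reaches a terminal state;
#    the status strings below are assumptions, check them against your deployment
while True:
    task = session.get(f"{GATEWAY}/api/rest_j/v1/jobhistory/{task_id}/get").json()["data"]["task"]
    if task["status"].upper() in ("SUCCEED", "FAILED", "CANCELLED", "TIMEOUT"):
        break
    time.sleep(2)

# 3. List the result sets under the job's result location (section 7)
tree = session.get(f"{GATEWAY}/api/rest_j/v1/filesystem/getDirFileTrees",
                   params={"path": task["resultLocation"]}).json()["data"]["dirFileTrees"]

# 4. Open the first result set (section 8)
first_path = tree["children"][0]["path"]
result = session.get(f"{GATEWAY}/api/rest_j/v1/filesystem/openFile",
                     params={"path": first_path}).json()["data"]
print(task["status"], result["metadata"], result["fileContent"])
```
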
diff --git a/docs/architecture/commons/message_scheduler.md b/docs/architecture/commons/message_scheduler.md
deleted file mode 100644
index b6ebca3fdd..0000000000
--- a/docs/architecture/commons/message_scheduler.md
+++ /dev/null
@@ -1,18 +0,0 @@
----
-title: Message Scheduler Module
-sidebar_position: 1
----
-## 1 Overview
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Linkis-RPC can realize the communication between microservices. In order to simplify the use of RPC, Linkis provides the Message-Scheduler module, which is annotated by @Receiver Analyze, identify and call. At the same time, it also unifies the use of RPC and Restful interfaces, which has better scalability.
-## 2. Architecture description
-## 2.1. Architecture design diagram
-![Module Design Drawing](/Images/Architecture/Commons/linkis-message-scheduler.png)
-## 2.2. Module description
-* ServiceParser: Parse the (Object) object of the Service module, and encapsulate the @Receiver annotated method into the ServiceMethod object.
-* ServiceRegistry: Register the corresponding Service module, and store the ServiceMethod parsed by the Service in the Map container.
-* ImplicitParser: parse the object of the Implicit module, and the method annotated with @Implicit will be encapsulated into the ImplicitMethod object.
-* ImplicitRegistry: Register the corresponding Implicit module, and store the resolved ImplicitMethod in a Map container.
-* Converter: Start to scan the non-interface non-abstract subclass of RequestMethod and store it in the Map, parse the Restful and match the related RequestProtocol.
-* Publisher: Realize the publishing scheduling function, find the ServiceMethod matching the RequestProtocol in the Registry, and encapsulate it as a Job for submission scheduling.
-* Scheduler: Scheduling implementation, using Linkis-Scheduler to execute the job and return the MessageJob object.
-* TxManager: Complete transaction management, perform transaction management on job execution, and judge whether to commit or rollback after the job execution ends.
\ No newline at end of file
diff --git a/docs/architecture/commons/variable.md b/docs/architecture/commons/variable.md
new file mode 100644
index 0000000000..e026e1bb6a
--- /dev/null
+++ b/docs/architecture/commons/variable.md
@@ -0,0 +1,104 @@
+---
+title: Custom Variable Design
+sidebar_position: 1
+---
+
+## 1. General
+### Requirements Background
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Users want to be able to define common variables when writing code and have them replaced at execution time. For example, a user who runs the same SQL in batches every day needs to specify the previous day's partition time; expressing that in plain SQL is complicated, whereas a system-provided run_date variable makes it very convenient to use.
+### Target
+1. Support variable substitution in task code
+2. Support custom variables: users can define them in scripts and in the task parameters submitted to Linkis, with support for simple calculations such as + and -
+3. Provide preset system variables such as run_date, run_month and run_today
+
+## 2. Overall Design
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;During Linkis task execution, custom variable substitution is performed in Entrance, mainly by an interceptor that runs before the task is submitted for execution. Regular expressions extract the variables used and defined in the task code, and the code is rewritten with the initial values of the custom variables passed in with the task, producing the final executable code.
+
+### 2.1 Technical Architecture
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;The overall structure of custom variables is as follows. After a task is submitted, it passes through the variable replacement interceptor. First, all variables and expressions used in the code are parsed; they are then replaced with the initial values of the system and user-defined variables; finally, the resolved code is submitted to EngineConn for execution. The code that reaches the underlying engine therefore already has its variables replaced.
+
+![var_arc](/Images/Architecture/Commons/var_arc.png)
+
+## 3. Function Introduction
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;The variables supported by Linkis are divided into custom variables and built-in system variables. Built-in variables are predefined by Linkis and can be used directly. Different variable types support different operations: strings support +, integers and floating-point numbers support + - * /, and dates support + and -.
+
+### 3.1 Built-in variables
+The currently supported built-in variables are as follows:
+
+| variable name | variable type | variable meaning | variable value example |
+| ------ | -------- | -------- | ------------ |
+| run\_date | String | Data statistics time (user-settable; defaults to the day before the current date). If yesterday's data is run today, this is yesterday's date, in the format yyyyMMdd | 20180129 |
+| run\_date\_std | String | Data statistics time (standard date format). If yesterday's data is run today, this is yesterday's date, in the format yyyy-MM-dd | 2018-01-29 |
+| run_today | String | The day after run_date (data statistics time), the format is yyyyMMdd | 20211210 |
+| run_today_std | String | The day after run_date (data statistics time) (standard format), the format is yyyy-MM-dd | 2021-12-10 |
+| run_mon | String | The month of the data statistics time, the format is yyyyMM | 202112 |
+| run_mon_std | String | The month of the data statistics time (standard format), the format is yyyy-MM | 2021-12 |
+| run\_month\_begin | String | The first day of the month in which the data is counted, in the format yyyyMMdd | 20180101 |
+| run\_month\_begin\_std | String | The first day of the month where the data statistics time is (standard date format), the format is yyyy-MM-dd | 2018-01-01 |
+| run_month_now_begin | String | The first day of the month in which run_today falls, in the format yyyyMMdd | 20211201 |
+| run_month_now_begin_std | String | The first day of the month in which run_today falls (standard format), in the format yyyy-MM-dd | 2021-12-01 |
+| run\_month\_end | String | The last day of the month in which the data is counted, in the format yyyyMMdd | 20180131 |
+| run\_month\_end\_std | String | The last day of the month in which the data is counted (standard date format), the format is yyyy-MM-dd | 2018-01-31 |
+| run_month_now_end | String | The last day of the month in which run_today falls, in the format yyyyMMdd | 20211231 |
+| run_month_now_end_std | String | The last day of the month in which run_today falls (standard date format), in the format yyyy-MM-dd | 2021-12-31 |
+| run_quarter_begin | String | The first day of the quarter in which the data is counted, in the format yyyyMMdd | 20210401 |
+| run_quarter_end | String | The last day of the quarter in which the data is counted, in the format yyyyMMdd | 20210630 |
+| run_half_year_begin | String | The first day of the half year where the data statistics time is located, in the format yyyyMMdd | 20210101 |
+| run_half_year_end | String | The last day of the half year where the data statistics time is located, the format is yyyyMMdd | 20210630 |
+| run_year_begin | String | The first day of the year in which the data is counted, in the format yyyyMMdd | 20210101 |
+| run_year_end | String | The last day of the year in which the data is counted, in the format yyyyMMdd | 20211231 |
+| run_quarter_begin_std | String | The first day of the quarter in which the data is counted (standard format), the format is yyyy-MM-dd | 2021-10-01 |
+| run_quarter_end_std | String | The last day of the quarter where the data statistics time is located (standard format), the format is yyyy-MM-dd | 2021-12-31 |
+| run_half_year_begin_std | String | The first day of the half year where the data statistics time is located (standard format), the format is yyyy-MM-dd | 2021-07-01 |
+| run_half_year_end_std | String | The last day of the half year where the data statistics time is located (standard format), the format is yyyy-MM-dd | 2021-12-31 |
+| run_year_begin_std | String | The first day of the year in which the data is counted (standard format), the format is yyyy-MM-dd | 2021-01-01 |
+| run_year_end_std | String | The last day of the year in which the data is counted (standard format), the format is yyyy-MM-dd | 2021-12-31 |
+
+Details:
+
+1. run_date is the core built-in date variable. It supports a user-defined value; if not specified, it defaults to the day before the current system date.
+2. The other derived built-in date variables are calculated relative to run_date: once run_date changes, their values change automatically. They do not support setting an initial value and can only be changed by modifying run_date.
+3. Built-in variables support richer usage scenarios: ${run_date-1} is the day before run_date; ${run_month_begin-1} is the first day of the month before run_month_begin, where -1 means minus one month.
+
+### 3.2 Custom variables
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;What are custom variables? They are user variables that are defined first and used afterwards. User-defined variables currently support string, integer and floating-point types; strings support the + operation, while integers and floating-point numbers support + - * /. User-defined variables do not conflict with the set variable syntax supported by SparkSQL and HQL, but identical names are not allowed. How are custom variables defined and used? As follows:
+````
+## Defined in the code, specified before the task code
+sql type definition method:
+--@set f=20.1
+The python/shell types are defined as follows:
+#@set f=20.1
+Note: Only one variable can be defined on one line
+````
+Variables are used directly in the code via `${varName expression}`, for example `${f*2}`.
+
+### 3.3 Variable scope
+Custom variables in Linkis also have a scope. The priority order is: variables defined in the script > variables defined in the task parameters > the built-in run_date variable. Task parameters are defined as follows:
+````
+##restful
+{
+    "executionContent": {"code": "select \"${f-1}\";", "runType": "sql"},
+    "params": {
+                    "variable": {"f": "20.1"},
+                    "configuration": {
+                            "runtime": {
+                                "linkis.openlookeng.url":"http://127.0.0.1:9090"
+                                }
+                            }
+                    },
+    "source": {"scriptPath": "file:///mnt/bdp/hadoop/1.sql"},
+    "labels": {
+        "engineType": "spark-2.4.3",
+        "userCreator": "hadoop-IDE"
+    }
+}
+## java SDK
+JobSubmitAction.builder
+  .addExecuteCode(code)
+  .setStartupParams(startupMap)
+  .setUser(user) //submit user
+  .addExecuteUser(user) //execute user
+  .setLabels(labels)
+  .setVariableMap(varMap) //setVar
+  .build
+````
\ No newline at end of file
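
The substitution described in variable.md above (collect the @set definitions, resolve built-in date variables such as run_date, then rewrite every ${...} expression before the code reaches EngineConn) can be illustrated with a small sketch. This is not the actual Linkis interceptor and covers only a fraction of its semantics; the function name and the day-offset handling for run_date are assumptions made for illustration.

```python
import re
from datetime import datetime, timedelta

def substitute(code, run_date=None):
    """Tiny illustration of Entrance-style variable substitution (not the real implementation)."""
    # run_date defaults to the day before the current system date (format yyyyMMdd)
    base = datetime.strptime(run_date, "%Y%m%d") if run_date else datetime.now() - timedelta(days=1)

    # collect custom variables defined as "--@set name=value" (sql) or "#@set name=value" (python/shell)
    variables = {name: float(value)
                 for name, value in re.findall(r"^(?:--|#)@set\s+(\w+)\s*=\s*(\S+)", code, re.M)}

    def replace(match):
        expr = match.group(1)
        m = re.fullmatch(r"run_date([+-]\d+)?", expr)
        if m:  # built-in date variable with an optional day offset, e.g. ${run_date-1}
            return (base + timedelta(days=int(m.group(1) or 0))).strftime("%Y%m%d")
        # simple + - * / expression over the user-defined variables, e.g. ${f*2}
        return str(eval(expr, {"__builtins__": {}}, variables))

    return re.sub(r"\$\{([^}]+)\}", replace, code)

print(substitute("--@set f=20.1\nselect ${f*2}, '${run_date-1}';"))
```
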
diff --git a/versioned_docs/version-1.1.2/architecture/add_an_engine_conn.md b/docs/architecture/computation_governance_services/engine/add_an_engine_conn.md
similarity index 99%
rename from versioned_docs/version-1.1.2/architecture/add_an_engine_conn.md
rename to docs/architecture/computation_governance_services/engine/add_an_engine_conn.md
index b69da6ef2a..5bb2e3a23c 100644
--- a/versioned_docs/version-1.1.2/architecture/add_an_engine_conn.md
+++ b/docs/architecture/computation_governance_services/engine/add_an_engine_conn.md
@@ -1,6 +1,6 @@
 ---
-title: Add an EngineConn
-sidebar_position: 3
+title: Start EngineConn
+sidebar_position: 4
 ---
 # How to add an EngineConn
 
diff --git a/docs/architecture/computation_governance_services/engine/engine_conn.md b/docs/architecture/computation_governance_services/engine/engine_conn.md
index 43d3b8a7b9..07e3272f6d 100644
--- a/docs/architecture/computation_governance_services/engine/engine_conn.md
+++ b/docs/architecture/computation_governance_services/engine/engine_conn.md
@@ -1,6 +1,6 @@
 ---
 title: EngineConn Design
-sidebar_position: 3
+sidebar_position: 1
 ---
 
 EngineConn architecture design
diff --git a/docs/architecture/computation_governance_services/engine/engine_conn_manager.md b/docs/architecture/computation_governance_services/engine/engine_conn_manager.md
index 8c8401dc92..383a614f78 100644
--- a/docs/architecture/computation_governance_services/engine/engine_conn_manager.md
+++ b/docs/architecture/computation_governance_services/engine/engine_conn_manager.md
@@ -1,6 +1,6 @@
 ---
 title: EngineConnManager Design
-sidebar_position: 3
+sidebar_position: 2
 ---
 
 EngineConnManager architecture design
diff --git a/docs/architecture/computation_governance_services/entrance.md b/docs/architecture/computation_governance_services/entrance.md
index ab8b1071bf..ac42ed656f 100644
--- a/docs/architecture/computation_governance_services/entrance.md
+++ b/docs/architecture/computation_governance_services/entrance.md
@@ -1,6 +1,6 @@
 ---
 title: Entrance Architecture Design
-sidebar_position: 3
+sidebar_position: 1
 ---
 
 The Linkis task submission portal is used to receive, schedule and forward execution requests and to manage the life cycle of computing tasks; it can return calculation results, logs, and progress to the caller. It was split out from the native capabilities of the Linkis 0.X Entrance.
diff --git a/docs/architecture/job_submission_preparation_and_execution_process.md b/docs/architecture/computation_governance_services/job_submission_preparation_and_execution_process.md
similarity index 99%
rename from docs/architecture/job_submission_preparation_and_execution_process.md
rename to docs/architecture/computation_governance_services/job_submission_preparation_and_execution_process.md
index dd6e8436c9..1eab642967 100644
--- a/docs/architecture/job_submission_preparation_and_execution_process.md
+++ b/docs/architecture/computation_governance_services/job_submission_preparation_and_execution_process.md
@@ -62,7 +62,7 @@ If the user has a reusable EngineConn in LinkisManager, the EngineConn is direct
 
 How to define a reusable EngineConn? It refers to those that can match all the label requirements of the computing task, and the EngineConn's own health status is Healthy (the load is low and the actual status is Idle). Then, all the EngineConn that meets the conditions are sorted and selected according to the rules, and finally the best one is locked.
 
-If the user does not have a reusable EngineConn, a process to request a new EngineConn will be triggered at this time. Regarding the process, please refer to: [How to add an EngineConn](add_an_engine_conn.md).
+If the user does not have a reusable EngineConn, a process to request a new EngineConn will be triggered at this time. Regarding the process, please refer to: [How to add an EngineConn](engine/add_an_engine_conn.md).
 
 #### 2.2 Orchestrate a computing task
 
diff --git a/docs/architecture/computation_governance_services/linkis-cli.md b/docs/architecture/computation_governance_services/linkis-cli.md
index 8951033d50..66005d8bd2 100644
--- a/docs/architecture/computation_governance_services/linkis-cli.md
+++ b/docs/architecture/computation_governance_services/linkis-cli.md
@@ -1,6 +1,6 @@
 ---
 title: Linkis-Client Architecture Design
-sidebar_position: 4
+sidebar_position: 3
 ---
 
 Provide users with a lightweight client that submits tasks to Linkis for execution.
diff --git a/versioned_docs/version-1.1.2/architecture/proxy_user.md b/docs/architecture/computation_governance_services/proxy_user.md
similarity index 98%
rename from versioned_docs/version-1.1.2/architecture/proxy_user.md
rename to docs/architecture/computation_governance_services/proxy_user.md
index 9f290e99aa..c6336ec547 100644
--- a/versioned_docs/version-1.1.2/architecture/proxy_user.md
+++ b/docs/architecture/computation_governance_services/proxy_user.md
@@ -1,6 +1,6 @@
 ---
 title: Proxy User Mode
-sidebar_position: 2
+sidebar_position: 4
 ---
 
 ## 1 Background
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/api/linkis_task_operator.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/api/linkis_task_operator.md
index 02b749fd45..26c930b138 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/api/linkis_task_operator.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/api/linkis_task_operator.md
@@ -22,43 +22,44 @@
 
 ### 1. 提交执行
 
-- 接口 `/api/rest_j/v1/entrance/execute`
-
-- 提交方式 `POST`
-
-```json
-{
-    "executeApplicationName": "hive", //引擎类型
-    "requestApplicationName": "dss", //客户端服务类型
-    "executionCode": "show tables",
-    "params": {"variable": {}, "configuration": {}},
-    "runType": "hql", //运行的脚本类型
-   "source": {"scriptPath":"file:///tmp/hadoop/1.hql"}
-}
-```
-
 - 接口 `/api/rest_j/v1/entrance/submit`
 
 - 提交方式 `POST`
 
 ```json
 {
-    "executionContent": {"code": "show tables", "runType":  "sql"},
-    "params": {"variable": {}, "configuration": {}},
-    "source":  {"scriptPath": "file:///mnt/bdp/hadoop/1.hql"},
-    "labels": {
-        "engineType": "spark-2.4.3",
-        "userCreator": "hadoop-IDE"
+  "executionContent": {
+    "code": "show tables",
+    "runType": "sql"
+  },
+  "params": {
+    "variable": {// task variable 
+      "testvar": "hello"
+    },
+    "configuration": {
+      "runtime": {// task runtime params 
+        "jdbc.url": "XX"
+      },
+      "startup": { // ec start up params 
+        "spark.executor.cores": "4"
+      }
     }
+  },
+  "source": { //task source information
+    "scriptPath": "file:///tmp/hadoop/test.sql"
+  },
+  "labels": {
+    "engineType": "spark-2.4.3",
+    "userCreator": "hadoop-IDE"
+  }
 }
 ```
 
-
 - 返回示例
 
 ```json
 {
- "method": "/api/rest_j/v1/entrance/execute",
+ "method": "/api/rest_j/v1/entrance/submit",
  "status": 0,
  "message": "请求执行成功",
  "data": {
@@ -158,6 +159,8 @@
 
 - 提交方式 `GET`
 
+- 返回示例
+
 ```json
 {
  "method": "/api/rest_j/v1/entrance/{execID}/kill",
@@ -169,3 +172,229 @@
 }
 ```
 
+### 6. 获取任务信息
+
+- 接口 `/api/rest_j/v1/jobhistory/{id}/get`
+
+- 提交方式 `GET`
+
+- 请求参数
+
+| 参数名称 | 参数说明 | 请求类型    | 是否必须 | 数据类型 | schema |
+| -------- | -------- | ----- | -------- | -------- | ------ |
+|id|id|path|true|string||
+
+
+- 返回示例
+
+```json
+{
+  "method": null,
+  "status": 0,
+  "message": "OK",
+  "data": {
+    "task": {
+      "taskID": 1,
+      "instance": "xxx",
+      "execId": "exec-id-xxx",
+      "umUser": "test",
+      "engineInstance": "xxx",
+      "progress": "10%",
+      "logPath": "hdfs://xxx/xxx/xxx",
+      "resultLocation": "hdfs://xxx/xxx/xxx",
+      "status": "FAILED",
+      "createdTime": "2019-01-01 00:00:00",
+      "updatedTime": "2019-01-01 01:00:00",
+      "engineType": "spark",
+      "errorCode": 100,
+      "errDesc": "Task Failed with error code 100",
+      "executeApplicationName": "hello world",
+      "requestApplicationName": "hello world",
+      "runType": "xxx",
+      "paramJson": "{\"xxx\":\"xxx\"}",
+      "costTime": 10000,
+      "strongerExecId": "execId-xxx",
+      "sourceJson": "{\"xxx\":\"xxx\"}"
+    }
+  }
+}
+```
+
+### 7. 获取结果集信息
+
+支持多结果集信息
+
+- 接口 `/api/rest_j/v1/filesystem/getDirFileTrees`
+
+- 提交方式 `GET`
+
+- 请求参数
+
+| 参数名称 | 参数说明 | 请求类型    | 是否必须 | 数据类型 | schema |
+| -------- | -------- | ----- | -------- | -------- | ------ |
+|path|结果集目录路径|query|true|string||
+
+
+- 返回示例
+
+```json
+{
+  "method": "/api/filesystem/getDirFileTrees",
+  "status": 0,
+  "message": "OK",
+  "data": {
+    "dirFileTrees": {
+      "name": "1946923",
+      "path": "hdfs:///tmp/hadoop/linkis/2022-07-06/211446/IDE/1946923",
+      "properties": null,
+      "children": [
+        {
+          "name": "_0.dolphin",
+          "path": "hdfs:///tmp/hadoop/linkis/2022-07-06/211446/IDE/1946923/_0.dolphin",//result set 1
+          "properties": {
+            "size": "7900",
+            "modifytime": "1657113288360"
+          },
+          "children": null,
+          "isLeaf": true,
+          "parentPath": "hdfs:///tmp/hadoop/linkis/2022-07-06/211446/IDE/1946923"
+        },
+        {
+          "name": "_1.dolphin",
+          "path": "hdfs:///tmp/hadoop/linkis/2022-07-06/211446/IDE/1946923/_1.dolphin",//result set 2
+          "properties": {
+            "size": "7900",
+            "modifytime": "1657113288614"
+          },
+          "children": null,
+          "isLeaf": true,
+          "parentPath": "hdfs:///tmp/hadoop/linkis/2022-07-06/211446/IDE/1946923"
+        }
+      ],
+      "isLeaf": false,
+      "parentPath": null
+    }
+  }
+}
+```
+
+### 8. 获取结果集内容
+
+- 接口 `/api/rest_j/v1/filesystem/openFile`
+
+- 提交方式 `GET`
+
+- 请求参数
+
+| 参数名称 | 参数说明 | 请求类型    | 是否必须 | 数据类型 | schema |
+| -------- | -------- | ----- | -------- | -------- | ------ |
+|path|结果集文件|query|true|string||
+|charset|字符集|query|false|string||
+|page|页码|query|false|ref||
+|pageSize|页面大小|query|false|ref||
+
+
+- 返回示例
+
+```json
+{
+  "method": "/api/filesystem/openFile",
+  "status": 0,
+  "message": "OK",
+  "data": {
+    "metadata": [
+      {
+        "columnName": "count(1)",
+        "comment": "NULL",
+        "dataType": "long"
+      }
+    ],
+    "totalPage": 0,
+    "totalLine": 1,
+    "page": 1,
+    "type": "2",
+    "fileContent": [
+      [
+        "28"
+      ]
+    ]
+  }
+}
+```
+
+### 9. 获取结果集按照文件流的方式
+
+获取结果集为CSV和Excel按照流的方式
+
+- 接口 `/api/rest_j/v1/filesystem/resultsetToExcel`
+
+- 提交方式 `GET`
+
+- 请求参数
+
+| 参数名称 | 参数说明 | 请求类型    | 是否必须 | 数据类型 | schema |
+| -------- | -------- | ----- | -------- | -------- | ------ |
+|autoFormat|是否自动转换格式|query|false|boolean||
+|charset|字符集|query|false|string||
+|csvSeperator|csv分隔栏|query|false|string||
+|limit|获取行数|query|false|ref||
+|nullValue|空值转换|query|false|string||
+|outputFileName|输出文件名称|query|false|string||
+|outputFileType|输出文件类型 csv 或者Excel|query|false|string||
+|path|结果集路径|query|false|string||
+
+- 返回示例
+
+```json
+文件流
+```
+
+### 10. 兼容0.X的任务执行接口
+
+- 接口 `/api/rest_j/v1/entrance/execute`
+
+- 提交方式 `POST`
+
+```json
+{
+    "executeApplicationName": "hive", //Engine type
+    "requestApplicationName": "dss", //Client service type
+    "executionCode": "show tables",
+    "params": {
+      "variable": {// task variable 
+        "testvar": "hello"
+      },
+      "configuration": {
+        "runtime": {// task runtime params 
+          "jdbc.url": "XX"
+        },
+        "startup": { // ec start up params 
+          "spark.executor.cores": "4"
+        }
+      }
+    },
+    "source": { //task source information
+      "scriptPath": "file:///tmp/hadoop/test.sql"
+    },
+    "labels": {
+      "engineType": "spark-2.4.3",
+      "userCreator": "hadoop-IDE"
+    },
+    "runType": "hql", //The type of script to run
+    "source": {"scriptPath":"file:///tmp/hadoop/1.hql"}
+}
+```
+
+- Sample Response
+
+```json
+{
+ "method": "/api/rest_j/v1/entrance/execute",
+ "status": 0,
+ "message": "Request executed successfully",
+ "data": {
+   "execID": "030418IDEhivebdpdwc010004:10087IDE_hadoop_21",
+   "taskID": "123"
+ }
+}
+```
\ No newline at end of file
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/architecture/commons/message_scheduler.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/architecture/commons/message_scheduler.md
deleted file mode 100644
index f4d39eeace..0000000000
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/architecture/commons/message_scheduler.md
+++ /dev/null
@@ -1,18 +0,0 @@
----
-title: Message Scheduler 模块
-sidebar_position: 1
----
-## 1. 概述
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Linkis-RPC可以实现微服务之间的通信,为了简化RPC的使用方式,Linkis提供Message-Scheduler模块,通过如@Receiver注解的方式的解析识别与调用,同时,也统一了RPC和Restful接口的使用方式,具有更好的可拓展性。
-## 2. 架构说明
-## 2.1. 架构设计图
-![模块设计图](/Images-zh/Architecture/Commons/linkis-message-scheduler.png)
-## 2.2. 模块说明
-* ServiceParser:解析Service模块的(Object)对象,同时把@Receiver注解的方法封装到ServiceMethod对象中。
-* ServiceRegistry:注册对应的Service模块,将Service解析后的ServiceMethod存储在Map容器中。
-* ImplicitParser:将Implicit模块的对象进行解析,使用@Implicit标注的方法会被封装到ImplicitMethod对象中。
-* ImplicitRegistry:注册对应的Implicit模块,将解析后的ImplicitMethod存储在一个Map容器中。
-* Converter:启动扫描RequestMethod的非接口非抽象的子类,并存储在Map中,解析Restful并匹配相关的RequestProtocol。
-* Publisher:实现发布调度功能,在Registry中找出匹配RequestProtocol的ServiceMethod,并封装为Job进行提交调度。
-* Scheduler:调度实现,使用Linkis-Sceduler执行Job,返回MessageJob对象。
-* TxManager:完成事务管理,对Job执行进行事务管理,在Job执行结束后判断是否进行Commit或者Rollback。
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/architecture/commons/variable.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/architecture/commons/variable.md
new file mode 100644
index 0000000000..c1ca87e546
--- /dev/null
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/architecture/commons/variable.md
@@ -0,0 +1,111 @@
+---
+title: 自定义变量设计
+sidebar_position: 1
+---
+
+## 1. 总述
+### 需求背景
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;用户希望在写代码时,能够定义一些公共变量然后执行的时候进行替换,比如用户每天都会批量跑同一段sql,需要指定上一天的分区时间,如果基于sql去写会比较复杂如果系统提供一个run_date的变量将会非常方便使用。
+### 目标
+1. 支持任务代码的变量替换
+2. 支持自定义变量,支持用户在脚本和提交给Linkis的任务参数定义自定义变量,支持简单的+,-等计算
+3. 预设置系统变量:run_date,run_month,run_today等系统变量
+
+## 2. 总体设计
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;在Linkis任务执行过程中自定义变量在Entrance进行,主要通过Entrance在任务提交执行前的拦截器进行拦截替换实现,通过正则表达式获取到任务代码中使用到的变量和定义的变量,并通过任务传入的自定义变量初始值完成代码的替换,变成最终可以执行的代码。
+
+### 2.1 技术架构
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;自定义变量整体架构如下,用于任务提交过来后,会经过变量替换拦截器。首先会解析出所有代码中用到的变量和表达式,然后通过和系统以及用户自定义的变量初始值进行替换,最终将解析后的代码提交给EngineConn执行。所以到底层引擎已经是替换好的代码。
+
+![var_arc](/Images/Architecture/Commons/var_arc.png)
+
+## 3. 功能介绍
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Linkis支持的变量类型分为自定义变量和系统内置变量,内部变量是Linkis预先定义好的,可以直接进行使用。然后不同的变量类型支持不同的计算格式:String支持+、整数小数支持+-*/,日期支持+-。
+
+### 3.1 内置变量
+目前已支持的内置变量如下:
+
+| 变量名 | 变量类型 | 变量含义 | 变量值举例 |
+| ------ | -------- | -------- | ---------- |
+| run\_date | String | 数据统计时间(支持用户自己设定,默认设置为当前时间的前一天),如今天执行昨天的数据,则为昨天的时间,格式为 yyyyMMdd | 20180129 |
+| run\_date\_std | String | 数据统计时间(标准日期格式),如今天执行昨天数据,则为昨天的时间,格式为 yyyy-MM-dd | 2018-01-29 |
+| run_today | String | run_date (数据统计时间) 的后一天,格式为 yyyyMMdd | 20211210 |
+| run_today_std | String | run_date (数据统计时间) 的后一天(标准格式),格式为 yyyy-MM-dd | 2021-12-10 |
+| run_mon | String | 数据统计时间所在月,格式为 yyyyMM | 202112 |
+| run_mon_std | String | 数据统计时间所在月(标准格式),格式为 yyyy-MM | 2021-12 |
+| run\_month\_begin | String | 数据统计时间所在月的第一天,格式为 yyyyMMdd | 20180101 |
+| run\_month\_begin\_std | String | 数据统计时间所在月的第一天(标准日期格式),格式为 yyyy-MM-dd | 2018-01-01 |
+| run_month_now_begin | String | run_today 所在月的第一天,格式为 yyyyMMdd | 20211201 |
+| run_month_now_begin_std | String | run_today 所在月的第一天(标准格式),格式为 yyyy-MM-dd | 2021-12-01 |
+| run\_month\_end | String | 数据统计时间所在月的最后一天,格式为 yyyyMMdd | 20180131 |
+| run\_month\_end\_std | String | 数据统计时间所在月的最后一天(标准日期格式),格式为 yyyy-MM-dd | 2018-01-31 |
+| run_month_now_end | String | run_today 所在月的最后一天,格式为 yyyyMMdd | 20211231 |
+| run_month_now_end_std | String | run_today 所在月的最后一天(标准日期格式),格式为 yyyy-MM-dd | 2021-12-31 |
+| run_quarter_begin | String | 数据统计时间所在季度的第一天,格式为 yyyyMMdd | 20210401 |
+| run_quarter_end | String | 数据统计时间所在季度的最后一天,格式为 yyyyMMdd | 20210630 |
+| run_half_year_begin | String | 数据统计时间所在半年的第一天,格式为 yyyyMMdd | 20210101 |
+| run_half_year_end | String | 数据统计时间所在半年的最后一天,格式为 yyyyMMdd | 20210630 |
+| run_year_begin | String | 数据统计时间所在年的第一天,格式为 yyyyMMdd | 20210101 |
+| run_year_end | String | 数据统计时间所在年的最后一天,格式为 yyyyMMdd | 20211231 |
+| run_quarter_begin_std | String | 数据统计时间所在季度的第一天(标准格式),格式为 yyyy-MM-dd | 2021-10-01 |
+| run_quarter_end_std | String | 数据统计时间所在季度的最后一天(标准格式),格式为 yyyy-MM-dd | 2021-12-31 |
+| run_half_year_begin_std | String | 数据统计时间所在半年的第一天(标准格式),格式为 yyyy-MM-dd | 2021-07-01 |
+| run_half_year_end_std | String | 数据统计时间所在半年的最后一天(标准格式),格式为 yyyy-MM-dd | 2021-12-31 |
+| run_year_begin_std | String | 数据统计时间所在年的第一天(标准格式),格式为 yyyy-MM-dd | 2021-01-01 |
+| run_year_end_std | String | 数据统计时间所在年的最后一天(标准格式),格式为 yyyy-MM-dd | 2021-12-31 |
+
+具体细节:
+
+1、run_date为核心自带日期变量,支持用户自定义日期,如果不指定默认为当前系统时间的前一天。
+2、其他衍生内置日期变量定义:其他日期内置变量都是相对run_date计算出来的,一旦run_date变化,其他变量值也会自动跟着变化,其他日期变量不支持设置初始值,只能通过修改run_date进行修改。
+3、内置变量支持更加丰富的使用场景:${run_date-1}为run_data的前一天;${run_month_begin-1}为run_month_begin的上个月的第一天,这里的-1表示减一个月。
+
+### 3.2 自定义变量
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;什么是自定义变量?先定义,后使用的用户变量。用户自定义变量暂时支持字符串,整数,浮点数变量的定义,其中字符串支持+法,整数和浮点数支持+-*/方法。用户自定义变量与SparkSQL和HQL本身支持的set变量语法不冲突,但是不允许同名。如何定义和使用自定义变量?如下:
+```
+## 代码中定义,在任务代码前进行指定
+sql类型定义方式:
+--@set f=20.1
+python/Shell类型定义如下:
+#@set f=20.1
+注意:只支持一行定义一个变量
+```
+使用都是直接在代码中使用通过 ```{varName表达式},如${f*2}```
+
+### 3.3 变量作用域
+在linkis中自定义变量也有作用域,优先级为脚本中定义的变量大于在任务参数中定义的Variable大于内置的run_date变量。任务参数中定义如下:
+```
+## restful
+{
+    "executionContent": {"code": "select \"${f-1}\";", "runType":  "sql"},
+    "params": {
+                    "variable": {"f": "20.1"},
+                    "configuration": {
+                            "runtime": {
+                                "linkis.openlookeng.url":"http://127.0.0.1:9090"
+                                }
+                            }
+                    },
+    "source":  {"scriptPath": "file:///mnt/bdp/hadoop/1.sql"},
+    "labels": {
+        "engineType": "spark-2.4.3",
+        "userCreator": "hadoop-IDE"
+    }
+}
+## java SDK
+JobSubmitAction.builder
+  .addExecuteCode(code)
+  .setStartupParams(startupMap)
+  .setUser(user) //submit user
+  .addExecuteUser(user) //execute user
+  .setLabels(labels)
+  .setVariableMap(varMap) //setVar
+  .build
+```
+
+
+
+
+
+
+
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.1.2/architecture/add_an_engine_conn.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/architecture/computation_governance_services/engine/add_an_engine_conn.md
similarity index 99%
rename from i18n/zh-CN/docusaurus-plugin-content-docs/version-1.1.2/architecture/add_an_engine_conn.md
rename to i18n/zh-CN/docusaurus-plugin-content-docs/current/architecture/computation_governance_services/engine/add_an_engine_conn.md
index 454e579894..5213291fb4 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.1.2/architecture/add_an_engine_conn.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/architecture/computation_governance_services/engine/add_an_engine_conn.md
@@ -1,4 +1,4 @@
-# EngineConn 新增流程
+# EngineConn 启动流程
 
 EngineConn的新增,是Linkis计算治理的计算任务准备阶段的核心流程之一。它主要包括了Client端(Entrance或用户客户端)向LinkisManager发起一个新增EngineConn的请求,LinkisManager为用户按需、按标签规则,向EngineConnManager发起一个启动EngineConn的请求,并等待EngineConn启动完成后,将可用的EngineConn返回给Client的整个流程。
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.1.2/architecture/job_submission_preparation_and_execution_process.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/architecture/computation_governance_services/job_submission_preparation_and_execution_process.md
similarity index 99%
rename from i18n/zh-CN/docusaurus-plugin-content-docs/version-1.1.2/architecture/job_submission_preparation_and_execution_process.md
rename to i18n/zh-CN/docusaurus-plugin-content-docs/current/architecture/computation_governance_services/job_submission_preparation_and_execution_process.md
index d0a8dcfc82..ca770ffcd1 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.1.2/architecture/job_submission_preparation_and_execution_process.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/architecture/computation_governance_services/job_submission_preparation_and_execution_process.md
@@ -1,5 +1,5 @@
 ---
-title: Job 提交准备执行流程
+title: Linkis任务执行流程
 sidebar_position: 1
 ---
 
@@ -74,7 +74,7 @@ POST /api/rest_j/v1/entrance/submit
 
 如何定义可复用EngineConn?指能匹配计算任务的所有标签要求的,且EngineConn本身健康状态为Healthy(负载低且实际EngineConn状态为Idle)的,然后再按规则对所有满足条件的EngineConn进行排序选择,最终锁定一个最佳的EngineConn。
 
-如果该用户不存在可复用的EngineConn,则此时会触发EngineConn新增流程,关于EngineConn新增流程,请参阅:[EngineConn新增流程](add_an_engine_conn.md) 。
+如果该用户不存在可复用的EngineConn,则此时会触发EngineConn新增流程,关于EngineConn新增流程,请参阅:[EngineConn新增流程](engine/add_an_engine_conn.md) 。
 
 #### 2.2 计算任务编排
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.1.2/architecture/proxy_user.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/architecture/computation_governance_services/proxy_user.md
similarity index 97%
rename from i18n/zh-CN/docusaurus-plugin-content-docs/version-1.1.2/architecture/proxy_user.md
rename to i18n/zh-CN/docusaurus-plugin-content-docs/current/architecture/computation_governance_services/proxy_user.md
index 1b76b44af3..8e865edf83 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.1.2/architecture/proxy_user.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/architecture/computation_governance_services/proxy_user.md
@@ -1,5 +1,5 @@
 ---
-title: 代理用户模式
+title: Linkis支持代理用户提交架构设计
 sidebar_position: 2
 ---
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.1.1/api/linkis_task_operator.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.1.1/api/linkis_task_operator.md
index 02b749fd45..0db59ebdbc 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.1.1/api/linkis_task_operator.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.1.1/api/linkis_task_operator.md
@@ -4,61 +4,62 @@
 
 ```json
 {
- "method": "",
- "status": 0,
- "message": "",
- "data": {}
+  "method": "",
+  "status": 0,
+  "message": "",
+  "data": {}
 }
 ```
 
 **约定**:
 
- - method:返回请求的Restful API URI,主要是 WebSocket 模式需要使用。
- - status:返回状态信息,其中:-1表示没有登录,0表示成功,1表示错误,2表示验证失败,3表示没该接口的访问权限。
- - data:返回具体的数据。
- - message:返回请求的提示信息。如果status非0时,message返回的是错误信息,其中data有可能存在stack字段,返回具体的堆栈信息。 
- 
+- method:返回请求的Restful API URI,主要是 WebSocket 模式需要使用。
+- status:返回状态信息,其中:-1表示没有登录,0表示成功,1表示错误,2表示验证失败,3表示没该接口的访问权限。
+- data:返回具体的数据。
+- message:返回请求的提示信息。如果status非0时,message返回的是错误信息,其中data有可能存在stack字段,返回具体的堆栈信息。
+
 更多关于 Linkis Restful 接口的规范,请参考:[Linkis Restful 接口规范](/community/development_specification/api)
 
 ### 1. 提交执行
 
-- 接口 `/api/rest_j/v1/entrance/execute`
-
-- 提交方式 `POST`
-
-```json
-{
-    "executeApplicationName": "hive", //引擎类型
-    "requestApplicationName": "dss", //客户端服务类型
-    "executionCode": "show tables",
-    "params": {"variable": {}, "configuration": {}},
-    "runType": "hql", //运行的脚本类型
-   "source": {"scriptPath":"file:///tmp/hadoop/1.hql"}
-}
-```
-
 - 接口 `/api/rest_j/v1/entrance/submit`
 
 - 提交方式 `POST`
 
 ```json
 {
-    "executionContent": {"code": "show tables", "runType":  "sql"},
-    "params": {"variable": {}, "configuration": {}},
-    "source":  {"scriptPath": "file:///mnt/bdp/hadoop/1.hql"},
-    "labels": {
-        "engineType": "spark-2.4.3",
-        "userCreator": "hadoop-IDE"
+  "executionContent": {
+    "code": "show tables",
+    "runType": "sql"
+  },
+  "params": {
+    "variable": {// task variable 
+      "testvar": "hello"
+    },
+    "configuration": {
+      "runtime": {// task runtime params 
+        "jdbc.url": "XX"
+      },
+      "startup": { // ec start up params 
+        "spark.executor.cores": "4"
+      }
     }
+  },
+  "source": { //task source information
+    "scriptPath": "file:///tmp/hadoop/test.sql"
+  },
+  "labels": {
+    "engineType": "spark-2.4.3",
+    "userCreator": "hadoop-IDE"
+  }
 }
 ```
 
-
 - 返回示例
 
 ```json
 {
- "method": "/api/rest_j/v1/entrance/execute",
+ "method": "/api/rest_j/v1/entrance/submit",
  "status": 0,
  "message": "请求执行成功",
  "data": {
@@ -158,6 +159,8 @@
 
 - 提交方式 `GET`
 
+- 返回示例
+
 ```json
 {
  "method": "/api/rest_j/v1/entrance/{execID}/kill",
@@ -169,3 +172,229 @@
 }
 ```
 
+### 6. 获取任务信息
+
+- 接口 `/api/rest_j/v1/jobhistory/{id}/get`
+
+- 提交方式 `GET`
+
+- 请求参数
+
+| 参数名称 | 参数说明 | 请求类型    | 是否必须 | 数据类型 | schema |
+| -------- | -------- | ----- | -------- | -------- | ------ |
+|id|id|path|true|string||
+
+
+- 返回示例
+
+```json
+{
+  "method": null,
+  "status": 0,
+  "message": "OK",
+  "data": {
+    "task": {
+      "taskID": 1,
+      "instance": "xxx",
+      "execId": "exec-id-xxx",
+      "umUser": "test",
+      "engineInstance": "xxx",
+      "progress": "10%",
+      "logPath": "hdfs://xxx/xxx/xxx",
+      "resultLocation": "hdfs://xxx/xxx/xxx",
+      "status": "FAILED",
+      "createdTime": "2019-01-01 00:00:00",
+      "updatedTime": "2019-01-01 01:00:00",
+      "engineType": "spark",
+      "errorCode": 100,
+      "errDesc": "Task Failed with error code 100",
+      "executeApplicationName": "hello world",
+      "requestApplicationName": "hello world",
+      "runType": "xxx",
+      "paramJson": "{\"xxx\":\"xxx\"}",
+      "costTime": 10000,
+      "strongerExecId": "execId-xxx",
+      "sourceJson": "{\"xxx\":\"xxx\"}"
+    }
+  }
+}
+```
+
+### 7. 获取结果集信息
+
+支持多结果集信息
+
+- 接口 `/api/rest_j/v1/filesystem/getDirFileTrees`
+
+- 提交方式 `GET`
+
+- 请求参数
+
+| 参数名称 | 参数说明 | 请求类型    | 是否必须 | 数据类型 | schema |
+| -------- | -------- | ----- | -------- | -------- | ------ |
+|path|结果集目录路径|query|true|string||
+
+
+- 返回示例
+
+```json
+{
+  "method": "/api/filesystem/getDirFileTrees",
+  "status": 0,
+  "message": "OK",
+  "data": {
+    "dirFileTrees": {
+      "name": "1946923",
+      "path": "hdfs:///tmp/hadoop/linkis/2022-07-06/211446/IDE/1946923",
+      "properties": null,
+      "children": [
+        {
+          "name": "_0.dolphin",
+          "path": "hdfs:///tmp/hadoop/linkis/2022-07-06/211446/IDE/1946923/_0.dolphin",//result set 1
+          "properties": {
+            "size": "7900",
+            "modifytime": "1657113288360"
+          },
+          "children": null,
+          "isLeaf": true,
+          "parentPath": "hdfs:///tmp/hadoop/linkis/2022-07-06/211446/IDE/1946923"
+        },
+        {
+          "name": "_1.dolphin",
+          "path": "hdfs:///tmp/hadoop/linkis/2022-07-06/211446/IDE/1946923/_1.dolphin",//result set 2
+          "properties": {
+            "size": "7900",
+            "modifytime": "1657113288614"
+          },
+          "children": null,
+          "isLeaf": true,
+          "parentPath": "hdfs:///tmp/hadoop/linkis/2022-07-06/211446/IDE/1946923"
+        }
+      ],
+      "isLeaf": false,
+      "parentPath": null
+    }
+  }
+}
+```
+
+### 8. 获取结果集内容
+
+- 接口 `/api/rest_j/v1/filesystem/openFile`
+
+- 提交方式 `GET`
+
+- 请求参数
+
+| 参数名称 | 参数说明 | 请求类型    | 是否必须 | 数据类型 | schema |
+| -------- | -------- | ----- | -------- | -------- | ------ |
+|path|结果集文件|query|true|string||
+|charset|字符集|query|false|string||
+|page|页码|query|false|ref||
+|pageSize|页面大小|query|false|ref||
+
+
+- 返回示例
+
+```json
+{
+  "method": "/api/filesystem/openFile",
+  "status": 0,
+  "message": "OK",
+  "data": {
+    "metadata": [
+      {
+        "columnName": "count(1)",
+        "comment": "NULL",
+        "dataType": "long"
+      }
+    ],
+    "totalPage": 0,
+    "totalLine": 1,
+    "page": 1,
+    "type": "2",
+    "fileContent": [
+      [
+        "28"
+      ]
+    ]
+  }
+}
+```
+
+### 9. 获取结果集按照文件流的方式
+
+获取结果集为CSV和Excel按照流的方式
+
+- 接口 `/api/rest_j/v1/filesystem/resultsetToExcel`
+
+- 提交方式 `GET`
+
+- 请求参数
+
+| 参数名称 | 参数说明 | 请求类型    | 是否必须 | 数据类型 | schema |
+| -------- | -------- | ----- | -------- | -------- | ------ |
+|autoFormat|是否自动转换格式|query|false|boolean||
+|charset|字符集|query|false|string||
+|csvSeperator|csv分隔栏|query|false|string||
+|limit|获取行数|query|false|ref||
+|nullValue|空值转换|query|false|string||
+|outputFileName|输出文件名称|query|false|string||
+|outputFileType|输出文件类型 csv 或者Excel|query|false|string||
+|path|结果集路径|query|false|string||
+
+- 返回示例
+
+```json
+文件流
+```
+
+### 10. 兼容0.X的任务执行接口
+
+- 接口 `/api/rest_j/v1/entrance/execute`
+
+- 提交方式 `POST`
+
+```json
+{
+    "executeApplicationName": "hive", //Engine type
+    "requestApplicationName": "dss", //Client service type
+    "executionCode": "show tables",
+    "params": {
+      "variable": {// task variable 
+        "testvar": "hello"
+      },
+      "configuration": {
+        "runtime": {// task runtime params 
+          "jdbc.url": "XX"
+        },
+        "startup": { // ec start up params 
+          "spark.executor.cores": "4"
+        }
+      }
+    },
+    "source": { //task source information
+      "scriptPath": "file:///tmp/hadoop/test.sql"
+    },
+    "labels": {
+      "engineType": "spark-2.4.3",
+      "userCreator": "hadoop-IDE"
+    },
+    "runType": "hql", //The type of script to run
+    "source": {"scriptPath":"file:///tmp/hadoop/1.hql"}
+}
+```
+
+- Sample Response
+
+```json
+{
+ "method": "/api/rest_j/v1/entrance/execute",
+ "status": 0,
+ "message": "Request executed successfully",
+ "data": {
+   "execID": "030418IDEhivebdpdwc010004:10087IDE_hadoop_21",
+   "taskID": "123"
+ }
+}
+```
\ No newline at end of file
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.1.2/api/linkis_task_operator.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.1.2/api/linkis_task_operator.md
index 02b749fd45..0db59ebdbc 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.1.2/api/linkis_task_operator.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.1.2/api/linkis_task_operator.md
@@ -4,61 +4,62 @@
 
 ```json
 {
- "method": "",
- "status": 0,
- "message": "",
- "data": {}
+  "method": "",
+  "status": 0,
+  "message": "",
+  "data": {}
 }
 ```
 
 **约定**:
 
- - method:返回请求的Restful API URI,主要是 WebSocket 模式需要使用。
- - status:返回状态信息,其中:-1表示没有登录,0表示成功,1表示错误,2表示验证失败,3表示没该接口的访问权限。
- - data:返回具体的数据。
- - message:返回请求的提示信息。如果status非0时,message返回的是错误信息,其中data有可能存在stack字段,返回具体的堆栈信息。 
- 
+- method:返回请求的Restful API URI,主要是 WebSocket 模式需要使用。
+- status:返回状态信息,其中:-1表示没有登录,0表示成功,1表示错误,2表示验证失败,3表示没该接口的访问权限。
+- data:返回具体的数据。
+- message:返回请求的提示信息。如果status非0时,message返回的是错误信息,其中data有可能存在stack字段,返回具体的堆栈信息。
+
 更多关于 Linkis Restful 接口的规范,请参考:[Linkis Restful 接口规范](/community/development_specification/api)
 
 ### 1. 提交执行
 
-- 接口 `/api/rest_j/v1/entrance/execute`
-
-- 提交方式 `POST`
-
-```json
-{
-    "executeApplicationName": "hive", //引擎类型
-    "requestApplicationName": "dss", //客户端服务类型
-    "executionCode": "show tables",
-    "params": {"variable": {}, "configuration": {}},
-    "runType": "hql", //运行的脚本类型
-   "source": {"scriptPath":"file:///tmp/hadoop/1.hql"}
-}
-```
-
 - 接口 `/api/rest_j/v1/entrance/submit`
 
 - 提交方式 `POST`
 
 ```json
 {
-    "executionContent": {"code": "show tables", "runType":  "sql"},
-    "params": {"variable": {}, "configuration": {}},
-    "source":  {"scriptPath": "file:///mnt/bdp/hadoop/1.hql"},
-    "labels": {
-        "engineType": "spark-2.4.3",
-        "userCreator": "hadoop-IDE"
+  "executionContent": {
+    "code": "show tables",
+    "runType": "sql"
+  },
+  "params": {
+    "variable": {// task variable 
+      "testvar": "hello"
+    },
+    "configuration": {
+      "runtime": {// task runtime params 
+        "jdbc.url": "XX"
+      },
+      "startup": { // ec start up params 
+        "spark.executor.cores": "4"
+      }
     }
+  },
+  "source": { //task source information
+    "scriptPath": "file:///tmp/hadoop/test.sql"
+  },
+  "labels": {
+    "engineType": "spark-2.4.3",
+    "userCreator": "hadoop-IDE"
+  }
 }
 ```
 
-
 - 返回示例
 
 ```json
 {
- "method": "/api/rest_j/v1/entrance/execute",
+ "method": "/api/rest_j/v1/entrance/submit",
  "status": 0,
  "message": "请求执行成功",
  "data": {
@@ -158,6 +159,8 @@
 
 - 提交方式 `GET`
 
+- 返回示例
+
 ```json
 {
  "method": "/api/rest_j/v1/entrance/{execID}/kill",
@@ -169,3 +172,229 @@
 }
 ```
 
+### 6. 获取任务信息
+
+- 接口 `/api/rest_j/v1/jobhistory/{id}/get`
+
+- 提交方式 `GET`
+
+- 请求参数
+
+| 参数名称 | 参数说明 | 请求类型    | 是否必须 | 数据类型 | schema |
+| -------- | -------- | ----- | -------- | -------- | ------ |
+|id|id|path|true|string||
+
+
+- 返回示例
+
+```json
+{
+  "method": null,
+  "status": 0,
+  "message": "OK",
+  "data": {
+    "task": {
+      "taskID": 1,
+      "instance": "xxx",
+      "execId": "exec-id-xxx",
+      "umUser": "test",
+      "engineInstance": "xxx",
+      "progress": "10%",
+      "logPath": "hdfs://xxx/xxx/xxx",
+      "resultLocation": "hdfs://xxx/xxx/xxx",
+      "status": "FAILED",
+      "createdTime": "2019-01-01 00:00:00",
+      "updatedTime": "2019-01-01 01:00:00",
+      "engineType": "spark",
+      "errorCode": 100,
+      "errDesc": "Task Failed with error code 100",
+      "executeApplicationName": "hello world",
+      "requestApplicationName": "hello world",
+      "runType": "xxx",
+      "paramJson": "{\"xxx\":\"xxx\"}",
+      "costTime": 10000,
+      "strongerExecId": "execId-xxx",
+      "sourceJson": "{\"xxx\":\"xxx\"}"
+    }
+  }
+}
+```
+
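+下面给出一个轮询该接口获取任务状态的 Python 示例草稿:其中的网关地址、登录态以及终态状态名均为示例假设,请按实际环境调整。
+
+```python
+import time
+import requests
+
+BASE_URL = "http://127.0.0.1:9001"  # 假设的 Linkis 网关地址
+session = requests.Session()        # 假设已携带有效的登录 Cookie
+
+def wait_for_task(task_id, interval=5):
+    """轮询 /api/rest_j/v1/jobhistory/{id}/get,直到任务进入终态"""
+    while True:
+        resp = session.get(f"{BASE_URL}/api/rest_j/v1/jobhistory/{task_id}/get")
+        resp.raise_for_status()
+        task = resp.json()["data"]["task"]
+        # 终态名称仅为示例,不同版本可能有所不同
+        if task["status"].upper() in ("SUCCEED", "FAILED", "CANCELLED", "TIMEOUT"):
+            return task
+        time.sleep(interval)
+```
+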
+### 7. 获取结果集信息
+
+支持多结果集信息
+
+- 接口 `/api/rest_j/v1/filesystem/getDirFileTrees`
+
+- 提交方式 `GET`
+
+- 请求参数
+
+| 参数名称 | 参数说明 | 请求类型    | 是否必须 | 数据类型 | schema |
+| -------- | -------- | ----- | -------- | -------- | ------ |
+|path|结果集目录路径|query|true|string||
+
+
+- 返回示例
+
+```json
+{
+  "method": "/api/filesystem/getDirFileTrees",
+  "status": 0,
+  "message": "OK",
+  "data": {
+    "dirFileTrees": {
+      "name": "1946923",
+      "path": "hdfs:///tmp/hadoop/linkis/2022-07-06/211446/IDE/1946923",
+      "properties": null,
+      "children": [
+        {
+          "name": "_0.dolphin",
+          "path": "hdfs:///tmp/hadoop/linkis/2022-07-06/211446/IDE/1946923/_0.dolphin",//result set 1
+          "properties": {
+            "size": "7900",
+            "modifytime": "1657113288360"
+          },
+          "children": null,
+          "isLeaf": true,
+          "parentPath": "hdfs:///tmp/hadoop/linkis/2022-07-06/211446/IDE/1946923"
+        },
+        {
+          "name": "_1.dolphin",
+          "path": "hdfs:///tmp/hadoop/linkis/2022-07-06/211446/IDE/1946923/_1.dolphin",//result set 2
+          "properties": {
+            "size": "7900",
+            "modifytime": "1657113288614"
+          },
+          "children": null,
+          "isLeaf": true,
+          "parentPath": "hdfs:///tmp/hadoop/linkis/2022-07-06/211446/IDE/1946923"
+        }
+      ],
+      "isLeaf": false,
+      "parentPath": null
+    }
+  }
+}
+```
+
+### 8. 获取结果集内容
+
+- 接口 `/api/rest_j/v1/filesystem/openFile`
+
+- 提交方式 `GET`
+
+- 请求参数
+
+| 参数名称 | 参数说明 | 请求类型    | 是否必须 | 数据类型 | schema |
+| -------- | -------- | ----- | -------- | -------- | ------ |
+|path|结果集文件|query|true|string||
+|charset|字符集|query|false|string||
+|page|页码|query|false|ref||
+|pageSize|页面大小|query|false|ref||
+
+
+- 返回示例
+
+```json
+{
+  "method": "/api/filesystem/openFile",
+  "status": 0,
+  "message": "OK",
+  "data": {
+    "metadata": [
+      {
+        "columnName": "count(1)",
+        "comment": "NULL",
+        "dataType": "long"
+      }
+    ],
+    "totalPage": 0,
+    "totalLine": 1,
+    "page": 1,
+    "type": "2",
+    "fileContent": [
+      [
+        "28"
+      ]
+    ]
+  }
+}
+```
+
+### 9. 获取结果集按照文件流的方式
+
+获取结果集为CSV和Excel按照流的方式
+
+- 接口 `/api/rest_j/v1/filesystem/resultsetToExcel`
+
+- 提交方式 `GET`
+
+- 请求参数
+
+| 参数名称 | 参数说明 | 请求类型    | 是否必须 | 数据类型 | schema |
+| -------- | -------- | ----- | -------- | -------- | ------ |
+|autoFormat|是否自动转换格式|query|false|boolean||
+|charset|字符集|query|false|string||
+|csvSeperator|csv分隔栏|query|false|string||
+|limit|获取行数|query|false|ref||
+|nullValue|空值转换|query|false|string||
+|outputFileName|输出文件名称|query|false|string||
+|outputFileType|输出文件类型 csv 或者Excel|query|false|string||
+|path|结果集路径|query|false|string||
+
+- 返回示例
+
+```json
+文件流
+```
+
+### 10. 兼容0.X的任务执行接口
+
+- 接口 `/api/rest_j/v1/entrance/execute`
+
+- 提交方式 `POST`
+
+```json
+{
+    "executeApplicationName": "hive", //Engine type
+    "requestApplicationName": "dss", //Client service type
+    "executionCode": "show tables",
+    "params": {
+      "variable": {// task variable 
+        "testvar": "hello"
+      },
+      "configuration": {
+        "runtime": {// task runtime params 
+          "jdbc.url": "XX"
+        },
+        "startup": { // ec start up params 
+          "spark.executor.cores": "4"
+        }
+      }
+    },
+    "source": { //task source information
+      "scriptPath": "file:///tmp/hadoop/test.sql"
+    },
+    "labels": {
+      "engineType": "spark-2.4.3",
+      "userCreator": "hadoop-IDE"
+    },
+    "runType": "hql", //The type of script to run
+    "source": {"scriptPath":"file:///tmp/hadoop/1.hql"}
+}
+```
+
+- Sample Response
+
+```json
+{
+ "method": "/api/rest_j/v1/entrance/execute",
+ "status": 0,
+ "message": "Request executed successfully",
+ "data": {
+   "execID": "030418IDEhivebdpdwc010004:10087IDE_hadoop_21",
+   "taskID": "123"
+ }
+}
+```
\ No newline at end of file
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/architecture/add_an_engine_conn.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.1.2/architecture/computation_governance_services/engine/add_an_engine_conn.md
similarity index 95%
rename from i18n/zh-CN/docusaurus-plugin-content-docs/current/architecture/add_an_engine_conn.md
rename to i18n/zh-CN/docusaurus-plugin-content-docs/version-1.1.2/architecture/computation_governance_services/engine/add_an_engine_conn.md
index 454e579894..44260f11d6 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/architecture/add_an_engine_conn.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.1.2/architecture/computation_governance_services/engine/add_an_engine_conn.md
@@ -1,4 +1,4 @@
-# EngineConn 新增流程
+# EngineConn 启动流程
 
 EngineConn的新增,是Linkis计算治理的计算任务准备阶段的核心流程之一。它主要包括了Client端(Entrance或用户客户端)向LinkisManager发起一个新增EngineConn的请求,LinkisManager为用户按需、按标签规则,向EngineConnManager发起一个启动EngineConn的请求,并等待EngineConn启动完成后,将可用的EngineConn返回给Client的整个流程。
 
@@ -11,11 +11,11 @@ EngineConn的新增,是Linkis计算治理的计算任务准备阶段的核心
 **名词解释**:
 
 - LinkisManager:是Linkis计算治理能力的管理中枢,主要的职责为:
-  1. 基于多级组合标签,为用户提供经过复杂路由、资源管控和负载均衡后的可用EngineConn;
-  
-  2. 提供EC和ECM的全生命周期管理能力;
-  
-  3. 为用户提供基于多级组合标签的多Yarn集群资源管理功能。主要分为 AppManager(应用管理器)、ResourceManager(资源管理器)、LabelManager(标签管理器)三大模块,能够支持多活部署,具备高可用、易扩展的特性。
+    1. 基于多级组合标签,为用户提供经过复杂路由、资源管控和负载均衡后的可用EngineConn;
+
+    2. 提供EC和ECM的全生命周期管理能力;
+
+    3. 为用户提供基于多级组合标签的多Yarn集群资源管理功能。主要分为 AppManager(应用管理器)、ResourceManager(资源管理器)、LabelManager(标签管理器)三大模块,能够支持多活部署,具备高可用、易扩展的特性。
 
 &nbsp;&nbsp;&nbsp;&nbsp;AM模块接收到Client的新增EngineConn请求后,首先会对请求做参数校验,判断请求参数的合法性;其次是通过复杂规则选中一台最合适的EngineConnManager(ECM),以用于后面的EngineConn启动;接下来会向RM申请启动该EngineConn需要的资源;最后是向ECM请求创建EngineConn。
 
@@ -34,7 +34,7 @@ EngineConn的新增,是Linkis计算治理的计算任务准备阶段的核心
 1. 在获取到分配的ECM后,AM接着会通过调用EngineConnPluginServer服务请求本次客户端的引擎创建请求会使用多少的资源,这里会通过封装资源请求,主要包含Label、Client传递过来的EngineConn的启动参数、以及从Configuration模块获取到用户配置参数,通过RPC调用ECP服务去获取本次的资源信息。
 
 2. EngineConnPluginServer服务在接收到资源请求后,会先通过传递过来的标签找到对应的引擎标签,通过引擎标签选择对应引擎的EngineConnPlugin。然后通过EngineConnPlugin的资源生成器,对客户端传入的引擎启动参数进行计算,算出本次申请新EngineConn所需的资源,然后返回给LinkisManager。
-   
+
    **名词解释:**
 - EgineConnPlugin:是Linkis对接一个新的计算存储引擎必须要实现的接口,该接口主要包含了这种EngineConn在启动过程中必须提供的几个接口能力,包括EngineConn资源生成器、EngineConn启动命令生成器、EngineConn引擎连接器。具体的实现可以参考Spark引擎的实现类:[SparkEngineConnPlugin](https://github.com/apache/incubator-linkis/blob/master/linkis-engineconn-plugins/engineconn-plugins/spark/src/main/scala/com/webank/wedatasphere/linkis/engineplugin/spark/SparkEngineConnPlugin.scala)。
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/architecture/proxy_user.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.1.2/architecture/computation_governance_services/engine/proxy_user.md
similarity index 97%
rename from i18n/zh-CN/docusaurus-plugin-content-docs/current/architecture/proxy_user.md
rename to i18n/zh-CN/docusaurus-plugin-content-docs/version-1.1.2/architecture/computation_governance_services/engine/proxy_user.md
index 1b76b44af3..8e865edf83 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/architecture/proxy_user.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.1.2/architecture/computation_governance_services/engine/proxy_user.md
@@ -1,5 +1,5 @@
 ---
-title: 代理用户模式
+title: Linkis支持代理用户提交架构设计
 sidebar_position: 2
 ---
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/architecture/job_submission_preparation_and_execution_process.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.1.2/architecture/computation_governance_services/job_submission_preparation_and_execution_process.md
similarity index 98%
rename from i18n/zh-CN/docusaurus-plugin-content-docs/current/architecture/job_submission_preparation_and_execution_process.md
rename to i18n/zh-CN/docusaurus-plugin-content-docs/version-1.1.2/architecture/computation_governance_services/job_submission_preparation_and_execution_process.md
index d0a8dcfc82..833acc99f4 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/architecture/job_submission_preparation_and_execution_process.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.1.2/architecture/computation_governance_services/job_submission_preparation_and_execution_process.md
@@ -1,5 +1,5 @@
 ---
-title: Job 提交准备执行流程
+title: Linkis任务执行流程
 sidebar_position: 1
 ---
 
@@ -16,13 +16,13 @@ sidebar_position: 1
 - Orchestrator作为准备阶段的入口,主要提供了Job的解析、编排和执行能力。。
 
 - Linkis Manager:是计算治理能力的管理中枢,主要的职责为:
-  
+
   1. ResourceManager:不仅具备对Yarn和Linkis EngineConnManager的资源管理能力,还将提供基于标签的多级资源分配和回收能力,让ResourceManager具备跨集群、跨计算资源类型的全资源管理能力;
-  
+
   2. AppManager:统筹管理所有的EngineConnManager和EngineConn,包括EngineConn的申请、复用、创建、切换、销毁等生命周期全交予AppManager进行管理;
-  
+
   3. LabelManager:将基于多级组合标签,为跨IDC、跨集群的EngineConn和EngineConnManager路由和管控能力提供标签支持;
-  
+
   4. EngineConnPluginServer:对外提供启动一个EngineConn的所需资源生成能力和EngineConn的启动命令生成能力。
 
 - EngineConnManager:是EngineConn的管理器,提供引擎的生命周期管理,同时向RM汇报负载信息和自身的健康状况。
@@ -38,7 +38,7 @@ sidebar_position: 1
 ![提交阶段流程图](/Images-zh/Architecture/Job提交准备执行流程/提交阶段流程图.png)
 
 1. 首先,Client(如前端或客户端)发起Job请求,Job请求信息精简如下
-(关于Linkis的具体使用方式,请参考 [如何使用Linkis](user_guide/how_to_use.md)):
+   (关于Linkis的具体使用方式,请参考 [如何使用Linkis](user_guide/how_to_use.md)):
 
 ```
 POST /api/rest_j/v1/entrance/submit
@@ -74,7 +74,7 @@ POST /api/rest_j/v1/entrance/submit
 
 如何定义可复用EngineConn?指能匹配计算任务的所有标签要求的,且EngineConn本身健康状态为Healthy(负载低且实际EngineConn状态为Idle)的,然后再按规则对所有满足条件的EngineConn进行排序选择,最终锁定一个最佳的EngineConn。
 
-如果该用户不存在可复用的EngineConn,则此时会触发EngineConn新增流程,关于EngineConn新增流程,请参阅:[EngineConn新增流程](add_an_engine_conn.md) 。
+如果该用户不存在可复用的EngineConn,则此时会触发EngineConn新增流程,关于EngineConn新增流程,请参阅:[EngineConn新增流程](engine/add_an_engine_conn.md) 。
 
 #### 2.2 计算任务编排
 
diff --git a/static/Images/Architecture/Commons/var_arc.png b/static/Images/Architecture/Commons/var_arc.png
new file mode 100644
index 0000000000..a8c10bd4c6
Binary files /dev/null and b/static/Images/Architecture/Commons/var_arc.png differ
diff --git a/versioned_docs/version-1.1.1/api/linkis_task_operator.md b/versioned_docs/version-1.1.1/api/linkis_task_operator.md
index 203fa570ba..815c91477d 100644
--- a/versioned_docs/version-1.1.1/api/linkis_task_operator.md
+++ b/versioned_docs/version-1.1.1/api/linkis_task_operator.md
@@ -9,61 +9,65 @@ sidebar_position: 2
 
 ```json
 {
- "method": "",
- "status": 0,
- "message": "",
- "data": {}
+  "method": "",
+  "status": 0,
+  "message": "",
+  "data": {}
 }
 ```
 
 **Convention**:
 
- - method: Returns the requested Restful API URI, which is mainly used in WebSocket mode.
- - status: return status information, where: -1 means no login, 0 means success, 1 means error, 2 means verification failed, 3 means no access to the interface.
- - data: return specific data.
- - message: return the requested prompt message. If the status is not 0, the message returned is an error message, and the data may have a stack field, which returns specific stack information.
- 
-For more information about the Linkis Restful interface specification, please refer to: [Linkis Restful Interface Specification](/community/development_specification/api)
-
-### 1. Submit for Execution
+- method: Returns the requested Restful API URI, which is mainly used in WebSocket mode.
+- status: return status information, where: -1 means no login, 0 means success, 1 means error, 2 means verification failed, 3 means no access to the interface.
+- data: return specific data.
+- message: return the requested prompt message. If the status is not 0, the message returned is an error message, and the data may have a stack field, which returns specific stack information.
 
-- Interface `/api/rest_j/v1/entrance/execute`
+For more information about the Linkis Restful interface specification, please refer to: [Linkis Restful Interface Specification](/community/development_specification/api)
 
-- Submission method `POST`
+### 1. Submit task
 
-```json
-{
-    "executeApplicationName": "hive", //Engine type
-    "requestApplicationName": "dss", //Client service type
-    "executionCode": "show tables",
-    "params": {"variable": {}, "configuration": {}},
-    "runType": "hql", //The type of script to run
-    "source": {"scriptPath":"file:///tmp/hadoop/1.hql"}
-}
-```
 
 - Interface `/api/rest_j/v1/entrance/submit`
 
 - Submission method `POST`
 
+- Request Parameters
+
 ```json
 {
-    "executionContent": {"code": "show tables", "runType": "sql"},
-    "params": {"variable": {}, "configuration": {}},
-    "source": {"scriptPath": "file:///mnt/bdp/hadoop/1.hql"},
-    "labels": {
-        "engineType": "spark-2.4.3",
-        "userCreator": "hadoop-IDE"
+  "executionContent": {
+    "code": "show tables",
+    "runType": "sql"
+  },
+  "params": {
+    "variable": {// task variable 
+      "testvar": "hello" 
+    },
+    "configuration": {
+      "runtime": {// task runtime params 
+        "jdbc.url": "XX"
+      },
+      "startup": { // ec start up params 
+        "spark.executor.cores": "4"
+      }
     }
+  },
+  "source": { //task source information
+    "scriptPath": "file:///tmp/hadoop/test.sql"
+  },
+  "labels": {
+    "engineType": "spark-2.4.3",
+    "userCreator": "hadoop-IDE"
+  }
 }
 ```
 
-
--Return to example
+- Sample Response
 
 ```json
 {
- "method": "/api/rest_j/v1/entrance/execute",
+ "method": "/api/rest_j/v1/entrance/submit",
  "status": 0,
  "message": "Request executed successfully",
  "data": {
@@ -84,7 +88,7 @@ For more information about the Linkis Restful interface specification, please re
 
 - Submission method `GET`
 
-- Return to example
+- Sample Response
 
 ```json
 {
@@ -106,7 +110,7 @@ For more information about the Linkis Restful interface specification, please re
 
 - The request parameter fromLine refers to the number of lines from which to get, and size refers to the number of lines of logs that this request gets
 
-- Return example, where the returned fromLine needs to be used as a parameter for the next request of this interface
+- Sample Response, where the returned fromLine needs to be used as a parameter for the next request of this interface
 
 ```json
 {
@@ -121,38 +125,47 @@ For more information about the Linkis Restful interface specification, please re
 }
 ```
 
-### 4. Get Progress
+### 4. Get Progress and Resource
 
-- Interface `/api/rest_j/v1/entrance/${execID}/progress`
+- Interface `/api/rest_j/v1/entrance/${execID}/progressWithResource`
 
 - Submission method `GET`
 
-- Return to example
+- Sample Response
 
 ```json
 {
-  "method": "/api/rest_j/v1/entrance/{execID}/progress",
+  "method": "/api/entrance/exec_id018017linkis-cg-entrance127.0.0.1:9205IDE_hadoop_spark_2/progressWithResource",
   "status": 0,
-  "message": "Return progress information",
+  "message": "OK",
   "data": {
-    "execID": "${execID}",
-    "progress": 0.2,
-    "progressInfo": [
+    "yarnMetrics": {
+      "yarnResource": [
         {
-        "id": "job-1",
-        "succeedTasks": 2,
-        "failedTasks": 0,
-        "runningTasks": 5,
-        "totalTasks": 10
-        },
-        {
-        "id": "job-2",
-        "succeedTasks": 5,
-        "failedTasks": 0,
-        "runningTasks": 5,
-        "totalTasks": 10
+          "queueMemory": 9663676416,
+          "queueCores": 6,
+          "queueInstances": 0,
+          "jobStatus": "COMPLETED",
+          "applicationId": "application_1655364300926_69504",
+          "queue": "default"
         }
-    ]
+      ],
+      "memoryPercent": 0.009,
+      "memoryRGB": "green",
+      "coreRGB": "green",
+      "corePercent": 0.02
+    },
+    "progress": 0.5,
+    "progressInfo": [
+      {
+        "succeedTasks": 4,
+        "failedTasks": 0,
+        "id": "jobId-1(linkis-spark-mix-code-1946915)",
+        "totalTasks": 6,
+        "runningTasks": 0
+      }
+    ],
+    "execID": "exec_id018017linkis-cg-entrance127.0.0.1:9205IDE_hadoop_spark_2"
   }
 }
 ```
@@ -163,6 +176,8 @@ For more information about the Linkis Restful interface specification, please re
 
 - Submission method `POST`
 
+- Sample Response
+
 ```json
 {
  "method": "/api/rest_j/v1/entrance/{execID}/kill",
@@ -172,4 +187,240 @@ For more information about the Linkis Restful interface specification, please re
    "execID":"${execID}"
   }
 }
+```
+
+### 6. Get task info
+
+- Interface `/api/rest_j/v1/jobhistory/{id}/get`
+
+- Submission method `GET`
+
+**Request Parameters**:
+
+| Parameter name | Parameter description | Request type | Required | Data type | schema |
+| -------- | -------- | ----- | -------- | -------- | ------ |
+|id|task id|path|true|string||
+
+
+- Sample Response
+
+````json
+{
+    "method": null,
+    "status": 0,
+    "message": "OK",
+    "data": {
+        "task": {
+                "taskID": 1,
+                "instance": "xxx",
+                "execId": "exec-id-xxx",
+                "umUser": "test",
+                "engineInstance": "xxx",
+                "progress": "10%",
+                "logPath": "hdfs://xxx/xxx/xxx",
+                "resultLocation": "hdfs://xxx/xxx/xxx",
+                "status": "FAILED",
+                "createdTime": "2019-01-01 00:00:00",
+                "updatedTime": "2019-01-01 01:00:00",
+                "engineType": "spark",
+                "errorCode": 100,
+                "errDesc": "Task Failed with error code 100",
+                "executeApplicationName": "hello world",
+                "requestApplicationName": "hello world",
+                "runType": "xxx",
+                "paramJson": "{\"xxx\":\"xxx\"}",
+                "costTime": 10000,
+                "strongerExecId": "execId-xxx",
+                "sourceJson": "{\"xxx\":\"xxx\"}"
+        }
+    }
+}
+````
+
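+Below is a minimal Python sketch of how a client might poll this interface with the `taskID` returned by the submit interface until the task reaches a terminal state. The gateway address, the authenticated session and the exact terminal status names are illustrative assumptions, not part of this specification.
+
+```python
+import time
+import requests
+
+BASE_URL = "http://127.0.0.1:9001"  # assumed Linkis gateway address
+session = requests.Session()        # assumed to already carry a valid login cookie
+
+def wait_for_task(task_id, interval=5):
+    """Poll /api/rest_j/v1/jobhistory/{id}/get until the task finishes."""
+    while True:
+        resp = session.get(f"{BASE_URL}/api/rest_j/v1/jobhistory/{task_id}/get")
+        resp.raise_for_status()
+        task = resp.json()["data"]["task"]
+        print(task["status"], task["progress"])
+        # Terminal status names are assumed here and may differ between versions
+        if task["status"].upper() in ("SUCCEED", "FAILED", "CANCELLED", "TIMEOUT"):
+            return task
+        time.sleep(interval)
+```
+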
+### 7. Get result set info
+
+Support for multiple result sets
+
+- Interface `/api/rest_j/v1/filesystem/getDirFileTrees`
+
+- Submission method `GET`
+
+**Request Parameters**:
+
+| Parameter name | Parameter description | Request type | Required | Data type | schema |
+| -------- | -------- | ----- | -------- | -------- | ------ |
+|path|result directory |query|true|string||
+
+
+- Sample Response
+
+````json
+{
+  "method": "/api/filesystem/getDirFileTrees",
+  "status": 0,
+  "message": "OK",
+  "data": {
+    "dirFileTrees": {
+      "name": "1946923",
+      "path": "hdfs:///tmp/hadoop/linkis/2022-07-06/211446/IDE/1946923",
+      "properties": null,
+      "children": [
+        {
+          "name": "_0.dolphin",
+          "path": "hdfs:///tmp/hadoop/linkis/2022-07-06/211446/IDE/1946923/_0.dolphin",//result set 1
+          "properties": {
+            "size": "7900",
+            "modifytime": "1657113288360"
+          },
+          "children": null,
+          "isLeaf": true,
+          "parentPath": "hdfs:///tmp/hadoop/linkis/2022-07-06/211446/IDE/1946923"
+        },
+        {
+          "name": "_1.dolphin",
+          "path": "hdfs:///tmp/hadoop/linkis/2022-07-06/211446/IDE/1946923/_1.dolphin",//result set 2
+          "properties": {
+            "size": "7900",
+            "modifytime": "1657113288614"
+          },
+          "children": null,
+          "isLeaf": true,
+          "parentPath": "hdfs:///tmp/hadoop/linkis/2022-07-06/211446/IDE/1946923"
+        }
+      ],
+      "isLeaf": false,
+      "parentPath": null
+    }
+  }
+}
+````
+
+### 8. Get result content
+
+- Interface `/api/rest_j/v1/filesystem/openFile`
+
+- Submission method `GET`
+
+**Request Parameters**:
+
+| Parameter name | Parameter description | Request type | Required | Data type | schema |
+| -------- | -------- | ----- | -------- | -------- | ------ |
+|path|result path|query|true|string||
+|charset|Charset|query|false|string||
+|page|page number|query|false|ref||
+|pageSize|page size|query|false|ref||
+
+
+- Sample Response
+
+````json
+{
+  "method": "/api/filesystem/openFile",
+  "status": 0,
+  "message": "OK",
+  "data": {
+    "metadata": [
+      {
+        "columnName": "count(1)",
+        "comment": "NULL",
+        "dataType": "long"
+      }
+    ],
+    "totalPage": 0,
+    "totalLine": 1,
+    "page": 1,
+    "type": "2",
+    "fileContent": [
+      [
+        "28"
+      ]
+    ]
+  }
+}
+````
+
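+As an illustration, the `page` and `pageSize` parameters can be used to read a large result set page by page. The sketch below is a hedged Python example; the gateway address and session handling are assumptions.
+
+```python
+import requests
+
+BASE_URL = "http://127.0.0.1:9001"  # assumed Linkis gateway address
+session = requests.Session()        # assumed to already carry a valid login cookie
+
+def read_result(path, page_size=5000):
+    """Read a result file page by page via filesystem/openFile."""
+    page, metadata, rows = 1, None, []
+    while True:
+        resp = session.get(
+            f"{BASE_URL}/api/rest_j/v1/filesystem/openFile",
+            params={"path": path, "page": page, "pageSize": page_size},
+        )
+        resp.raise_for_status()
+        data = resp.json()["data"]
+        metadata = data["metadata"]
+        rows.extend(data["fileContent"])
+        # the sample above shows totalPage 0 when everything fits on one page
+        total_page = data.get("totalPage") or 1
+        if page >= total_page:
+            break
+        page += 1
+    return metadata, rows
+```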
+
+### 9. Get Result by stream
+
+Get the result as a CSV or Excel file
+
+- Interface `/api/rest_j/v1/filesystem/resultsetToExcel`
+
+- Submission method `GET`
+
+**Request Parameters**:
+
+| Parameter name | Parameter description | Request type | Required | Data type | schema |
+| -------- | -------- | ----- | -------- | -------- | ------ |
+|autoFormat|Whether to automatically convert the format|query|false|boolean||
+|charset|charset|query|false|string||
+|csvSeperator|csv separator|query|false|string||
+|limit|row limit|query|false|ref||
+|nullValue|null value|query|false|string||
+|outputFileName|Output file name|query|false|string||
+|outputFileType|Output file type csv or excel|query|false|string||
+|path|result path|query|false|string||
+|quoteRetouchEnable|Whether to enable quote retouching|query|false|boolean||
+|sheetName|sheet name|query|false|string||
+
+
+- Response
+
+````json
+binary stream
+````
+
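+A hedged Python sketch of streaming a result set to a local CSV file through this interface; the gateway address and session handling are assumptions.
+
+```python
+import requests
+
+BASE_URL = "http://127.0.0.1:9001"  # assumed Linkis gateway address
+session = requests.Session()        # assumed to already carry a valid login cookie
+
+def download_result_csv(path, out_file="result.csv"):
+    """Stream a result set to a local CSV file via filesystem/resultsetToExcel."""
+    params = {
+        "path": path,
+        "outputFileType": "csv",
+        "outputFileName": "result",
+        "charset": "utf-8",
+    }
+    url = f"{BASE_URL}/api/rest_j/v1/filesystem/resultsetToExcel"
+    with session.get(url, params=params, stream=True) as resp:
+        resp.raise_for_status()
+        with open(out_file, "wb") as f:
+            for chunk in resp.iter_content(chunk_size=8192):
+                f.write(chunk)
+```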
+
+### 10. Compatible with 0.x task submission interface
+
+- Interface `/api/rest_j/v1/entrance/execute`
+
+- Submission method `POST`
+
+
+- Request Parameters
+
+
+```json
+{
+    "executeApplicationName": "hive", //Engine type
+    "requestApplicationName": "dss", //Client service type
+    "executionCode": "show tables",
+    "params": {
+      "variable": {// task variable 
+        "testvar": "hello"
+      },
+      "configuration": {
+        "runtime": {// task runtime params 
+          "jdbc.url": "XX"
+        },
+        "startup": { // ec start up params 
+          "spark.executor.cores": "4"
+        }
+      }
+    },
+    "source": { //task source information
+      "scriptPath": "file:///tmp/hadoop/test.sql"
+    },
+    "labels": {
+      "engineType": "spark-2.4.3",
+      "userCreator": "hadoop-IDE"
+    },
+    "runType": "hql", //The type of script to run
+    "source": {"scriptPath":"file:///tmp/hadoop/1.hql"}
+}
+```
+
+- Sample Response
+
+```json
+{
+  "method": "/api/rest_j/v1/entrance/execute",
+  "status": 0,
+  "message": "Request executed successfully",
+  "data": {
+    "execID": "030418IDEhivebdpdwc010004:10087IDE_hadoop_21",
+    "taskID": "123"
+  }
+}
 ```
\ No newline at end of file
diff --git a/versioned_docs/version-1.1.2/api/linkis_task_operator.md b/versioned_docs/version-1.1.2/api/linkis_task_operator.md
index 203fa570ba..815c91477d 100644
--- a/versioned_docs/version-1.1.2/api/linkis_task_operator.md
+++ b/versioned_docs/version-1.1.2/api/linkis_task_operator.md
@@ -9,61 +9,65 @@ sidebar_position: 2
 
 ```json
 {
- "method": "",
- "status": 0,
- "message": "",
- "data": {}
+  "method": "",
+  "status": 0,
+  "message": "",
+  "data": {}
 }
 ```
 
 **Convention**:
 
- - method: Returns the requested Restful API URI, which is mainly used in WebSocket mode.
- - status: return status information, where: -1 means no login, 0 means success, 1 means error, 2 means verification failed, 3 means no access to the interface.
- - data: return specific data.
- - message: return the requested prompt message. If the status is not 0, the message returned is an error message, and the data may have a stack field, which returns specific stack information.
- 
-For more information about the Linkis Restful interface specification, please refer to: [Linkis Restful Interface Specification](/community/development_specification/api)
-
-### 1. Submit for Execution
+- method: Returns the requested Restful API URI, which is mainly used in WebSocket mode.
+- status: return status information, where: -1 means no login, 0 means success, 1 means error, 2 means verification failed, 3 means no access to the interface.
+- data: return specific data.
+- message: return the requested prompt message. If the status is not 0, the message returned is an error message, and the data may have a stack field, which returns specific stack information.
 
-- Interface `/api/rest_j/v1/entrance/execute`
+For more information about the Linkis Restful interface specification, please refer to: [Linkis Restful Interface Specification](/community/development_specification/api)
 
-- Submission method `POST`
+### 1. Submit task
 
-```json
-{
-    "executeApplicationName": "hive", //Engine type
-    "requestApplicationName": "dss", //Client service type
-    "executionCode": "show tables",
-    "params": {"variable": {}, "configuration": {}},
-    "runType": "hql", //The type of script to run
-    "source": {"scriptPath":"file:///tmp/hadoop/1.hql"}
-}
-```
 
 - Interface `/api/rest_j/v1/entrance/submit`
 
 - Submission method `POST`
 
+- Request Parameters
+
 ```json
 {
-    "executionContent": {"code": "show tables", "runType": "sql"},
-    "params": {"variable": {}, "configuration": {}},
-    "source": {"scriptPath": "file:///mnt/bdp/hadoop/1.hql"},
-    "labels": {
-        "engineType": "spark-2.4.3",
-        "userCreator": "hadoop-IDE"
+  "executionContent": {
+    "code": "show tables",
+    "runType": "sql"
+  },
+  "params": {
+    "variable": {// task variable 
+      "testvar": "hello" 
+    },
+    "configuration": {
+      "runtime": {// task runtime params 
+        "jdbc.url": "XX"
+      },
+      "startup": { // ec start up params 
+        "spark.executor.cores": "4"
+      }
     }
+  },
+  "source": { //task source information
+    "scriptPath": "file:///tmp/hadoop/test.sql"
+  },
+  "labels": {
+    "engineType": "spark-2.4.3",
+    "userCreator": "hadoop-IDE"
+  }
 }
 ```
 
-
--Return to example
+- Sample Response
 
 ```json
 {
- "method": "/api/rest_j/v1/entrance/execute",
+ "method": "/api/rest_j/v1/entrance/submit",
  "status": 0,
  "message": "Request executed successfully",
  "data": {
@@ -84,7 +88,7 @@ For more information about the Linkis Restful interface specification, please re
 
 - Submission method `GET`
 
-- Return to example
+- Sample Response
 
 ```json
 {
@@ -106,7 +110,7 @@ For more information about the Linkis Restful interface specification, please re
 
 - The request parameter fromLine refers to the number of lines from which to get, and size refers to the number of lines of logs that this request gets
 
-- Return example, where the returned fromLine needs to be used as a parameter for the next request of this interface
+- Sample Response, where the returned fromLine needs to be used as a parameter for the next request of this interface
 
 ```json
 {
@@ -121,38 +125,47 @@ For more information about the Linkis Restful interface specification, please re
 }
 ```
 
-### 4. Get Progress
+### 4. Get Progress and Resource
 
-- Interface `/api/rest_j/v1/entrance/${execID}/progress`
+- Interface `/api/rest_j/v1/entrance/${execID}/progressWithResource`
 
 - Submission method `GET`
 
-- Return to example
+- Sample Response
 
 ```json
 {
-  "method": "/api/rest_j/v1/entrance/{execID}/progress",
+  "method": "/api/entrance/exec_id018017linkis-cg-entrance127.0.0.1:9205IDE_hadoop_spark_2/progressWithResource",
   "status": 0,
-  "message": "Return progress information",
+  "message": "OK",
   "data": {
-    "execID": "${execID}",
-    "progress": 0.2,
-    "progressInfo": [
+    "yarnMetrics": {
+      "yarnResource": [
         {
-        "id": "job-1",
-        "succeedTasks": 2,
-        "failedTasks": 0,
-        "runningTasks": 5,
-        "totalTasks": 10
-        },
-        {
-        "id": "job-2",
-        "succeedTasks": 5,
-        "failedTasks": 0,
-        "runningTasks": 5,
-        "totalTasks": 10
+          "queueMemory": 9663676416,
+          "queueCores": 6,
+          "queueInstances": 0,
+          "jobStatus": "COMPLETED",
+          "applicationId": "application_1655364300926_69504",
+          "queue": "default"
         }
-    ]
+      ],
+      "memoryPercent": 0.009,
+      "memoryRGB": "green",
+      "coreRGB": "green",
+      "corePercent": 0.02
+    },
+    "progress": 0.5,
+    "progressInfo": [
+      {
+        "succeedTasks": 4,
+        "failedTasks": 0,
+        "id": "jobId-1(linkis-spark-mix-code-1946915)",
+        "totalTasks": 6,
+        "runningTasks": 0
+      }
+    ],
+    "execID": "exec_id018017linkis-cg-entrance127.0.0.1:9205IDE_hadoop_spark_2"
   }
 }
 ```
@@ -163,6 +176,8 @@ For more information about the Linkis Restful interface specification, please re
 
 - Submission method `POST`
 
+- Sample Response
+
 ```json
 {
  "method": "/api/rest_j/v1/entrance/{execID}/kill",
@@ -172,4 +187,240 @@ For more information about the Linkis Restful interface specification, please re
    "execID":"${execID}"
   }
 }
+```
+
+### 6. Get task info
+
+- Interface `/api/rest_j/v1/jobhistory/{id}/get`
+
+- Submission method `GET`
+
+**Request Parameters**:
+
+| Parameter name | Parameter description | Request type | Required | Data type | schema |
+| -------- | -------- | ----- | -------- | -------- | ------ |
+|id|task id|path|true|string||
+
+
+- Sample Response
+
+````json
+{
+    "method": null,
+    "status": 0,
+    "message": "OK",
+    "data": {
+        "task": {
+                "taskID": 1,
+                "instance": "xxx",
+                "execId": "exec-id-xxx",
+                "umUser": "test",
+                "engineInstance": "xxx",
+                "progress": "10%",
+                "logPath": "hdfs://xxx/xxx/xxx",
+                "resultLocation": "hdfs://xxx/xxx/xxx",
+                "status": "FAILED",
+                "createdTime": "2019-01-01 00:00:00",
+                "updatedTime": "2019-01-01 01:00:00",
+                "engineType": "spark",
+                "errorCode": 100,
+                "errDesc": "Task Failed with error code 100",
+                "executeApplicationName": "hello world",
+                "requestApplicationName": "hello world",
+                "runType": "xxx",
+                "paramJson": "{\"xxx\":\"xxx\"}",
+                "costTime": 10000,
+                "strongerExecId": "execId-xxx",
+                "sourceJson": "{\"xxx\":\"xxx\"}"
+        }
+    }
+}
+````
+
+### 7. Get result set info
+
+Support for multiple result sets
+
+- Interface `/api/rest_j/v1/filesystem/getDirFileTrees`
+
+- Submission method `GET`
+
+**Request Parameters**:
+
+| Parameter name | Parameter description | Request type | Required | Data type | schema |
+| -------- | -------- | ----- | -------- | -------- | ------ |
+|path|result directory |query|true|string||
+
+
+- Sample Response
+
+````json
+{
+  "method": "/api/filesystem/getDirFileTrees",
+  "status": 0,
+  "message": "OK",
+  "data": {
+    "dirFileTrees": {
+      "name": "1946923",
+      "path": "hdfs:///tmp/hadoop/linkis/2022-07-06/211446/IDE/1946923",
+      "properties": null,
+      "children": [
+        {
+          "name": "_0.dolphin",
+          "path": "hdfs:///tmp/hadoop/linkis/2022-07-06/211446/IDE/1946923/_0.dolphin",//result set 1
+          "properties": {
+            "size": "7900",
+            "modifytime": "1657113288360"
+          },
+          "children": null,
+          "isLeaf": true,
+          "parentPath": "hdfs:///tmp/hadoop/linkis/2022-07-06/211446/IDE/1946923"
+        },
+        {
+          "name": "_1.dolphin",
+          "path": "hdfs:///tmp/hadoop/linkis/2022-07-06/211446/IDE/1946923/_1.dolphin",//result set 2
+          "properties": {
+            "size": "7900",
+            "modifytime": "1657113288614"
+          },
+          "children": null,
+          "isLeaf": true,
+          "parentPath": "hdfs:///tmp/hadoop/linkis/2022-07-06/211446/IDE/1946923"
+        }
+      ],
+      "isLeaf": false,
+      "parentPath": null
+    }
+  }
+}
+````
+
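+For illustration, the result directory of a task (for example the `resultLocation` returned by the task info interface) can be listed first, and each child path then passed to `filesystem/openFile`. A hedged Python sketch, with the gateway address and session handling assumed:
+
+```python
+import requests
+
+BASE_URL = "http://127.0.0.1:9001"  # assumed Linkis gateway address
+session = requests.Session()        # assumed to already carry a valid login cookie
+
+def list_result_files(result_dir):
+    """List result set file paths under a task's result directory via getDirFileTrees."""
+    resp = session.get(
+        f"{BASE_URL}/api/rest_j/v1/filesystem/getDirFileTrees",
+        params={"path": result_dir},
+    )
+    resp.raise_for_status()
+    tree = resp.json()["data"]["dirFileTrees"]
+    return [child["path"] for child in (tree.get("children") or [])]
+```
+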
+### 8. Get result content
+
+- Interface `/api/rest_j/v1/filesystem/openFile`
+
+- Submission method `GET`
+
+**Request Parameters**:
+
+| Parameter name | Parameter description | Request type | Required | Data type | schema |
+| -------- | -------- | ----- | -------- | -------- | ------ |
+|path|result path|query|true|string||
+|charset|Charset|query|false|string||
+|page|page number|query|false|ref||
+|pageSize|page size|query|false|ref||
+
+
+- Sample Response
+
+````json
+{
+  "method": "/api/filesystem/openFile",
+  "status": 0,
+  "message": "OK",
+  "data": {
+    "metadata": [
+      {
+        "columnName": "count(1)",
+        "comment": "NULL",
+        "dataType": "long"
+      }
+    ],
+    "totalPage": 0,
+    "totalLine": 1,
+    "page": 1,
+    "type": "2",
+    "fileContent": [
+      [
+        "28"
+      ]
+    ]
+  }
+}
+````
+
+
+### 9. Get Result by stream
+
+Get the result as a CSV or Excel file
+
+- Interface `/api/rest_j/v1/filesystem/resultsetToExcel`
+
+- Submission method `GET`
+
+**Request Parameters**:
+
+| Parameter name | Parameter description | Request type | Required | Data type | schema |
+| -------- | -------- | ----- | -------- | -------- | ------ |
+|autoFormat|Whether to automatically convert the format|query|false|boolean||
+|charset|charset|query|false|string||
+|csvSeperator|csv separator|query|false|string||
+|limit|row limit|query|false|ref||
+|nullValue|null value|query|false|string||
+|outputFileName|Output file name|query|false|string||
+|outputFileType|Output file type csv or excel|query|false|string||
+|path|result path|query|false|string||
+|quoteRetouchEnable|Whether to enable quote retouching|query|false|boolean||
+|sheetName|sheet name|query|false|string||
+
+
+- Response
+
+````json
+binary stream
+````
+
+
+### 10. Compatible with 0.x task submission interface
+
+- Interface `/api/rest_j/v1/entrance/execute`
+
+- Submission method `POST`
+
+
+- Request Parameters
+
+
+```json
+{
+    "executeApplicationName": "hive", //Engine type
+    "requestApplicationName": "dss", //Client service type
+    "executionCode": "show tables",
+    "params": {
+      "variable": {// task variable 
+        "testvar": "hello"
+      },
+      "configuration": {
+        "runtime": {// task runtime params 
+          "jdbc.url": "XX"
+        },
+        "startup": { // ec start up params 
+          "spark.executor.cores": "4"
+        }
+      }
+    },
+    "source": { //task source information
+      "scriptPath": "file:///tmp/hadoop/test.sql"
+    },
+    "labels": {
+      "engineType": "spark-2.4.3",
+      "userCreator": "hadoop-IDE"
+    },
+    "runType": "hql", //The type of script to run
+    "source": {"scriptPath":"file:///tmp/hadoop/1.hql"}
+}
+```
+
+- Sample Response
+
+```json
+{
+  "method": "/api/rest_j/v1/entrance/execute",
+  "status": 0,
+  "message": "Request executed successfully",
+  "data": {
+    "execID": "030418IDEhivebdpdwc010004:10087IDE_hadoop_21",
+    "taskID": "123"
+  }
+}
 ```
\ No newline at end of file
diff --git a/docs/architecture/add_an_engine_conn.md b/versioned_docs/version-1.1.2/architecture/computation_governance_services/engine/add_an_engine_conn.md
similarity index 99%
rename from docs/architecture/add_an_engine_conn.md
rename to versioned_docs/version-1.1.2/architecture/computation_governance_services/engine/add_an_engine_conn.md
index b69da6ef2a..b22e51cac4 100644
--- a/docs/architecture/add_an_engine_conn.md
+++ b/versioned_docs/version-1.1.2/architecture/computation_governance_services/engine/add_an_engine_conn.md
@@ -1,5 +1,5 @@
 ---
-title: Add an EngineConn
+title: Start an EngineConn
 sidebar_position: 3
 ---
 # How to add an EngineConn
diff --git a/versioned_docs/version-1.1.2/architecture/job_submission_preparation_and_execution_process.md b/versioned_docs/version-1.1.2/architecture/computation_governance_services/job_submission_preparation_and_execution_process.md
similarity index 99%
rename from versioned_docs/version-1.1.2/architecture/job_submission_preparation_and_execution_process.md
rename to versioned_docs/version-1.1.2/architecture/computation_governance_services/job_submission_preparation_and_execution_process.md
index dd6e8436c9..1eab642967 100644
--- a/versioned_docs/version-1.1.2/architecture/job_submission_preparation_and_execution_process.md
+++ b/versioned_docs/version-1.1.2/architecture/computation_governance_services/job_submission_preparation_and_execution_process.md
@@ -62,7 +62,7 @@ If the user has a reusable EngineConn in LinkisManager, the EngineConn is direct
 
 How to define a reusable EngineConn? It refers to those that can match all the label requirements of the computing task, and the EngineConn's own health status is Healthy (the load is low and the actual status is Idle). Then, all the EngineConn that meets the conditions are sorted and selected according to the rules, and finally the best one is locked.
 
-If the user does not have a reusable EngineConn, a process to request a new EngineConn will be triggered at this time. Regarding the process, please refer to: [How to add an EngineConn](add_an_engine_conn.md).
+If the user does not have a reusable EngineConn, a process to request a new EngineConn will be triggered at this time. Regarding the process, please refer to: [How to add an EngineConn](engine/add_an_engine_conn.md).
 
 #### 2.2 Orchestrate a computing task
 
diff --git a/versioned_docs/version-1.1.2/architecture/computation_governance_services/linkis-cli.md b/versioned_docs/version-1.1.2/architecture/computation_governance_services/linkis-cli.md
index 8951033d50..66005d8bd2 100644
--- a/versioned_docs/version-1.1.2/architecture/computation_governance_services/linkis-cli.md
+++ b/versioned_docs/version-1.1.2/architecture/computation_governance_services/linkis-cli.md
@@ -1,6 +1,6 @@
 ---
 title: Linkis-Client Architecture Design
-sidebar_position: 4
+sidebar_position: 3
 ---
 
 Provide users with a lightweight client that submits tasks to Linkis for execution.
diff --git a/docs/architecture/proxy_user.md b/versioned_docs/version-1.1.2/architecture/computation_governance_services/proxy_user.md
similarity index 98%
rename from docs/architecture/proxy_user.md
rename to versioned_docs/version-1.1.2/architecture/computation_governance_services/proxy_user.md
index 9f290e99aa..c6336ec547 100644
--- a/docs/architecture/proxy_user.md
+++ b/versioned_docs/version-1.1.2/architecture/computation_governance_services/proxy_user.md
@@ -1,6 +1,6 @@
 ---
 title: Proxy User Mode
-sidebar_position: 2
+sidebar_position: 4
 ---
 
 ## 1 Background


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@linkis.apache.org
For additional commands, e-mail: commits-help@linkis.apache.org