Posted to commits@linkis.apache.org by pe...@apache.org on 2021/10/21 08:59:21 UTC

[incubator-linkis-website] 16/43: add some docs and faq

This is an automated email from the ASF dual-hosted git repository.

peacewong pushed a commit to branch dev
in repository https://gitbox.apache.org/repos/asf/incubator-linkis-website.git

commit ef901667dfc25db2e989814ea3e382fe36c4bbc5
Author: casionone <ca...@gmail.com>
AuthorDate: Mon Oct 11 20:39:54 2021 +0800

    add some docs and faq
---
 Linkis-Doc-master/LANGS.md                         |   2 +
 Linkis-Doc-master/README.md                        | 114 ++++++
 Linkis-Doc-master/README_CN.md                     | 105 ++++++
 .../en_US/API_Documentations/JDBC_API_Document.md  |  45 +++
 ...sk_submission_and_execution_RestAPI_document.md | 170 +++++++++
 .../en_US/API_Documentations/Login_API.md          | 125 +++++++
 .../en_US/API_Documentations/README.md             |   8 +
 .../EngineConn/README.md                           |  99 +++++
 .../EngineConnManager/Images/ECM-01.png            | Bin 0 -> 34340 bytes
 .../EngineConnManager/Images/ECM-02.png            | Bin 0 -> 25340 bytes
 .../EngineConnManager/README.md                    |  45 +++
 .../EngineConnPlugin/README.md                     |  68 ++++
 .../LinkisManager/AppManager.md                    |  33 ++
 .../LinkisManager/LabelManager.md                  |  38 ++
 .../LinkisManager/README.md                        |  41 +++
 .../LinkisManager/ResourceManager.md               | 132 +++++++
 .../Computation_Governance_Services/README.md      |  40 +++
 .../DifferenceBetween1.0&0.x.md                    |  50 +++
 .../How_to_add_an_EngineConn.md                    | 105 ++++++
 ...submission_preparation_and_execution_process.md | 138 +++++++
 .../Microservice_Governance_Services/Gateway.md    |  34 ++
 .../Microservice_Governance_Services/README.md     |  32 ++
 .../Public_Enhancement_Services/BML.md             |  93 +++++
 .../ContextService/ContextService_Cache.md         |  95 +++++
 .../ContextService/ContextService_Client.md        |  61 ++++
 .../ContextService/ContextService_HighAvailable.md |  86 +++++
 .../ContextService/ContextService_Listener.md      |  33 ++
 .../ContextService/ContextService_Persistence.md   |   8 +
 .../ContextService/ContextService_Search.md        | 127 +++++++
 .../ContextService/ContextService_Service.md       |  53 +++
 .../ContextService/README.md                       | 123 +++++++
 .../Public_Enhancement_Services/PublicService.md   |  34 ++
 .../Public_Enhancement_Services/README.md          |  91 +++++
 .../en_US/Architecture_Documents/README.md         |  18 +
 .../Deployment_Documents/Cluster_Deployment.md     |  98 +++++
 .../EngineConnPlugin_installation_document.md      |  82 +++++
 ...75\262\345\276\256\346\234\215\345\212\241.png" | Bin 0 -> 130148 bytes
 .../Installation_Hierarchical_Structure.md         | 198 ++++++++++
 .../Deployment_Documents/Quick_Deploy_Linkis1.0.md | 246 +++++++++++++
 .../en_US/Development_Documents/Contributing.md    | 195 ++++++++++
 .../Development_Specification/API.md               | 143 ++++++++
 .../Development_Specification/Concurrent.md        |  17 +
 .../Development_Specification/Exception_Catch.md   |   9 +
 .../Development_Specification/Exception_Throws.md  |  52 +++
 .../Development_Specification/Log.md               |  13 +
 .../Development_Specification/Path_Usage.md        |  15 +
 .../Development_Specification/README.md            |   9 +
 .../Linkis_Compilation_Document.md                 | 135 +++++++
 .../Linkis_Compile_and_Package.md                  | 155 ++++++++
 .../en_US/Development_Documents/Linkis_DEBUG.md    | 141 ++++++++
 .../New_EngineConn_Development.md                  |  77 ++++
 .../Hive_User_Manual.md                            |  81 +++++
 .../JDBC_User_Manual.md                            |  53 +++
 .../Python_User_Manual.md                          |  61 ++++
 .../en_US/Engine_Usage_Documentations/README.md    |  25 ++
 .../Shell_User_Manual.md                           |  55 +++
 .../Spark_User_Manual.md                           |  91 +++++
 .../add_an_EngineConn_flow_chart.png               | Bin 0 -> 59893 bytes
 .../Architecture/EngineConn/engineconn-01.png      | Bin 0 -> 157753 bytes
 .../Gateway/gateway_server_dispatcher.png          | Bin 0 -> 83743 bytes
 .../Architecture/Gateway/gateway_server_global.png | Bin 0 -> 85272 bytes
 .../Architecture/Gateway/gatway_websocket.png      | Bin 0 -> 37769 bytes
 .../execution.png                                  | Bin 0 -> 31078 bytes
 .../orchestrate.png                                | Bin 0 -> 31095 bytes
 .../overall.png                                    | Bin 0 -> 231192 bytes
 .../physical_tree.png                              | Bin 0 -> 79471 bytes
 .../result_acquisition.png                         | Bin 0 -> 41007 bytes
 .../submission.png                                 | Bin 0 -> 12946 bytes
 .../LabelManager/label_manager_builder.png         | Bin 0 -> 62978 bytes
 .../LabelManager/label_manager_global.png          | Bin 0 -> 14988 bytes
 .../LabelManager/label_manager_scorer.png          | Bin 0 -> 72977 bytes
 .../Linkis0.X-NewEngine-architecture.png           | Bin 0 -> 244826 bytes
 .../Architecture/Linkis0.X-services-list.png       | Bin 0 -> 66821 bytes
 .../Linkis1.0-EngineConn-architecture.png          | Bin 0 -> 157753 bytes
 .../Linkis1.0-NewEngine-architecture.png           | Bin 0 -> 26523 bytes
 .../Images/Architecture/Linkis1.0-architecture.png | Bin 0 -> 212362 bytes
 .../Linkis1.0-newEngine-initialization.png         | Bin 0 -> 48313 bytes
 .../Architecture/Linkis1.0-services-list.png       | Bin 0 -> 85890 bytes
 .../Architecture/PublicEnhencementArchitecture.png | Bin 0 -> 47158 bytes
 .../ContextService/linkis-contextservice-01.png    | Bin 0 -> 22692 bytes
 .../ContextService/linkis-contextservice-02.png    | Bin 0 -> 10655 bytes
 .../linkis-contextservice-cache-01.png             | Bin 0 -> 11881 bytes
 .../linkis-contextservice-cache-02.png             | Bin 0 -> 23902 bytes
 .../linkis-contextservice-cache-03.png             | Bin 0 -> 109334 bytes
 .../linkis-contextservice-cache-04.png             | Bin 0 -> 36161 bytes
 .../linkis-contextservice-cache-05.png             | Bin 0 -> 2265 bytes
 .../linkis-contextservice-client-01.png            | Bin 0 -> 54438 bytes
 .../linkis-contextservice-client-02.png            | Bin 0 -> 93036 bytes
 .../linkis-contextservice-client-03.png            | Bin 0 -> 34839 bytes
 .../ContextService/linkis-contextservice-ha-01.png | Bin 0 -> 38439 bytes
 .../ContextService/linkis-contextservice-ha-02.png | Bin 0 -> 21982 bytes
 .../ContextService/linkis-contextservice-ha-03.png | Bin 0 -> 91788 bytes
 .../ContextService/linkis-contextservice-ha-04.png | Bin 0 -> 40733 bytes
 .../linkis-contextservice-listener-01.png          | Bin 0 -> 24414 bytes
 .../linkis-contextservice-listener-02.png          | Bin 0 -> 46152 bytes
 .../linkis-contextservice-listener-03.png          | Bin 0 -> 32597 bytes
 .../linkis-contextservice-persistence-01.png       | Bin 0 -> 198797 bytes
 .../linkis-contextservice-search-01.png            | Bin 0 -> 33731 bytes
 .../linkis-contextservice-search-02.png            | Bin 0 -> 26768 bytes
 .../linkis-contextservice-search-03.png            | Bin 0 -> 33312 bytes
 .../linkis-contextservice-search-04.png            | Bin 0 -> 25192 bytes
 .../linkis-contextservice-search-05.png            | Bin 0 -> 24757 bytes
 .../linkis-contextservice-search-06.png            | Bin 0 -> 29923 bytes
 .../linkis-contextservice-search-07.png            | Bin 0 -> 30013 bytes
 .../linkis-contextservice-service-01.png           | Bin 0 -> 56235 bytes
 .../linkis-contextservice-service-02.png           | Bin 0 -> 73463 bytes
 .../linkis-contextservice-service-03.png           | Bin 0 -> 23477 bytes
 .../linkis-contextservice-service-04.png           | Bin 0 -> 27387 bytes
 .../en_US/Images/Architecture/bml-02.png           | Bin 0 -> 55227 bytes
 .../Architecture/linkis-engineConnPlugin-01.png    | Bin 0 -> 21864 bytes
 .../en_US/Images/Architecture/linkis-intro-01.png  | Bin 0 -> 413878 bytes
 .../en_US/Images/Architecture/linkis-intro-02.png  | Bin 0 -> 355186 bytes
 .../Architecture/linkis-microservice-gov-01.png    | Bin 0 -> 109909 bytes
 .../Architecture/linkis-microservice-gov-03.png    | Bin 0 -> 83457 bytes
 .../Architecture/linkis-publicService-01.png       | Bin 0 -> 62443 bytes
 .../en_US/Images/EngineUsage/hive-config.png       | Bin 0 -> 86864 bytes
 .../en_US/Images/EngineUsage/hive-run.png          | Bin 0 -> 94294 bytes
 .../en_US/Images/EngineUsage/jdbc-conf.png         | Bin 0 -> 91609 bytes
 .../en_US/Images/EngineUsage/jdbc-run.png          | Bin 0 -> 56438 bytes
 .../en_US/Images/EngineUsage/pyspakr-run.png       | Bin 0 -> 124979 bytes
 .../en_US/Images/EngineUsage/python-config.png     | Bin 0 -> 92997 bytes
 .../en_US/Images/EngineUsage/python-run.png        | Bin 0 -> 89641 bytes
 .../en_US/Images/EngineUsage/queue-set.png         | Bin 0 -> 93935 bytes
 .../en_US/Images/EngineUsage/scala-run.png         | Bin 0 -> 125060 bytes
 .../en_US/Images/EngineUsage/shell-run.png         | Bin 0 -> 209553 bytes
 .../en_US/Images/EngineUsage/spark-conf.png        | Bin 0 -> 99930 bytes
 .../en_US/Images/EngineUsage/sparksql-run.png      | Bin 0 -> 121699 bytes
 .../en_US/Images/EngineUsage/workflow.png          | Bin 0 -> 151481 bytes
 .../en_US/Images/Linkis_1.0_architecture.png       | Bin 0 -> 316746 bytes
 .../Images/Tuning_and_Troubleshooting/Q&A.png      | Bin 0 -> 161638 bytes
 .../Tuning_and_Troubleshooting/code-fix-01.png     | Bin 0 -> 199523 bytes
 .../Tuning_and_Troubleshooting/db-config-01.png    | Bin 0 -> 391789 bytes
 .../Tuning_and_Troubleshooting/db-config-02.png    | Bin 0 -> 60334 bytes
 .../Images/Tuning_and_Troubleshooting/debug-01.png | Bin 0 -> 6168 bytes
 .../Images/Tuning_and_Troubleshooting/debug-02.png | Bin 0 -> 62496 bytes
 .../Images/Tuning_and_Troubleshooting/debug-03.png | Bin 0 -> 32875 bytes
 .../Images/Tuning_and_Troubleshooting/debug-04.png | Bin 0 -> 111758 bytes
 .../Images/Tuning_and_Troubleshooting/debug-05.png | Bin 0 -> 52040 bytes
 .../Images/Tuning_and_Troubleshooting/debug-06.png | Bin 0 -> 63668 bytes
 .../Images/Tuning_and_Troubleshooting/debug-07.png | Bin 0 -> 316176 bytes
 .../Images/Tuning_and_Troubleshooting/debug-08.png | Bin 0 -> 27722 bytes
 .../Tuning_and_Troubleshooting/hive-config-01.png  | Bin 0 -> 76327 bytes
 .../linkis-exception-01.png                        | Bin 0 -> 1199628 bytes
 .../linkis-exception-02.png                        | Bin 0 -> 1366293 bytes
 .../linkis-exception-03.png                        | Bin 0 -> 646836 bytes
 .../linkis-exception-04.png                        | Bin 0 -> 2965676 bytes
 .../linkis-exception-05.png                        | Bin 0 -> 454949 bytes
 .../linkis-exception-06.png                        | Bin 0 -> 869492 bytes
 .../linkis-exception-07.png                        | Bin 0 -> 2249882 bytes
 .../linkis-exception-08.png                        | Bin 0 -> 1191728 bytes
 .../linkis-exception-09.png                        | Bin 0 -> 1008341 bytes
 .../linkis-exception-10.png                        | Bin 0 -> 322110 bytes
 .../Tuning_and_Troubleshooting/page-show-01.png    | Bin 0 -> 115010 bytes
 .../Tuning_and_Troubleshooting/page-show-02.png    | Bin 0 -> 576911 bytes
 .../Tuning_and_Troubleshooting/page-show-03.png    | Bin 0 -> 654609 bytes
 .../searching_keywords.png                         | Bin 0 -> 102094 bytes
 .../Tuning_and_Troubleshooting/shell-error-01.png  | Bin 0 -> 74682 bytes
 .../Tuning_and_Troubleshooting/shell-error-02.png  | Bin 0 -> 330735 bytes
 .../Tuning_and_Troubleshooting/shell-error-03.png  | Bin 0 -> 1624375 bytes
 .../Tuning_and_Troubleshooting/shell-error-04.png  | Bin 0 -> 803920 bytes
 .../Tuning_and_Troubleshooting/shell-error-05.png  | Bin 0 -> 179543 bytes
 .../Tunning_And_Troubleshooting/debug-01.png       | Bin 0 -> 6168 bytes
 .../Tunning_And_Troubleshooting/debug-02.png       | Bin 0 -> 62496 bytes
 .../Tunning_And_Troubleshooting/debug-03.png       | Bin 0 -> 32875 bytes
 .../Tunning_And_Troubleshooting/debug-04.png       | Bin 0 -> 111758 bytes
 .../Tunning_And_Troubleshooting/debug-05.png       | Bin 0 -> 52040 bytes
 .../Tunning_And_Troubleshooting/debug-06.png       | Bin 0 -> 63668 bytes
 .../Tunning_And_Troubleshooting/debug-07.png       | Bin 0 -> 316176 bytes
 .../Tunning_And_Troubleshooting/debug-08.png       | Bin 0 -> 27722 bytes
 .../deployment/Linkis1.0_combined_eureka.png       | Bin 0 -> 134418 bytes
 .../en_US/Images/wedatasphere_contact_01.png       | Bin 0 -> 217762 bytes
 .../en_US/Images/wedatasphere_stack_Linkis.png     | Bin 0 -> 203466 bytes
 .../Tuning_and_Troubleshooting/Configuration.md    | 217 +++++++++++
 .../en_US/Tuning_and_Troubleshooting/Q&A.md        | 255 +++++++++++++
 .../en_US/Tuning_and_Troubleshooting/README.md     |  98 +++++
 .../en_US/Tuning_and_Troubleshooting/Tuning.md     |  61 ++++
 .../Linkis_Upgrade_from_0.x_to_1.0_guide.md        |  73 ++++
 .../en_US/Upgrade_Documents/README.md              |   5 +
 .../en_US/User_Manual/How_To_Use_Linkis.md         |  29 ++
 .../en_US/User_Manual/Linkis1.0_User_Manual.md     | 400 +++++++++++++++++++++
 .../en_US/User_Manual/LinkisCli_Usage_document.md  | 191 ++++++++++
 .../User_Manual/Linkis_Console_User_Manual.md      | 120 +++++++
 Linkis-Doc-master/en_US/User_Manual/README.md      |   8 +
 ...\350\241\214RestAPI\346\226\207\346\241\243.md" | 171 +++++++++
 .../zh_CN/API_Documentations/Login_API.md          | 131 +++++++
 .../zh_CN/API_Documentations/README.md             |   8 +
 ...350\241\214JDBC_API\346\226\207\346\241\243.md" |  46 +++
 .../Commons/messagescheduler.md                    |  15 +
 .../zh_CN/Architecture_Documents/Commons/rpc.md    |  17 +
 .../EngineConn/README.md                           |  98 +++++
 .../ECM\346\236\266\346\236\204\345\233\276.png"   | Bin 0 -> 34340 bytes
 ...57\267\346\261\202\346\265\201\347\250\213.png" | Bin 0 -> 25340 bytes
 .../EngineConnManager/README.md                    |  49 +++
 .../EngineConnPlugin/README.md                     |  71 ++++
 .../Entrance/Entrance.md                           |  26 ++
 .../LinkisClient/README.md                         |  35 ++
 .../LinkisManager/AppManager.md                    |  45 +++
 .../LinkisManager/LabelManager.md                  |  40 +++
 .../LinkisManager/README.md                        |  74 ++++
 .../LinkisManager/ResourceManager.md               | 145 ++++++++
 .../Computation_Governance_Services/README.md      |  66 ++++
 ...226\260\345\242\236\346\265\201\347\250\213.md" | 111 ++++++
 ...211\247\350\241\214\346\265\201\347\250\213.md" | 165 +++++++++
 ...214\272\345\210\253\347\256\200\350\277\260.md" |  98 +++++
 .../Microservice_Governance_Services/Gateway.md    |  30 ++
 .../Microservice_Governance_Services/README.md     |  23 ++
 .../Computation_Orchestrator_architecture.md       |  18 +
 ...16\245\345\217\243\345\222\214\347\261\273.png" | Bin 0 -> 27266 bytes
 ...72\244\344\272\222\346\265\201\347\250\213.png" | Bin 0 -> 30134 bytes
 ...16\245\345\217\243\345\222\214\347\261\273.png" | Bin 0 -> 162100 bytes
 .../Orchestrator/Orchestrator_CheckRuler.md        |  27 ++
 .../Orchestrator/Orchestrator_ECMP_architecture.md |  32 ++
 .../Orchestrator_Execution_architecture_doc.md     |  19 +
 .../Orchestrator_Operation_architecture_doc.md     |  26 ++
 .../Orchestrator_Reheater_architecture.md          |  12 +
 .../Orchestrator_Transform_architecture.md         |  12 +
 .../Orchestrator/Orchestrator_architecture_doc.md  | 113 ++++++
 .../Architecture_Documents/Orchestrator/README.md  |  55 +++
 .../Public_Enhancement_Services/BML.md             |  94 +++++
 .../ContextService/ContextService_Cache.md         |  95 +++++
 .../ContextService/ContextService_Client.md        |  61 ++++
 .../ContextService/ContextService_HighAvailable.md |  86 +++++
 .../ContextService/ContextService_Listener.md      |  33 ++
 .../ContextService/ContextService_Persistence.md   |   8 +
 .../ContextService/ContextService_Search.md        | 127 +++++++
 .../ContextService/ContextService_Service.md       |  55 +++
 .../ContextService/README.md                       | 124 +++++++
 .../Public_Enhancement_Services/DataSource.md      |   1 +
 .../Public_Enhancement_Services/PublicService.md   |  31 ++
 .../Public_Enhancement_Services/README.md          |  91 +++++
 .../zh_CN/Architecture_Documents/README.md         |  24 ++
 .../Deployment_Documents/Cluster_Deployment.md     | 100 ++++++
 ...256\211\350\243\205\346\226\207\346\241\243.md" | 106 ++++++
 ...75\262\345\276\256\346\234\215\345\212\241.png" | Bin 0 -> 130148 bytes
 .../Installation_Hierarchical_Structure.md         | 186 ++++++++++
 .../zh_CN/Deployment_Documents/README.md           |   1 +
 ...256\211\350\243\205\346\226\207\346\241\243.md" | 110 ++++++
 ...51\200\237\351\203\250\347\275\262Linkis1.0.md" | 256 +++++++++++++
 .../zh_CN/Development_Documents/Contributing.md    | 206 +++++++++++
 .../zh_CN/Development_Documents/DEBUG_LINKIS.md    | 113 ++++++
 .../Development_Specification/API.md               |  72 ++++
 .../Development_Specification/Concurrent.md        |   9 +
 .../Development_Specification/Exception_Catch.md   |   9 +
 .../Development_Specification/Exception_Throws.md  |  30 ++
 .../Development_Specification/Log.md               |  13 +
 .../Development_Specification/Path_Usage.md        |   8 +
 .../Development_Specification/README.md            |  12 +
 ...274\226\350\257\221\346\226\207\346\241\243.md" | 160 +++++++++
 .../New_EngineConn_Development.md                  |  79 ++++
 .../zh_CN/Development_Documents/README.md          |   1 +
 .../zh_CN/Development_Documents/Web/Build.md       |  84 +++++
 .../zh_CN/Development_MEETUP/Phase_One/README.md   |  56 +++
 .../zh_CN/Development_MEETUP/Phase_One/chapter1.md |   1 +
 .../zh_CN/Development_MEETUP/Phase_One/chapter2.md |   1 +
 .../Development_MEETUP/Phase_Two/Images/Q&A.png    | Bin 0 -> 161638 bytes
 .../Development_MEETUP/Phase_Two/Images/issue.png  | Bin 0 -> 102094 bytes
 .../Phase_Two/Images/\345\217\214\346\264\273.png" | Bin 0 -> 130148 bytes
 .../Images2/0ca28635de253f245743fbf0a7cfe165.png   | Bin 0 -> 98316 bytes
 .../Images2/146a58addcacbc560a33604b00636dee.png   | Bin 0 -> 44890 bytes
 .../Images2/1730acb1c4ff58a055fa71324e5c7f2c.png   | Bin 0 -> 95491 bytes
 .../Images2/1d31b398318acbd862f20ac05decbce9.png   | Bin 0 -> 7741 bytes
 .../Images2/1d8f043dae5afdf07371ad31b06bad6e.png   | Bin 0 -> 74243 bytes
 .../Images2/232983a712a949196159f0aeab7de7f5.png   | Bin 0 -> 150575 bytes
 .../Images2/2767bac623d10bf45033cf9fdd8d197f.png   | Bin 0 -> 120905 bytes
 .../Images2/335dabbf46b5af11e494cdd1be2c32a1.png   | Bin 0 -> 118394 bytes
 .../Images2/491e9a0fbd5b0121f228e0f7938cf168.png   | Bin 0 -> 120419 bytes
 .../Images2/781914abed8ec4955cac520eb0a1be7e.png   | Bin 0 -> 770399 bytes
 .../Images2/7b8685204636771776605bab99b08e8f.png   | Bin 0 -> 82550 bytes
 .../Images2/7cbe7cd81ce2212883741dd9b62dad18.png   | Bin 0 -> 36588 bytes
 .../Images2/8576fe8054c072a7fee53d98eeefa004.png   | Bin 0 -> 39623 bytes
 .../Images2/87ef54ccaa6b96abc30e612636bb2e90.png   | Bin 0 -> 103943 bytes
 .../Images2/9693ded0c6a9c32cb1ff33713e5d3864.png   | Bin 0 -> 54885 bytes
 .../Images2/9c254ec33125eb0ab50a6bcc0e95a18a.png   | Bin 0 -> 145675 bytes
 .../Images2/a0fb7e3474dff5c22fb3c230f73fa6f6.png   | Bin 0 -> 55052 bytes
 .../Images2/b68f441d7ac6b4814c048d35cebbb25d.png   | Bin 0 -> 117177 bytes
 .../Images2/b7feb36a0322b002f9f85f0a8003dcc1.png   | Bin 0 -> 169905 bytes
 .../Images2/ba90e28a78375103c4890cd448818ab3.png   | Bin 0 -> 132653 bytes
 .../Images2/c3f5ac1723ba9823084f529f5384440d.png   | Bin 0 -> 21078 bytes
 .../Images2/cd3ea323b238158c8a3de8acc8ec0a3f.png   | Bin 0 -> 20051 bytes
 .../Images2/d0fe37b4aa34b0cea9e87247b7b17943.png   | Bin 0 -> 115496 bytes
 .../Images2/d1b4759745056add53a32a76d3699109.png   | Bin 0 -> 23378 bytes
 .../Images2/d9bab9306cc28ecdf8d3679ecfc224d4.png   | Bin 0 -> 97351 bytes
 .../Images2/da0cf9cb7b27dac266435b5f6ad1cd82.png   | Bin 0 -> 45877 bytes
 .../Images2/de301f8f21c1735c5e018188d685ad74.png   | Bin 0 -> 53369 bytes
 .../Images2/e7e2a98ce1f03d228c7c2d782b076d53.png   | Bin 0 -> 81483 bytes
 .../Images2/f395c9cc338d85e258485658290bf365.png   | Bin 0 -> 43688 bytes
 .../Images2/f6fa083cab060a5adc9d483b37d040f5.png   | Bin 0 -> 60331 bytes
 .../Images2/fb952c266ce9a8db9b9036a602e222a7.png   | Bin 0 -> 131953 bytes
 .../zh_CN/Development_MEETUP/Phase_Two/README.md   |  58 +++
 .../zh_CN/Development_MEETUP/Phase_Two/chapter1.md | 371 +++++++++++++++++++
 .../zh_CN/Development_MEETUP/Phase_Two/chapter2.md | 251 +++++++++++++
 .../zh_CN/Development_MEETUP/README.md             |   1 +
 .../ElasticSearch_User_Manual.md                   |   1 +
 .../Hive_User_Manual.md                            |  81 +++++
 .../JDBC_User_Manual.md                            |  53 +++
 .../MLSQL_User_Manual.md                           |   1 +
 .../Presto_User_Manual.md                          |   1 +
 .../Python_User_Manual.md                          |  61 ++++
 .../zh_CN/Engine_Usage_Documentations/README.md    |  25 ++
 .../Shell_User_Manual.md                           |  57 +++
 .../Spark_User_Manual.md                           |  91 +++++
 .../zh_CN/Images/Architecture/AppManager-02.png    | Bin 0 -> 701283 bytes
 .../zh_CN/Images/Architecture/AppManager-03.png    | Bin 0 -> 69489 bytes
 .../Commons/linkis-message-scheduler.png           | Bin 0 -> 26987 bytes
 .../Images/Architecture/Commons/linkis-rpc.png     | Bin 0 -> 23403 bytes
 .../Architecture/EngineConn/engineconn-01.png      | Bin 0 -> 157753 bytes
 .../EngineConnPlugin/engine_conn_plugin_cycle.png  | Bin 0 -> 49326 bytes
 .../EngineConnPlugin/engine_conn_plugin_global.png | Bin 0 -> 32292 bytes
 .../EngineConnPlugin/engine_conn_plugin_load.png   | Bin 0 -> 74821 bytes
 ...26\260\345\242\236\346\265\201\347\250\213.png" | Bin 0 -> 59893 bytes
 .../Gateway/gateway_server_dispatcher.png          | Bin 0 -> 83743 bytes
 .../Architecture/Gateway/gateway_server_global.png | Bin 0 -> 85272 bytes
 .../Architecture/Gateway/gatway_websocket.png      | Bin 0 -> 37769 bytes
 .../Physical\346\240\221.png"                      | Bin 0 -> 79471 bytes
 ...56\265\346\265\201\347\250\213\345\233\276.png" | Bin 0 -> 31078 bytes
 ...56\265\346\265\201\347\250\213\345\233\276.png" | Bin 0 -> 12946 bytes
 ...16\267\345\217\226\346\265\201\347\250\213.png" | Bin 0 -> 41007 bytes
 ...16\222\346\265\201\347\250\213\345\233\276.png" | Bin 0 -> 31095 bytes
 ...75\223\346\265\201\347\250\213\345\233\276.png" | Bin 0 -> 231192 bytes
 .../LabelManager/label_manager_builder.png         | Bin 0 -> 62978 bytes
 .../LabelManager/label_manager_global.png          | Bin 0 -> 14988 bytes
 .../LabelManager/label_manager_scorer.png          | Bin 0 -> 72977 bytes
 .../Images/Architecture/Linkis1.0-architecture.png | Bin 0 -> 221751 bytes
 .../Architecture/LinkisManager/AppManager-01.png   | Bin 0 -> 69489 bytes
 .../Architecture/LinkisManager/LabelManager-01.png | Bin 0 -> 39221 bytes
 .../LinkisManager/LinkisManager-01.png             | Bin 0 -> 183082 bytes
 .../LinkisManager/ResourceManager-01.png           | Bin 0 -> 71086 bytes
 ...cement\346\236\266\346\236\204\345\233\276.png" | Bin 0 -> 47158 bytes
 .../ContextService/linkis-contextservice-01.png    | Bin 0 -> 22692 bytes
 .../ContextService/linkis-contextservice-02.png    | Bin 0 -> 10655 bytes
 .../linkis-contextservice-cache-01.png             | Bin 0 -> 11881 bytes
 .../linkis-contextservice-cache-02.png             | Bin 0 -> 23902 bytes
 .../linkis-contextservice-cache-03.png             | Bin 0 -> 109334 bytes
 .../linkis-contextservice-cache-04.png             | Bin 0 -> 36161 bytes
 .../linkis-contextservice-cache-05.png             | Bin 0 -> 2265 bytes
 .../linkis-contextservice-client-01.png            | Bin 0 -> 54438 bytes
 .../linkis-contextservice-client-02.png            | Bin 0 -> 93036 bytes
 .../linkis-contextservice-client-03.png            | Bin 0 -> 34839 bytes
 .../ContextService/linkis-contextservice-ha-01.png | Bin 0 -> 38439 bytes
 .../ContextService/linkis-contextservice-ha-02.png | Bin 0 -> 21982 bytes
 .../ContextService/linkis-contextservice-ha-03.png | Bin 0 -> 91788 bytes
 .../ContextService/linkis-contextservice-ha-04.png | Bin 0 -> 40733 bytes
 .../linkis-contextservice-listener-01.png          | Bin 0 -> 24414 bytes
 .../linkis-contextservice-listener-02.png          | Bin 0 -> 46152 bytes
 .../linkis-contextservice-listener-03.png          | Bin 0 -> 32597 bytes
 .../linkis-contextservice-persistence-01.png       | Bin 0 -> 198797 bytes
 .../linkis-contextservice-search-01.png            | Bin 0 -> 33731 bytes
 .../linkis-contextservice-search-02.png            | Bin 0 -> 26768 bytes
 .../linkis-contextservice-search-03.png            | Bin 0 -> 33312 bytes
 .../linkis-contextservice-search-04.png            | Bin 0 -> 25192 bytes
 .../linkis-contextservice-search-05.png            | Bin 0 -> 24757 bytes
 .../linkis-contextservice-search-06.png            | Bin 0 -> 29923 bytes
 .../linkis-contextservice-search-07.png            | Bin 0 -> 30013 bytes
 .../linkis-contextservice-service-01.png           | Bin 0 -> 56235 bytes
 .../linkis-contextservice-service-02.png           | Bin 0 -> 73463 bytes
 .../linkis-contextservice-service-03.png           | Bin 0 -> 23477 bytes
 .../linkis-contextservice-service-04.png           | Bin 0 -> 27387 bytes
 .../zh_CN/Images/Architecture/bml-01.png           | Bin 0 -> 78801 bytes
 .../zh_CN/Images/Architecture/bml-02.png           | Bin 0 -> 55227 bytes
 .../zh_CN/Images/Architecture/linkis-client-01.png | Bin 0 -> 88633 bytes
 .../Architecture/linkis-computation-gov-01.png     | Bin 0 -> 89527 bytes
 .../Architecture/linkis-computation-gov-02.png     | Bin 0 -> 179368 bytes
 .../Architecture/linkis-engineConnPlugin-01.png    | Bin 0 -> 21864 bytes
 .../Images/Architecture/linkis-entrance-01.png     | Bin 0 -> 33102 bytes
 .../zh_CN/Images/Architecture/linkis-intro-01.jpg  | Bin 0 -> 341150 bytes
 .../zh_CN/Images/Architecture/linkis-intro-02.jpg  | Bin 0 -> 289769 bytes
 .../Architecture/linkis-microservice-gov-01.png    | Bin 0 -> 89404 bytes
 .../Architecture/linkis-microservice-gov-03.png    | Bin 0 -> 60074 bytes
 .../linkis-computation-orchestrator-01.png         | Bin 0 -> 53527 bytes
 .../linkis-computation-orchestrator-02.png         | Bin 0 -> 77543 bytes
 .../orchestrator/execution/execution.png           | Bin 0 -> 29487 bytes
 .../orchestrator/execution/execution01.png         | Bin 0 -> 55090 bytes
 .../linkis_orchestrator_architecture.png           | Bin 0 -> 51935 bytes
 .../orchestrator/operation/operation_class.png     | Bin 0 -> 36916 bytes
 .../orchestrator/overall/Orchestrator01.png        | Bin 0 -> 38900 bytes
 .../orchestrator/overall/Orchestrator_Logical.png  | Bin 0 -> 46510 bytes
 .../orchestrator/overall/Orchestrator_Physical.png | Bin 0 -> 52228 bytes
 .../orchestrator/overall/Orchestrator_arc.png      | Bin 0 -> 32345 bytes
 .../orchestrator/overall/Orchestrator_ast.png      | Bin 0 -> 24733 bytes
 .../orchestrator/overall/Orchestrator_cache.png    | Bin 0 -> 96643 bytes
 .../orchestrator/overall/Orchestrator_command.png  | Bin 0 -> 29349 bytes
 .../overall/Orchestrator_computation.png           | Bin 0 -> 64070 bytes
 .../orchestrator/overall/Orchestrator_progress.png | Bin 0 -> 92726 bytes
 .../orchestrator/overall/Orchestrator_reheat.png   | Bin 0 -> 82286 bytes
 .../overall/Orchestrator_transication.png          | Bin 0 -> 63174 bytes
 .../orchestrator/overall/orchestrator_entity.png   | Bin 0 -> 29307 bytes
 .../reheater/linkis-orchestrator-reheater-01.png   | Bin 0 -> 22631 bytes
 .../transform/linkis-orchestrator-transform-01.png | Bin 0 -> 21241 bytes
 .../zh_CN/Images/Architecture/rm-01.png            | Bin 0 -> 183082 bytes
 .../zh_CN/Images/Architecture/rm-02.png            | Bin 0 -> 71086 bytes
 .../zh_CN/Images/Architecture/rm-03.png            | Bin 0 -> 52466 bytes
 .../zh_CN/Images/Architecture/rm-04.png            | Bin 0 -> 36324 bytes
 .../zh_CN/Images/Architecture/rm-05.png            | Bin 0 -> 34066 bytes
 .../zh_CN/Images/Architecture/rm-06.png            | Bin 0 -> 44105 bytes
 .../zh_CN/Images/EngineUsage/hive-config.png       | Bin 0 -> 127024 bytes
 .../zh_CN/Images/EngineUsage/hive-run.png          | Bin 0 -> 94294 bytes
 .../zh_CN/Images/EngineUsage/jdbc-conf.png         | Bin 0 -> 128381 bytes
 .../zh_CN/Images/EngineUsage/jdbc-run.png          | Bin 0 -> 56438 bytes
 .../zh_CN/Images/EngineUsage/pyspakr-run.png       | Bin 0 -> 124979 bytes
 .../zh_CN/Images/EngineUsage/python-config.png     | Bin 0 -> 129842 bytes
 .../zh_CN/Images/EngineUsage/python-run.png        | Bin 0 -> 89641 bytes
 .../zh_CN/Images/EngineUsage/queue-set.png         | Bin 0 -> 115340 bytes
 .../zh_CN/Images/EngineUsage/scala-run.png         | Bin 0 -> 125060 bytes
 .../zh_CN/Images/EngineUsage/shell-run.png         | Bin 0 -> 209553 bytes
 .../zh_CN/Images/EngineUsage/spark-conf.png        | Bin 0 -> 178501 bytes
 .../zh_CN/Images/EngineUsage/sparksql-run.png      | Bin 0 -> 121699 bytes
 .../zh_CN/Images/EngineUsage/workflow.png          | Bin 0 -> 151481 bytes
 .../zh_CN/Images/Introduction/introduction.png     | Bin 0 -> 90686 bytes
 .../Images/Tuning_and_Troubleshooting/Q&A.png      | Bin 0 -> 161638 bytes
 .../Tuning_and_Troubleshooting/code-fix-01.png     | Bin 0 -> 199523 bytes
 .../Tuning_and_Troubleshooting/db-config-01.png    | Bin 0 -> 391789 bytes
 .../Tuning_and_Troubleshooting/db-config-02.png    | Bin 0 -> 60334 bytes
 .../Images/Tuning_and_Troubleshooting/debug-01.png | Bin 0 -> 6168 bytes
 .../Images/Tuning_and_Troubleshooting/debug-02.png | Bin 0 -> 62496 bytes
 .../Images/Tuning_and_Troubleshooting/debug-03.png | Bin 0 -> 32875 bytes
 .../Images/Tuning_and_Troubleshooting/debug-04.png | Bin 0 -> 111758 bytes
 .../Images/Tuning_and_Troubleshooting/debug-05.png | Bin 0 -> 52040 bytes
 .../Images/Tuning_and_Troubleshooting/debug-06.png | Bin 0 -> 63668 bytes
 .../Images/Tuning_and_Troubleshooting/debug-07.png | Bin 0 -> 316176 bytes
 .../Images/Tuning_and_Troubleshooting/debug-08.png | Bin 0 -> 27722 bytes
 .../Tuning_and_Troubleshooting/hive-config-01.png  | Bin 0 -> 76327 bytes
 .../linkis-exception-01.png                        | Bin 0 -> 1199628 bytes
 .../linkis-exception-02.png                        | Bin 0 -> 1366293 bytes
 .../linkis-exception-03.png                        | Bin 0 -> 646836 bytes
 .../linkis-exception-04.png                        | Bin 0 -> 2965676 bytes
 .../linkis-exception-05.png                        | Bin 0 -> 454949 bytes
 .../linkis-exception-06.png                        | Bin 0 -> 869492 bytes
 .../linkis-exception-07.png                        | Bin 0 -> 2249882 bytes
 .../linkis-exception-08.png                        | Bin 0 -> 1191728 bytes
 .../linkis-exception-09.png                        | Bin 0 -> 1008341 bytes
 .../linkis-exception-10.png                        | Bin 0 -> 322110 bytes
 .../Tuning_and_Troubleshooting/page-show-01.png    | Bin 0 -> 115010 bytes
 .../Tuning_and_Troubleshooting/page-show-02.png    | Bin 0 -> 576911 bytes
 .../Tuning_and_Troubleshooting/page-show-03.png    | Bin 0 -> 654609 bytes
 .../searching_keywords.png                         | Bin 0 -> 102094 bytes
 .../Tuning_and_Troubleshooting/shell-error-01.png  | Bin 0 -> 74682 bytes
 .../Tuning_and_Troubleshooting/shell-error-02.png  | Bin 0 -> 330735 bytes
 .../Tuning_and_Troubleshooting/shell-error-03.png  | Bin 0 -> 1624375 bytes
 .../Tuning_and_Troubleshooting/shell-error-04.png  | Bin 0 -> 803920 bytes
 .../Tuning_and_Troubleshooting/shell-error-05.png  | Bin 0 -> 179543 bytes
 Linkis-Doc-master/zh_CN/Images/after_linkis_cn.png | Bin 0 -> 645519 bytes
 .../zh_CN/Images/before_linkis_cn.png              | Bin 0 -> 332201 bytes
 .../deployment/Linkis1.0_combined_eureka.png       | Bin 0 -> 134418 bytes
 Linkis-Doc-master/zh_CN/README.md                  |  87 +++++
 Linkis-Doc-master/zh_CN/SUMMARY.md                 |  69 ++++
 .../Tuning_and_Troubleshooting/Configuration.md    | 220 ++++++++++++
 .../zh_CN/Tuning_and_Troubleshooting/Q&A.md        | 257 +++++++++++++
 .../zh_CN/Tuning_and_Troubleshooting/README.md     | 112 ++++++
 .../zh_CN/Tuning_and_Troubleshooting/Tuning.md     |  50 +++
 ...\247\345\210\2601.0\346\214\207\345\215\227.md" |  73 ++++
 .../zh_CN/Upgrade_Documents/README.md              |   6 +
 .../zh_CN/User_Manual/How_To_Use_Linkis.md         |  20 ++
 ...74\225\346\223\216\344\277\241\346\201\257.png" | Bin 0 -> 89529 bytes
 ...56\241\347\220\206\347\225\214\351\235\242.png" | Bin 0 -> 43765 bytes
 ...74\226\350\276\221\347\225\214\351\235\242.png" | Bin 0 -> 64470 bytes
 ...63\250\345\206\214\344\270\255\345\277\203.png" | Bin 0 -> 327966 bytes
 ...37\245\350\257\242\346\214\211\351\222\256.png" | Bin 0 -> 81788 bytes
 ...16\206\345\217\262\347\225\214\351\235\242.png" | Bin 0 -> 82340 bytes
 ...17\230\351\207\217\347\225\214\351\235\242.png" | Bin 0 -> 40073 bytes
 ...11\247\350\241\214\346\227\245\345\277\227.png" | Bin 0 -> 114314 bytes
 ...05\215\347\275\256\347\225\214\351\235\242.png" | Bin 0 -> 79698 bytes
 ...56\241\347\220\206\347\225\214\351\235\242.png" | Bin 0 -> 39198 bytes
 ...72\224\347\224\250\347\261\273\345\236\213.png" | Bin 0 -> 108864 bytes
 ...74\225\346\223\216\344\277\241\346\201\257.png" | Bin 0 -> 41814 bytes
 ...20\206\345\221\230\350\247\206\345\233\276.png" | Bin 0 -> 80087 bytes
 ...74\226\350\276\221\347\233\256\345\275\225.png" | Bin 0 -> 89919 bytes
 ...56\241\347\220\206\347\225\214\351\235\242.png" | Bin 0 -> 49277 bytes
 ...275\277\347\224\250\346\226\207\346\241\243.md" | 193 ++++++++++
 ...275\277\347\224\250\346\226\207\346\241\243.md" | 389 ++++++++++++++++++++
 .../User_Manual/Linkis_Console_User_Manual.md      | 120 +++++++
 Linkis-Doc-master/zh_CN/User_Manual/README.md      |   8 +
 src/App.vue                                        |   4 +-
 src/assets/image/incubator-logo.png                | Bin 0 -> 17961 bytes
 src/docs/deploy/distributed_en.md                  |  99 ++++-
 src/docs/deploy/distributed_zh.md                  | 101 +++++-
 src/docs/deploy/engins_en.md                       |  83 ++++-
 src/docs/deploy/engins_zh.md                       | 107 +++++-
 src/docs/deploy/linkis_en.md                       | 247 ++++++++++++-
 src/docs/deploy/structure_en.md                    | 199 +++++++++-
 src/docs/deploy/structure_zh.md                    | 187 +++++++++-
 src/docs/manual/CliManual_en.md                    | 191 ++++++++++
 src/docs/manual/CliManual_zh.md                    | 193 ++++++++++
 src/docs/manual/ConsoleUserManual_en.md            | 120 +++++++
 src/docs/manual/ConsoleUserManual_zh.md            | 120 +++++++
 src/docs/manual/HowToUse_en.md                     |  29 ++
 src/docs/manual/HowToUse_zh.md                     |  20 ++
 src/docs/manual/UserManual_en.md                   | 400 +++++++++++++++++++++
 src/docs/manual/UserManual_zh.md                   | 389 ++++++++++++++++++++
 src/pages/docs/index.vue                           | 121 ++++---
 src/pages/docs/manual/CliManual.vue                |  13 +
 src/pages/docs/manual/ConsoleUserManual.vue        |  13 +
 src/pages/docs/manual/HowToUse.vue                 |  13 +
 src/pages/docs/manual/UserManual.vue               |  13 +
 src/pages/faq.vue                                  |   4 -
 src/pages/faq/faq_en.md                            | 255 +++++++++++++
 src/pages/faq/faq_zh.md                            | 257 +++++++++++++
 src/pages/faq/index.vue                            |  46 +++
 src/router.js                                      |  31 +-
 498 files changed, 15723 insertions(+), 63 deletions(-)

diff --git a/Linkis-Doc-master/LANGS.md b/Linkis-Doc-master/LANGS.md
new file mode 100644
index 0000000..5f72105
--- /dev/null
+++ b/Linkis-Doc-master/LANGS.md
@@ -0,0 +1,2 @@
+* [English](en_US)
+* [中文](zh_CN)
\ No newline at end of file
diff --git a/Linkis-Doc-master/README.md b/Linkis-Doc-master/README.md
new file mode 100644
index 0000000..bc802e0
--- /dev/null
+++ b/Linkis-Doc-master/README.md
@@ -0,0 +1,114 @@
+Linkis
+==========
+
+[![License](https://img.shields.io/badge/license-Apache%202-4EB1BA.svg)](https://www.apache.org/licenses/LICENSE-2.0.html)
+
+[English](README.md) | [中文](README_CN.md)
+
+# Introduction
+
+ Linkis builds a layer of computation middleware between upper applications and underlying engines. By using standard interfaces such as REST/WS/JDBC provided by Linkis, the upper applications can easily access the underlying engines such as MySQL/Spark/Hive/Presto/Flink, etc., and achieve the intercommunication of user resources like unified variables, scripts, UDFs, functions and resource files at the same time.
+
+As a computation middleware, Linkis provides powerful connectivity, reuse, orchestration, expansion, and governance capabilities. By decoupling the application layer and the engine layer, it simplifies the complex network call relationship, and thus reduces the overall complexity and saves the development and maintenance costs as well.
+
+Since its first release in 2019, Linkis has accumulated more than **700** trial companies and **1000+** sandbox trial users, spanning diverse industries such as finance, banking, telecommunications, manufacturing, and the internet. Many companies already use Linkis as the unified entrance to the underlying computation and storage engines of their big data platforms.
+
+
+![linkis-intro-01](https://user-images.githubusercontent.com/11496700/84615498-c3030200-aefb-11ea-9b16-7e4058bf6026.png)
+
+![linkis-intro-03](https://user-images.githubusercontent.com/11496700/84615483-bb435d80-aefb-11ea-81b5-67f62b156628.png)
+
+# Features
+
+- **Support for diverse underlying computation storage engines**.  
+    Currently supported computation/storage engines: Spark, Hive, Python, Presto, ElasticSearch, MLSQL, TiSpark, JDBC, Shell, etc;      
+    Computation/storage engines to be supported: Flink, Impala, etc;      
+    Supported scripting languages: SparkSQL, HiveQL, Python, Shell, Pyspark, R, Scala and JDBC, etc.  
+  
+- **Powerful task/request governance capabilities**. With services such as Orchestrator, Label Manager and a customized Spring Cloud Gateway, Linkis provides multi-level label-based, cross-cluster/cross-IDC fine-grained routing, load balancing, multi-tenancy, traffic control, resource control, and orchestration strategies such as dual-active and active-standby.  
+
+- **Full-stack computation/storage engine support**. As a computation middleware, Linkis receives, executes and manages tasks and requests for various computation and storage engines, including batch tasks, interactive query tasks, real-time streaming tasks and storage tasks;
+
+- **Resource management capabilities**.  ResourceManager is not only capable of managing resources for Yarn and Linkis EngineManager as in Linkis 0.X, but also provides label-based multi-level resource allocation and recycling, giving it powerful resource management capabilities across multiple Yarn clusters and multiple computation resource types;
+
+- **Unified Context Service**. Generates a context ID for each task/request, and associates and manages user and system resource files (JAR, ZIP, Properties, etc.), result sets, parameter variables, functions, etc. across users, systems, and computing engines. Set once, referenced automatically everywhere;
+
+- **Unified materials**. System and user-level unified material management, which can be shared and transferred across users and systems.
+
+# Supported engine types
+
+| **Engine** | **Supported Version** | **Linkis 0.X version requirement**| **Linkis 1.X version requirement** | **Description** |
+|:---- |:---- |:---- |:---- |:---- |
+|Flink |1.11.0|\>=dev-0.12.0, PR #703 not merged yet.|ongoing|	Flink EngineConn. Supports FlinkSQL code, and also supports starting a new Yarn application in the form of a Flink Jar.|
+|Impala|\>=3.2.0, CDH >=6.3.0|\>=dev-0.12.0, PR #703 not merged yet.|ongoing|Impala EngineConn. Supports Impala SQL.|
+|Presto|\>= 0.180|\>=0.11.0|ongoing|Presto EngineConn. Supports Presto SQL.|
+|ElasticSearch|\>=6.0|\>=0.11.0|ongoing|ElasticSearch EngineConn. Supports SQL and DSL code.|
+|Shell|Bash >=2.0|\>=0.9.3|\>=1.0.0_rc1|Shell EngineConn. Supports shell code.|
+|MLSQL|\>=1.1.0|\>=0.9.1|ongoing|MLSQL EngineConn. Supports MLSQL code.|
+|JDBC|MySQL >=5.0, Hive >=1.2.1|\>=0.9.0|\>=1.0.0_rc1|JDBC EngineConn. Supports MySQL and HiveQL code.|
+|Spark|Apache 2.0.0~2.4.7, CDH >=5.4.0|\>=0.5.0|\>=1.0.0_rc1|Spark EngineConn. Supports SQL, Scala, Pyspark and R code.|
+|Hive|Apache >=1.0.0, CDH >=5.4.0|\>=0.5.0|\>=1.0.0_rc1|Hive EngineConn. Supports HiveQL code.|
+|Hadoop|Apache >=2.6.0, CDH >=5.4.0|\>=0.5.0|ongoing|Hadoop EngineConn. Supports Hadoop MR/YARN application.|
+|Python|\>=2.6|\>=0.5.0|\>=1.0.0_rc1|Python EngineConn. Supports python code.|
+|TiSpark|1.1|\>=0.5.0|ongoing|TiSpark EngineConn. Supports querying TiDB data with SparkSQL.|
+
+# Download
+
+Please go to the [Linkis releases page](https://github.com/WeBankFinTech/Linkis/wiki/Linkis-Releases) to download a compiled distribution or a source code package of Linkis.
+
+# Compile and deploy
+Please follow [Compile Guide](https://github.com/WeBankFinTech/Linkis-Doc/blob/master/zh_CN/Development_Documents/Linkis%E7%BC%96%E8%AF%91%E6%96%87%E6%A1%A3.md) to compile Linkis from source code.  
+Please refer to [Deployment_Documents](https://github.com/WeBankFinTech/Linkis-Doc/tree/master/zh_CN/Deployment_Documents) for deployment instructions. 
+
+# Examples and Guidance
+You can find examples and guidance for how to use and manage Linkis in [User_Manual](https://github.com/WeBankFinTech/Linkis-Doc/tree/master/zh_CN/User_Manual), [Engine_Usage_Documents](https://github.com/WeBankFinTech/Linkis-Doc/tree/master/zh_CN/Engine_Usage_Documentations) and [API_Documents](https://github.com/WeBankFinTech/Linkis-Doc/tree/master/zh_CN/API_Documentations).
+
+# Documentation
+
+The documentation of Linkis is available in [Linkis-Doc](https://github.com/WeBankFinTech/Linkis-Doc) or on the [wiki](https://github.com/WeBankFinTech/Linkis/wiki).
+
+# Architecture
+Linkis services can be divided into three categories: computation governance services, public enhancement services and microservice governance services.  
+- Computation governance services support the three major stages of processing a task/request: submission -> preparation -> execution;  
+- Public enhancement services include the material library service, context service, and data source service;  
+- Microservice governance services include Spring Cloud Gateway, Eureka and Open Feign.
+
+Below is the Linkis architecture diagram. You can find more detailed architecture docs in [Linkis-Doc/Architecture](https://github.com/WeBankFinTech/Linkis-Doc/tree/master/zh_CN/Architecture_Documents).
+![architecture](en_US/Images/Linkis_1.0_architecture.png)
+
+Based on Linkis as the computation middleware, we have built many applications and tools on top of it in the big data platform suite [WeDataSphere](https://github.com/WeBankFinTech/WeDataSphere). Below are the currently available open-source projects.
+
+![wedatasphere_stack_Linkis](en_US/Images/wedatasphere_stack_Linkis.png)
+
+- [**DataSphere Studio** - Data Application Integration & Development Framework](https://github.com/WeBankFinTech/DataSphereStudio)
+
+- [**Scriptis** - Data Development IDE Tool](https://github.com/WeBankFinTech/Scriptis)
+
+- [**Visualis** - Data Visualization Tool](https://github.com/WeBankFinTech/Visualis)
+
+- [**Schedulis** - Workflow Task Scheduling Tool](https://github.com/WeBankFinTech/Schedulis)
+
+- [**Qualitis** - Data Quality Tool](https://github.com/WeBankFinTech/Qualitis)
+
+- [**MLLabis** - Machine Learning Notebook IDE](https://github.com/WeBankFinTech/prophecis)
+
+More projects upcoming, please stay tuned.
+
+# Contributing
+
+Contributions are always welcome; we need more contributors to build Linkis together, whether through code, documentation, or other support that helps the community.  
+For code and documentation contributions, please follow the [contribution guide](https://github.com/WeBankFinTech/Linkis/blob/master/Contributing_CN.md).
+
+# Contact Us
+
+For any questions or suggestions, please submit an issue.  
+You can scan the QR code below to join our WeChat and QQ groups for a quicker response.
+
+![introduction05](en_US/Images/wedatasphere_contact_01.png)
+
+Meetup videos on [Bilibili](https://space.bilibili.com/598542776?from=search&seid=14344213924133040656).
+
+# Who is Using Linkis
+
+We opened [an issue](https://github.com/WeBankFinTech/Linkis/issues/23) for users to give feedback and record who is using Linkis.  
+Since its first release in 2019, Linkis has accumulated more than **700** trial companies and **1000+** sandbox trial users, spanning diverse industries such as finance, banking, telecommunications, manufacturing, and the internet.
\ No newline at end of file
diff --git a/Linkis-Doc-master/README_CN.md b/Linkis-Doc-master/README_CN.md
new file mode 100644
index 0000000..e926d6e
--- /dev/null
+++ b/Linkis-Doc-master/README_CN.md
@@ -0,0 +1,105 @@
+Linkis
+============
+
+[![License](https://img.shields.io/badge/license-Apache%202-4EB1BA.svg)](https://www.apache.org/licenses/LICENSE-2.0.html)
+
+[English](README.md) | [中文](README_CN.md)
+
+# 介绍
+
+Linkis 在上层应用程序和底层引擎之间构建了一层计算中间件。通过使用Linkis 提供的REST/WebSocket/JDBC 等标准接口,上层应用可以方便地连接访问MySQL/Spark/Hive/Presto/Flink 等底层引擎,同时实现变量、脚本、函数和资源文件等用户资源的跨上层应用互通。  
+作为计算中间件,Linkis 提供了强大的连通、复用、编排、扩展和治理管控能力。通过计算中间件将应用层和引擎层解耦,简化了复杂的网络调用关系,降低了整体复杂度,同时节约了整体开发和维护成本。  
+Linkis 自2019年开源发布以来,已累计积累了700多家试验企业和1000+沙盒试验用户,涉及金融、电信、制造、互联网等多个行业。许多公司已经将Linkis 作为大数据平台底层计算存储引擎的统一入口,和计算请求/任务的治理管控利器。
+
+![没有Linkis 之前](zh_CN/Images/before_linkis_cn.png)
+
+![有了Linkis 之后](zh_CN/Images/after_linkis_cn.png)
+
+# 核心特点
+
+- **丰富的底层计算存储引擎支持**。  
+    **目前支持的计算存储引擎**:Spark、Hive、Python、Presto、ElasticSearch、MLSQL、TiSpark、JDBC和Shell等。  
+    **正在支持中的计算存储引擎**:Flink、Impala等。  
+    **支持的脚本语言**:SparkSQL, HiveQL, Python, Shell, Pyspark, R, Scala 和JDBC 等。    
+- **强大的计算治理能力**。基于Orchestrator、Label Manager和定制的Spring Cloud Gateway等服务,Linkis能够提供基于多级标签的跨集群/跨IDC 细粒度路由、负载均衡、多租户、流量控制、资源控制和编排策略(如双活、主备等)支持能力。  
+- **全栈计算存储引擎架构支持**。能够接收、执行和管理针对各种计算存储引擎的任务和请求,包括离线批量任务、交互式查询任务、实时流式任务和存储型任务;
+- **资源管理能力**。 ResourceManager 不仅具备 Linkis0.X 对 Yarn 和 Linkis EngineManager 的资源管理能力,还将提供基于标签的多级资源分配和回收能力,让 ResourceManager 具备跨集群、跨计算资源类型的强大资源管理能力。
+- **统一上下文服务**。为每个计算任务生成context id,跨用户、系统、计算引擎的关联管理用户和系统资源文件(JAR、ZIP、Properties等),结果集,参数变量,函数等,一处设置,处处自动引用;
+- **统一物料**。系统和用户级物料管理,可分享和流转,跨用户、系统共享物料。
+
+# 支持的引擎类型
+
+| **引擎** | **引擎版本** | **Linkis 0.X 版本要求**| **Linkis 1.X 版本要求** | **说明** |
+|:---- |:---- |:---- |:---- |:---- |
+|Flink |1.11.0|\>=dev-0.12.0, PR #703 尚未合并|ongoing|	Flink EngineConn。支持FlinkSQL 代码,也支持以Flink Jar 形式启动一个新的Yarn 应用程序。|
+|Impala|\>=3.2.0, CDH >=6.3.0|\>=dev-0.12.0, PR #703 尚未合并|ongoing|Impala EngineConn. 支持Impala SQL 代码.|
+|Presto|\>= 0.180|\>=0.11.0|ongoing|Presto EngineConn. 支持Presto SQL 代码.|
+|ElasticSearch|\>=6.0|\>=0.11.0|ongoing|ElasticSearch EngineConn. 支持SQL 和DSL 代码.|
+|Shell|Bash >=2.0|\>=0.9.3|\>=1.0.0_rc1|Shell EngineConn. 支持Bash shell 代码.|
+|MLSQL|\>=1.1.0|\>=0.9.1|ongoing|MLSQL EngineConn. 支持MLSQL 代码.|
+|JDBC|MySQL >=5.0, Hive >=1.2.1|\>=0.9.0|\>=1.0.0_rc1|JDBC EngineConn. 已支持MySQL 和HiveQL,可快速扩展支持其他有JDBC Driver 包的引擎, 如Oracle.
+|Spark|Apache 2.0.0~2.4.7, CDH >=5.4.0|\>=0.5.0|\>=1.0.0_rc1|Spark EngineConn. 支持SQL, Scala, Pyspark 和R 代码.|
+|Hive|Apache >=1.0.0, CDH >=5.4.0|\>=0.5.0|\>=1.0.0_rc1|Hive EngineConn. 支持HiveQL 代码.|
+|Hadoop|Apache >=2.6.0, CDH >=5.4.0|\>=0.5.0|ongoing|Hadoop EngineConn. 支持Hadoop MR/YARN application.|
+|Python|\>=2.6|\>=0.5.0|\>=1.0.0_rc1|Python EngineConn. 支持python 代码.|
+|TiSpark|1.1|\>=0.5.0|ongoing|TiSpark EngineConn. 支持用SparkSQL 查询TiDB.|
+
+# 下载
+
+请前往[Linkis releases 页面](https://github.com/WeBankFinTech/Linkis/wiki/Linkis-Releases) 下载Linkis 的已编译版本或源码包。
+
+# 编译和安装部署
+请参照[编译指引](https://github.com/WeBankFinTech/Linkis-Doc/blob/master/zh_CN/Development_Documents/Linkis%E7%BC%96%E8%AF%91%E6%96%87%E6%A1%A3.md) 来编译Linkis 源码。  
+请参考[安装部署文档](https://github.com/WeBankFinTech/Linkis-Doc/tree/master/zh_CN/Deployment_Documents) 来部署Linkis。
+
+# 示例和使用指引
+请到 [用户手册](https://github.com/WeBankFinTech/Linkis-Doc/tree/master/zh_CN/User_Manual), [各引擎使用指引](https://github.com/WeBankFinTech/Linkis-Doc/tree/master/zh_CN/Engine_Usage_Documentations) 和[API 文档](https://github.com/WeBankFinTech/Linkis-Doc/tree/master/zh_CN/API_Documentations) 中,查看如何使用和管理Linkis 的示例和指引。
+
+# 文档
+
+完整的Linkis 文档参见[Linkis-Doc](https://github.com/WeBankFinTech/Linkis-Doc) 或[wiki](https://github.com/WeBankFinTech/Linkis/wiki).  
+
+# 架构概要
+Linkis 基于微服务架构开发,其服务可以分为3类:计算治理服务、公共增强服务和微服务治理服务。  
+- 计算治理服务,支持计算任务/请求处理流程的3个主要阶段:提交->准备->执行;
+- 公共增强服务,包括上下文服务、物料管理服务及数据源服务等;
+- 微服务治理服务,包括定制化的Spring Cloud Gateway、Eureka、Open Feign。
+
+下面是Linkis 的架构概要图. 更多详细架构文档请见 [Linkis-Doc/Architecture](https://github.com/WeBankFinTech/Linkis-Doc/tree/master/zh_CN/Architecture_Documents).
+![architecture](en_US/Images/Linkis_1.0_architecture.png)
+
+基于Linkis 计算中间件,我们在大数据平台套件[WeDataSphere](https://github.com/WeBankFinTech/WeDataSphere) 中构建了许多应用和工具系统。下面是目前可用的开源项目。
+
+![wedatasphere_stack_Linkis](en_US/Images/wedatasphere_stack_Linkis.png)
+
+- [**DataSphere Studio** - 数据应用集成开发框架](https://github.com/WeBankFinTech/DataSphereStudio)
+
+- [**Scriptis** - 数据研发IDE工具](https://github.com/WeBankFinTech/Scriptis)
+
+- [**Visualis** - 数据可视化工具](https://github.com/WeBankFinTech/Visualis)
+
+- [**Schedulis** - 工作流调度工具](https://github.com/WeBankFinTech/Schedulis)
+
+- [**Qualitis** - 数据质量工具](https://github.com/WeBankFinTech/Qualitis)
+
+- [**MLLabis** - 容器化机器学习notebook 开发环境](https://github.com/WeBankFinTech/prophecis)
+
+更多项目开源准备中,敬请期待。
+
+# 贡献
+
+我们非常欢迎和期待更多的贡献者参与共建Linkis, 不论是代码、文档,或是其他能够帮助到社区的贡献形式。  
+代码和文档相关的贡献请参照[贡献指引](https://github.com/WeBankFinTech/Linkis/blob/master/Contributing_CN.md).
+
+# 联系我们
+
+对Linkis 的任何问题和建议,敬请提交issue,以便跟踪处理和经验沉淀共享。  
+您也可以扫描下面的二维码,加入我们的微信/QQ群,以获得更快速的响应。
+![introduction05](en_US/Images/wedatasphere_contact_01.png)
+
+Meetup 视频 [Bilibili](https://space.bilibili.com/598542776?from=search&seid=14344213924133040656).
+
+# 谁在使用Linkis
+
+我们创建了[一个 issue](https://github.com/WeBankFinTech/Linkis/issues/23) 以便用户反馈和记录谁在使用Linkis.  
+Linkis 自2019年开源发布以来,累计已有700多家试验企业和1000+沙盒试验用户,涉及金融、电信、制造、互联网等多个行业。
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/API_Documentations/JDBC_API_Document.md b/Linkis-Doc-master/en_US/API_Documentations/JDBC_API_Document.md
new file mode 100644
index 0000000..72b3f3a
--- /dev/null
+++ b/Linkis-Doc-master/en_US/API_Documentations/JDBC_API_Document.md
@@ -0,0 +1,45 @@
+# Task Submission and Execution via the JDBC API
+### 1. Introduce the Dependent Module
+The first way is to declare a dependency on the JDBC module in your pom:  
+```xml
+<dependency>
+    <groupId>com.webank.wedatasphere.linkis</groupId>
+    <artifactId>linkis-ujes-jdbc</artifactId>
+    <version>${linkis.version}</version>
+ </dependency>
+```  
+**Note:** The module has not been published to the central Maven repository. You need to run `mvn install -Dmaven.test.skip=true` in the ujes/jdbc directory to install it locally.
+
+**The second way is through packaging and compilation:**
+1. Enter the ujes/jdbc directory of the Linkis project and run `mvn assembly:assembly -Dmaven.test.skip=true` in the terminal to build the package.
+This command skips running the unit tests and compiling the test code, and packages the dependencies required by the JDBC module into the Jar.  
+2. After packaging completes, two Jars are generated in the target directory of the JDBC module. The one whose name contains `dependencies` is the Jar we need.  
+### 2. Create a Test Class
+Create a Java test class LinkisClientImplTestJ; the meaning of each step is explained in the comments:  
+```java
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.ResultSet;
+import java.sql.ResultSetMetaData;
+import java.sql.SQLException;
+import java.sql.Statement;
+
+public class LinkisClientImplTestJ {
+    public static void main(String[] args) throws SQLException, ClassNotFoundException {
+
+        //1. Load the driver class: com.webank.wedatasphere.linkis.ujes.jdbc.UJESSQLDriver
+        Class.forName("com.webank.wedatasphere.linkis.ujes.jdbc.UJESSQLDriver");
+
+        //2. Get a connection: jdbc:linkis://gatewayIP:gatewayPort
+        //   username/password are the front-end (Linkis console) account and password
+        Connection connection = DriverManager.getConnection("jdbc:linkis://127.0.0.1:9001", "username", "password");
+
+        //3. Create a statement and execute the query
+        Statement st = connection.createStatement();
+        ResultSet rs = st.executeQuery("show tables");
+
+        //4. Process the returned results (using the ResultSet class)
+        while (rs.next()) {
+            ResultSetMetaData metaData = rs.getMetaData();
+            for (int i = 1; i <= metaData.getColumnCount(); i++) {
+                System.out.print(metaData.getColumnName(i) + ":" + metaData.getColumnTypeName(i) + ": " + rs.getObject(i) + "    ");
+            }
+            System.out.println();
+        }
+
+        //5. Close resources
+        rs.close();
+        st.close();
+        connection.close();
+    }
+}
+```
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/API_Documentations/Linkis_task_submission_and_execution_RestAPI_document.md b/Linkis-Doc-master/en_US/API_Documentations/Linkis_task_submission_and_execution_RestAPI_document.md
new file mode 100644
index 0000000..a7fb568
--- /dev/null
+++ b/Linkis-Doc-master/en_US/API_Documentations/Linkis_task_submission_and_execution_RestAPI_document.md
@@ -0,0 +1,170 @@
+# Linkis Task Submission and Execution REST API Document
+
+- Responses from the Linkis RESTful interfaces follow this standard format:
+
+```json
+{
+ "method": "",
+ "status": 0,
+ "message": "",
+ "data": {}
+}
+```
+
+**Convention**:
+
+ - method: the URI of the requested RESTful API, mainly used in WebSocket mode.
+ - status: the status code, where -1 means not logged in, 0 means success, 1 means error, 2 means validation failed, and 3 means no access to the interface.
+ - data: the specific data returned.
+ - message: the prompt message for the request. If status is not 0, message is an error message, and data may contain a stack field with detailed stack information.
+ 
+For more information about the Linkis Restful interface specification, please refer to: [Linkis Restful Interface Specification](https://github.com/WeBankFinTech/Linkis-Doc/blob/master/zh_CN/Development_Documents/Development_Specification/API.md)
+
+### 1). Submit for execution
+
+- Interface `/api/rest_j/v1/entrance/execute`
+
+- Submission method `POST`
+
+```json
+{
+    "executeApplicationName": "hive", //Engine type
+    "requestApplicationName": "dss", //Client service type
+    "executionCode": "show tables",
+    "params": {"variable": {}, "configuration": {}},
+    "runType": "hql", //The type of script to run
+    "source": {"scriptPath":"file:///tmp/hadoop/1.hql"}
+}
+```
+
+- Interface `/api/rest_j/v1/entrance/submit`
+
+- Submission method `POST`
+
+```json
+{
+    "executionContent": {"code": "show tables", "runType": "sql"},
+    "params": {"variable": {}, "configuration": {}},
+    "source": {"scriptPath": "file:///mnt/bdp/hadoop/1.hql"},
+    "labels": {
+        "engineType": "spark-2.4.3",
+        "userCreator": "hadoop-IDE"
+    }
+}
+```
+
+
+- Response example
+
+```json
+{
+ "method": "/api/rest_j/v1/entrance/execute",
+ "status": 0,
+ "message": "Request executed successfully",
+ "data": {
+   "execID": "030418IDEhivebdpdwc010004:10087IDE_hadoop_21",
+   "taskID": "123"
+ }
+}
+```
+
+- execID is the unique execution ID generated for the task after it is submitted to Linkis. It is of type String. This ID is only useful while the task is running, similar to a PID. The format of execID is `(requestApplicationName length)(executeAppName length)(Instance length)${requestApplicationName}${executeApplicationName}${entranceInstance information ip+port}${requestApplicationName}_${umUser}_${index}`
+
+- taskID is the unique ID of the task submitted by the user. It is generated by database auto-increment and is of type Long.
+
+
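+For illustration, the following sketch submits a task to the `submit` interface with curl. It assumes the Linkis gateway is reachable at 127.0.0.1:9001 and that a login session cookie has already been saved to cookies.txt (see the Login API document); the label values are taken from the example above.
+
+```bash
+# Hypothetical submission of "show tables" through the entrance submit interface.
+curl -s -X POST "http://127.0.0.1:9001/api/rest_j/v1/entrance/submit" \
+  -H "Content-Type: application/json" \
+  -b cookies.txt \
+  -d '{
+        "executionContent": {"code": "show tables", "runType": "sql"},
+        "params": {"variable": {}, "configuration": {}},
+        "labels": {"engineType": "spark-2.4.3", "userCreator": "hadoop-IDE"}
+      }'
+# On success, data.execID and data.taskID in the response are used by the interfaces below.
+```
+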
+### 2). Get status
+
+- Interface `/api/rest_j/v1/entrance/${execID}/status`
+
+- Submission method `GET`
+
+- Response example
+
+```json
+{
+ "method": "/api/rest_j/v1/entrance/{execID}/status",
+ "status": 0,
+ "message": "Get status successful",
+ "data": {
+   "execID": "${execID}",
+   "status": "Running"
+ }
+}
+```
+
+### 3). Get logs
+
+- Interface `/api/rest_j/v1/entrance/${execID}/log?fromLine=${fromLine}&size=${size}`
+
+- Submission method `GET`
+
+- The request parameter fromLine specifies the line from which to start reading, and size specifies the maximum number of log lines returned by this request
+
+- Return example. The returned fromLine should be passed as the fromLine parameter of the next request to this interface
+
+```json
+{
+  "method": "/api/rest_j/v1/entrance/${execID}/log",
+  "status": 0,
+  "message": "Return log information",
+  "data": {
+    "execID": "${execID}",
+  "log": ["error log","warn log","info log", "all log"],
+  "fromLine": 56
+  }
+}
+```
+
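+Because the returned `fromLine` is meant to be passed back as the `fromLine` of the next call, fetching logs is typically a polling loop. The sketch below is illustrative only: it reuses the placeholder gateway address and session cookie from the submission example and extracts `fromLine` with a regular expression instead of a JSON library.
+
+```java
+import java.net.URI;
+import java.net.http.HttpClient;
+import java.net.http.HttpRequest;
+import java.net.http.HttpResponse;
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
+
+public class TailTaskLog {
+    private static final Pattern FROM_LINE = Pattern.compile("\"fromLine\"\\s*:\\s*(\\d+)");
+
+    public static void main(String[] args) throws Exception {
+        String gateway = "http://127.0.0.1:9001";                        // placeholder
+        String cookie = "<session cookie obtained from the Login API>";  // placeholder
+        String execID = args[0];                                         // execID returned by the submit interface
+
+        HttpClient client = HttpClient.newHttpClient();
+        int fromLine = 0;
+        for (int i = 0; i < 10; i++) {                                   // poll a few times for illustration
+            HttpRequest request = HttpRequest.newBuilder()
+                    .uri(URI.create(gateway + "/api/rest_j/v1/entrance/" + execID
+                            + "/log?fromLine=" + fromLine + "&size=100"))
+                    .header("Cookie", cookie)
+                    .GET()
+                    .build();
+            String body = client.send(request, HttpResponse.BodyHandlers.ofString()).body();
+            System.out.println(body);                                    // contains the error/warn/info/all log arrays
+
+            Matcher m = FROM_LINE.matcher(body);
+            if (m.find()) {
+                fromLine = Integer.parseInt(m.group(1));                 // fromLine of the next request
+            }
+            Thread.sleep(2000);
+        }
+    }
+}
+```
+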
+### 4). Get progress
+
+- Interface `/api/rest_j/v1/entrance/${execID}/progress`
+
+- Submission method `GET`
+
+- Return example
+
+```json
+{
+  "method": "/api/rest_j/v1/entrance/{execID}/progress",
+  "status": 0,
+  "message": "Return progress information",
+  "data": {
+    "execID": "${execID}",
+    "progress": 0.2,
+    "progressInfo": [
+        {
+        "id": "job-1",
+        "succeedTasks": 2,
+        "failedTasks": 0,
+        "runningTasks": 5,
+        "totalTasks": 10
+        },
+        {
+        "id": "job-2",
+        "succeedTasks": 5,
+        "failedTasks": 0,
+        "runningTasks": 5,
+        "totalTasks": 10
+        }
+    ]
+  }
+}
+```
+
+### 5). Kill task
+
+- Interface `/api/rest_j/v1/entrance/${execID}/kill`
+
+- Submission method `POST`
+
+- Return example
+
+```json
+{
+ "method": "/api/rest_j/v1/entrance/{execID}/kill",
+ "status": 0,
+ "message": "OK",
+ "data": {
+   "execID":"${execID}"
+  }
+}
+```
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/API_Documentations/Login_API.md b/Linkis-Doc-master/en_US/API_Documentations/Login_API.md
new file mode 100644
index 0000000..be7e504
--- /dev/null
+++ b/Linkis-Doc-master/en_US/API_Documentations/Login_API.md
@@ -0,0 +1,125 @@
+# Login Document
+## 1. Connecting To An LDAP Service
+
+Enter the /conf/linkis-spring-cloud-services/linkis-mg-gateway directory and execute the command:  
+```bash
+    vim linkis-server.properties
+```    
+
+Add LDAP related configuration:  
+```bash
+wds.linkis.ldap.proxy.url=ldap://127.0.0.1:389/ #LDAP service URL
+wds.linkis.ldap.proxy.baseDN=dc=webank,dc=com #LDAP service baseDN    
+```    
+
+## 2. How To Enable Test Mode (Login-Free Access)
+
+Enter the /conf/linkis-spring-cloud-services/linkis-mg-gateway directory and execute the command:
+```bash
+    vim linkis-server.properties
+```
+
+Enable test mode with the following parameters:
+```bash
+    wds.linkis.test.mode=true   # Open test mode
+    wds.linkis.test.user=hadoop  # Specify which user to delegate all requests to in test mode
+```
+
+## 3. Login Interface Summary
+We provide the following login-related interfaces:
+ - [Login In](#1LoginIn)
+
+ - [Login Out](#2LoginOut)
+
+ - [Heart Beat](#3HeartBeat)
+ 
+
+## 4. Interface details
+
+- The return of the Linkis Restful interface follows the following standard return format:
+
+```json
+{
+ "method": "",
+ "status": 0,
+ "message": "",
+ "data": {}
+}
+```
+
+**Protocol**:
+
+- method: Returns the requested Restful API URI, which is mainly used in WebSocket mode.
+- status: the returned status, where -1 means not logged in, 0 means success, 1 means error, 2 means validation failed, and 3 means no access to the interface.
+- data: the specific returned data.
+- message: the prompt message of the request. If the status is not 0, the message is an error message, and data may contain a stack field with the specific stack information.
+ 
+For more information about the Linkis Restful interface specification, please refer to: [Linkis Restful Interface Specification](https://github.com/WeBankFinTech/Linkis-Doc/blob/master/en_US/Development_Documents/Development_Specification/API.md)
+
+### 1). Login In
+
+- Interface `/api/rest_j/v1/user/login`
+
+- Submission method `POST`
+
+```json
+      {
+        "userName": "",
+        "password": ""
+      }
+```
+
+- Return example
+
+```json
+    {
+        "method": null,
+        "status": 0,
+        "message": "login successful(登录成功)!",
+        "data": {
+            "isAdmin": false,
+            "userName": ""
+        }
+     }
+```
+
+Among them:
+
+- isAdmin: Linkis has only admin and non-admin users. The only privilege of an admin user is to view the historical tasks of all users in the Linkis management console.
+
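+A minimal Java sketch of calling the login interface and capturing the session cookies for later requests is shown below. It is not an official client: the gateway address and credentials are placeholders, and since the cookie name may differ across deployments, the sketch simply prints whatever `Set-Cookie` headers the gateway returns so they can be forwarded in the `Cookie` header of subsequent requests.
+
+```java
+import java.net.URI;
+import java.net.http.HttpClient;
+import java.net.http.HttpRequest;
+import java.net.http.HttpResponse;
+import java.util.List;
+
+public class LoginExample {
+    public static void main(String[] args) throws Exception {
+        String gateway = "http://127.0.0.1:9001";                                    // placeholder gateway address
+        String body = "{\"userName\": \"hadoop\", \"password\": \"your-password\"}"; // placeholder credentials
+
+        HttpRequest request = HttpRequest.newBuilder()
+                .uri(URI.create(gateway + "/api/rest_j/v1/user/login"))
+                .header("Content-Type", "application/json")
+                .POST(HttpRequest.BodyPublishers.ofString(body))
+                .build();
+
+        HttpResponse<String> response = HttpClient.newHttpClient()
+                .send(request, HttpResponse.BodyHandlers.ofString());
+        System.out.println(response.body());    // status 0 on success, see the return example above
+
+        // Reuse the returned cookies (the login session) in the Cookie header of later requests.
+        List<String> cookies = response.headers().allValues("set-cookie");
+        cookies.forEach(System.out::println);
+    }
+}
+```
+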
+### 2). Login Out
+
+- Interface `/api/rest_j/v1/user/logout`
+
+- Submission method `POST`
+
+  No parameters
+
+- Return example
+
+```json
+    {
+        "method": "/api/rest_j/v1/user/logout",
+        "status": 0,
+        "message": "退出登录成功!"
+    }
+```
+
+### 3). Heart Beat
+
+- Interface `/api/rest_j/v1/user/heartbeat`
+
+- Submission method `POST`
+
+  No parameters
+
+- Return example
+
+```json
+    {
+         "method": "/api/rest_j/v1/user/heartbeat",
+         "status": 0,
+         "message": "维系心跳成功!"
+    }
+```
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/API_Documentations/README.md b/Linkis-Doc-master/en_US/API_Documentations/README.md
new file mode 100644
index 0000000..387b794
--- /dev/null
+++ b/Linkis-Doc-master/en_US/API_Documentations/README.md
@@ -0,0 +1,8 @@
+## 1. Document description
+Linkis1.0 has been refactored and optimized on the basis of Linkis0.x, and it remains compatible with the 0.x interfaces. However, to avoid compatibility problems when using version 1.0, please read the following documents carefully:
+
+1. When using Linkis1.0 for customized development, you need to use Linkis's authorization authentication interface. Please read [Login API Document](Login_API.md) carefully.
+
+2. Linkis1.0 provides a JDBC interface. If you need to access Linkis via JDBC, please read [Task Submit and Execute JDBC API Document](JDBC_API_Document.md).
+
+3. Linkis1.0 provides the Rest interface. If you need to develop upper-level applications on the basis of Linkis, please read [Task Submit and Execute Rest API Document](Linkis_task_submission_and_execution_RestAPI_document.md).
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/EngineConn/README.md b/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/EngineConn/README.md
new file mode 100644
index 0000000..d600a5f
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/EngineConn/README.md
@@ -0,0 +1,99 @@
+EngineConn architecture design
+==================
+
+EngineConn: Engine connector, which creates a connection session with the underlying computing/storage engine. It holds the session information between the engine and the specific cluster and is the client that communicates directly with the specific engine.
+
+EngineConn architecture diagram
+
+![EngineConn](../../../Images/Architecture/EngineConn/engineconn-01.png)
+
+Introduction to the second-level module:
+==============
+
+linkis-computation-engineconn interactive engine connector
+---------------------------------------------
+
+The ability to provide interactive computing tasks.
+
+| Core class               | Core function                                                   |
+|----------------------|------------------------------------------------------------|
+| EngineConnTask       | Defines the interactive computing tasks submitted to EngineConn                     |
+| ComputationExecutor  | Defined interactive Executor, with interactive capabilities such as status query and task kill. |
+| TaskExecutionService | Provides management functions for interactive computing tasks                             |
+
+linkis-engineconn-common engine connector common module
+--------------------------------------------
+
+Define the most basic entity classes and interfaces in the engine connector. EngineConn is used to create a connection session Session for the underlying computing storage engine, which contains the session information between the engine and the specific cluster, and is the client that communicates with the specific engine.
+
+| Core Service           | Core function                                                             |
+|-----------------------|----------------------------------------------------------------------|
+| EngineCreationContext | Contains the context information of EngineConn during startup                               |
+| EngineConn            | Contains the specific information of EngineConn, such as type, specific connection information with layer computing storage engine, etc. |
+| EngineExecution       | Provide Executor creation logic                                               |
+| EngineConnHook        | Define the operations before and after each phase of engine startup                                       |
+
+The core logic of linkis-engineconn-core engine connector
+------------------------------------------
+
+Defines the interfaces involved in the core logic of EngineConn.
+
+| Core class            | Core function                           |
+|-------------------|------------------------------------|
+| EngineConnManager | Provide related interfaces for creating and obtaining EngineConn |
+| ExecutorManager   | Provide related interfaces for creating and obtaining Executor   |
+| ShutdownHook      | Define the operation of the engine shutdown phase             |
+
+linkis-engineconn-launch engine connector startup module
+------------------------------------------
+
+Defines the logic of how to start EngineConn.
+
+| Core class           | core function                 |
+|------------------|--------------------------|
+| EngineConnServer | EngineConn microservice startup class |
+
+The core logic of the linkis-executor-core executor
+------------------------------------
+
+>   Defines the core classes related to the Executor. The Executor is the actual execution unit in a computing scenario, responsible for submitting user code to EngineConn.
+
+| Core class                 | Core function                                                   |
+|----------------------------|------------------------------------------------------------|
+| Executor | It is the actual computational logic execution unit and provides a top-level abstraction of the various capabilities of the engine. |
+| EngineConnAsyncEvent | Defines EngineConn-related asynchronous events |
+| EngineConnSyncEvent | Defines EngineConn-related synchronization events |
+| EngineConnAsyncListener | Defines EngineConn related asynchronous event listener |
+| EngineConnSyncListener | Defines EngineConn related synchronization event listener |
+| EngineConnAsyncListenerBus | Defines the listener bus for EngineConn asynchronous events |
+| EngineConnSyncListenerBus | Defines the listener bus for EngineConn synchronization events |
+| ExecutorListenerBusContext | Defines the context of the EngineConn event listener |
+| LabelService | Provide label reporting function |
+| ManagerService | Provides the function of information transfer with LinkisManager |
+
+linkis-callback-service callback logic
+-------------------------------
+
+| Core Class         | Core Function |
+|--------------------|--------------------------|
+| EngineConnCallback | Define EngineConn's callback logic |
+
+linkis-accessible-executor accessible executor module
+--------------------------------------------
+
+An Executor that can be accessed: you can interact with it through RPC requests to obtain its status, load, concurrency and other basic metrics data.
+
+
+| Core Class               | Core Function                                   |
+|--------------------------|-------------------------------------------------|
+| LogCache | Provide log cache function |
+| AccessibleExecutor | The Executor that can be accessed can interact with it through RPC requests. |
+| NodeHealthyInfoManager | Manage Executor's Health Information |
+| NodeHeartbeatMsgManager | Manage the heartbeat information of Executor |
+| NodeOverLoadInfoManager | Manage Executor load information |
+| Listener | Provides events related to Executor and the corresponding listener definition |
+| EngineConnTimedLock | Define Executor level lock |
+| AccessibleService | Provides the start-stop and status acquisition functions of Executor |
+| ExecutorHeartbeatService | Provides heartbeat related functions of Executor |
+| LockService | Provide lock management function |
+| LogService | Provide log management functions |
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/EngineConnManager/Images/ECM-01.png b/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/EngineConnManager/Images/ECM-01.png
new file mode 100644
index 0000000..cc83842
Binary files /dev/null and b/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/EngineConnManager/Images/ECM-01.png differ
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/EngineConnManager/Images/ECM-02.png b/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/EngineConnManager/Images/ECM-02.png
new file mode 100644
index 0000000..303f37a
Binary files /dev/null and b/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/EngineConnManager/Images/ECM-02.png differ
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/EngineConnManager/README.md b/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/EngineConnManager/README.md
new file mode 100644
index 0000000..45ded41
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/EngineConnManager/README.md
@@ -0,0 +1,45 @@
+EngineConnManager architecture design
+-------------------------
+
+EngineConnManager (ECM): EngineConn's manager, provides engine lifecycle management, and reports load information and its own health status to RM.
+###  ECM architecture
+
+![](Images/ECM-01.png)
+
+###  Introduction to the second-level module
+
+**Linkis-engineconn-linux-launch**
+
+The engine launcher, whose core class is LinuxProcessEngineConnLauch, is used to provide instructions for executing commands.
+
+**Linkis-engineconn-manager-core**
+
+The core module of the ECM. It includes the top-level interfaces of the ECM health report and EngineConn health report functions, defines the relevant metrics of the ECM service, and the core methods for constructing an EngineConn process.
+
+| Core top-level interface/class     | Core function                                                            |
+|------------------------------------|--------------------------------------------------------------------------|
+| EngineConn                         | Defines the properties of EngineConn, including methods and parameters   |
+| EngineConnLaunch                   | Define the start method and stop method of EngineConn                    |
+| ECMEvent                           | ECM related events are defined                                           |
+| ECMEventListener                   | Defined ECM related event listeners                                      |
+| ECMEventListenerBus                | Defines the listener bus of ECM                                          |
+| ECMMetrics                         | Defines the indicator information of ECM                                 |
+| ECMHealthReport                    | Defines the health report information of ECM                             |
+| NodeHealthReport                   | Defines the health report information of the node                        |
+
+**Linkis-engineconn-manager-server**
+
+The server side of ECM defines top-level interfaces and implementation classes such as the ECM health information processing service, ECM metrics processing service, ECM registration service, EngineConn start service, EngineConn stop service, EngineConn callback service, etc., which are mainly used by the ECM for life cycle management of itself and its EngineConns, health information reporting, heartbeat sending, etc.
+Core Service and Features module are as follows:
+
+| Core service                    | Core function                                        |
+|---------------------------------|-------------------------------------------------|
+| EngineConnLaunchService         | Contains core methods for generating EngineConn and starting the process          |
+| BmlResourceLocallizationService | Used to download engine-related resources from BML and generate the localized file directory |
+| ECMHealthService                | Reports the ECM's own health heartbeat to AM periodically                      |
+| ECMMetricsService               | Reports the ECM's own metrics to AM periodically                      |
+| EngineConnKillSerivce           | Provides related functions to stop the engine                          |
+| EngineConnListService           | Provides engine caching and management functions                    |
+| EngineConnCallBackService       | Provides engine callback functions                              |
+
+
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/EngineConnPlugin/README.md b/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/EngineConnPlugin/README.md
new file mode 100644
index 0000000..dc82f80
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/EngineConnPlugin/README.md
@@ -0,0 +1,68 @@
+EngineConnPlugin (ECP) architecture design
+===============================
+
+The engine connector plug-in is an implementation that can dynamically load the engine connector and reduce the occurrence of version conflicts. It has the characteristics of convenient expansion, fast refresh, and selective loading. In order to allow developers to freely extend Linkis's Engine engine, and dynamically load engine dependencies to avoid version conflicts, the EngineConnPlugin was designed and developed, allowing new engines to be introduced into the execution life cycle of [...]
+The plug-in interface disassembles the definition of the engine, including parameter initialization, allocation of engine resources, construction of engine connections, and setting of engine default tags.
+
+1. ECP architecture diagram
+
+![](../../../Images/Architecture/linkis-engineConnPlugin-01.png)
+
+Introduction to the second-level module:
+==============
+
+EngineConn-Plugin-Server
+------------------------
+
+The engine connector plug-in service is an entrance service that provides external registration of plug-ins, management of plug-ins, and plug-in resource construction. A successfully registered and loaded engine plug-in contains the logic of resource allocation and startup parameter configuration. During the engine initialization process, other services such as EngineConnManager call the logic of the corresponding plug-in in the Plugin Server through RPC requests.
+
+| Core Class                           | Core Function                              |
+|----------------------------------|---------------------------------------|
+| EngineConnLaunchService          | Responsible for building the engine connector launch request            |
+| EngineConnResourceFactoryService | Responsible for generating engine resources                      |
+| EngineConnResourceService        | Responsible for downloading the resource files used by the engine connector from BML |
+
+
+EngineConn-Plugin-Loader Engine Connector Plugin Loader
+---------------------------------------
+
+The engine connector plug-in loader is a loader used to dynamically load the engine connector plug-ins according to request parameters, and has the characteristics of caching. The specific loading process is mainly composed of two parts: 1) Plug-in resources such as the main program package and program dependency packages are loaded locally (not open). 2) Plug-in resources are dynamically loaded from the local into the service process environment, for example, loaded into the JVM virtual [...]
+
+| Core Class                          | Core Function                                     |
+|---------------------------------|----------------------------------------------|
+| EngineConnPluginsResourceLoader | Load engine connector plug-in resources                       |
+| EngineConnPluginsLoader         | Load the engine connector plug-in instance, or load an existing one from the cache |
+| EngineConnPluginClassLoader     | Dynamically instantiate engine connector instance from jar              |
+
+EngineConn-Plugin-Cache engine plug-in cache module
+----------------------------------------
+
+Engine connector plug-in cache is a cache service specially used to cache loaded engine connectors, supporting read, update and removal. A plug-in that has been loaded into the service process is cached together with its class loader to prevent repeated loading from affecting efficiency; at the same time, the cache module periodically notifies the loader to update the plug-in resources, and if changes are found, the plug-in is reloaded and the cache refreshed automatically.
+
+| Core Class                      | Core Function                     |
+|-----------------------------|------------------------------|
+| EngineConnPluginCache       | Cache loaded engine connector instance |
+| RefreshPluginCacheContainer | Engine connector that refreshes the cache regularly     |
+
+EngineConn-Plugin-Core: Engine connector plug-in core module
+---------------------------------------------
+
+The engine connector plug-in core module is the core module of the engine connector plug-in. Contains the implementation of the basic functions of the engine plug-in, such as the construction of the engine connector start command, the construction of the engine resource factory and the implementation of the core interface of the engine connector plug-in.
+
+| Core Class                  | Core Function                                                 |
+|-------------------------|----------------------------------------------------------|
+| EngineConnLaunchBuilder | Build Engine Connector Launch Request                                   |
+| EngineConnFactory       | Create Engine Connector                                           |
+| EngineConnPlugin        | The engine connector plug-in implements the interface, including resources, commands, and instance construction methods. |
+| EngineResourceFactory   | Engine Resource Creation Factory                                       |
+
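+To make the relationships between these classes more concrete, the Java sketch below shows how a plug-in conceptually groups the responsibilities described at the top of this document (parameter initialization, engine resource allocation, launch-request construction, connector construction, and default labels). All names and signatures here are illustrative assumptions, not the actual Linkis interfaces.
+
+```java
+import java.util.List;
+import java.util.Map;
+
+// Illustrative only -- a conceptual view of an engine connector plug-in, not the real Linkis API.
+interface IllustrativeEngineConnPlugin {
+    void init(Map<String, Object> params);          // parameter initialization
+    IllustrativeResourceFactory resourceFactory();  // allocation of engine resources (cf. EngineResourceFactory)
+    IllustrativeLaunchBuilder launchBuilder();      // construction of the launch request (cf. EngineConnLaunchBuilder)
+    IllustrativeConnFactory connFactory();          // construction of engine connections (cf. EngineConnFactory)
+    List<String> defaultLabels();                   // default engine labels
+}
+
+interface IllustrativeResourceFactory { Object requestResource(Map<String, Object> properties); }
+interface IllustrativeLaunchBuilder { Object buildLaunchRequest(Map<String, Object> engineParams); }
+interface IllustrativeConnFactory { Object createEngineConn(Map<String, Object> engineParams); }
+```
+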
+EngineConn-Plugins: Engine connection plugin collection
+-----------------------------------
+
+The engine connection plug-in collection is used to place the default engine connector plug-in library that has been implemented based on the plug-in interface defined by us. Provides the default engine connector implementation, such as jdbc, spark, python, shell, etc. Users can refer to the implemented cases based on their own needs to implement more engine connectors.
+
+| Core Class              | Core Function         |
+|---------------------|------------------|
+| engineplugin-jdbc   | jdbc engine connector   |
+| engineplugin-shell  | Shell engine connector  |
+| engineplugin-spark  | spark engine connector  |
+| engineplugin-python | python engine connector |
+
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/LinkisManager/AppManager.md b/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/LinkisManager/AppManager.md
new file mode 100644
index 0000000..dd69274
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/LinkisManager/AppManager.md
@@ -0,0 +1,33 @@
+## 1. Background
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;In the old version of Linkis, the Entrance module took on too many responsibilities, its ability to manage Engines was weak, and it was hard to extend. Therefore, the AppManager module has been extracted to take over the following responsibilities:  
+1. Add the AM module to move the engine management function previously done by Entrance to the AM module.
+2. AM needs to support operating Engine, including: adding, multiplexing, recycling, preheating, switching and other functions.
+3. Need to connect to the Manager module to provide Engine management functions: including Engine status maintenance, engine list maintenance, engine information, etc.
+4. AM needs to manage EM services, complete EM registration and forward the resource registration to RM.
+5. AM needs to integrate with the Label module: when an EM/Engine is added or deleted, the label manager must be notified to update the labels.
+6. AM also needs to use the Label module for label analysis, and must be able to obtain a list of serverInstances with scores through a series of labels (how to distinguish between EM and Engine? Their labels are completely different).
+7. Need to provide external basic interface: including the addition, deletion and modification of engine and engine manager, metric query, etc.  
+## Architecture diagram
+![AppManager03](./../../../../zh_CN/Images/Architecture/AppManager-03.png)  
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;As shown in the figure above: AM belongs to the AppManager module in LinkisMaster and provides services.  
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;New engine application flow chart:  
+![AppManager02](./../../../../zh_CN/Images/Architecture/AppManager-02.png)  
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;As the engine life cycle flow chart above shows, Entrance no longer manages the Engine; the startup and management of engines are controlled by AM.  
+## Architecture description
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;AppManager mainly includes engine service and EM service:
+Engine service includes all operations related to EngineConn, such as engine creation, engine reuse, engine switching, engine recycling, engine stopping, engine destruction, etc.
+The EM service is responsible for managing the information of all EngineConnManagers and can manage ECM services online, including modifying labels, suspending an ECM service, obtaining ECM instance information, obtaining the engines running on an ECM, and killing ECM operations. It can also query all EngineNodes by EM Node information, supports searching by user, and saves EM Node load information, node health information, resource usage information, etc.
+The new EngineConnManager and EngineConn both support tag management, and the types of engines have also added offline, streaming, and interactive support.  
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Engine creation: the part of the LinkisManager service specifically responsible for creating new engines. The engine startup module is fully responsible for the creation of a new engine, including obtaining the ECM label collection, requesting resources, obtaining the engine startup command, notifying the ECM to create the new engine, updating the engine list, etc.
+CreateEngineRequest -> RPC/Rest -> MasterEventHandler -> CreateEngineService ->
+-> LabelContext/EnginePlugin/RMResourceService -> (RecycleEngineService)EngineNodeManager -> EMNodeManager -> sender.ask(EngineLaunchRequest) -> EngineManager service -> EngineNodeManager -> EngineLocker -> Engine -> EngineNodeManager -> EngineFactory => EngineService => ServerInstance
+In the part of engine creation that interacts with RM, the EnginePlugin should return the specific resource type through Labels, and then AM sends the resource request to RM.
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Engine reuse: In order to reduce the time and resources consumed by engine startup, existing engines are reused in preference to creating new ones. Reuse generally refers to reusing engines that the user has already created. The engine reuse module is responsible for providing the collection of reusable engines, then electing and locking one of them for use, or returning that there is no engine that can be reused.
+ReuseEngineRequest -> RPC/Rest -> MasterEventHandler -> ReuseEngineService ->
+-> LabelContext -> EngineNodeManager -> EngineSelector -> EngineLocker -> Engine -> EngineNodeManager -> EngineReuser -> EngineService => ServerInstance
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Engine switching: mainly refers to switching the labels of an existing engine. For example, an engine created by Creator1 can be changed to Creator2 by engine switching, after which the engine is allowed to receive tasks with the tag Creator2.
+SwitchEngineRequest -> RPC/Rest -> MasterEventHandler -> SwitchEngineService -> LabelContext/EnginePlugin/RMResourceService -> EngineNodeManager -> EngineLocker -> Engine -> EngineNodeManager -> EngineReuser -> EngineService => ServerInstance.  
+Engine manager: the engine manager is responsible for managing the basic information and metadata of all engines.
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/LinkisManager/LabelManager.md b/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/LinkisManager/LabelManager.md
new file mode 100644
index 0000000..d8fa39c
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/LinkisManager/LabelManager.md
@@ -0,0 +1,38 @@
+## LabelManager architecture design
+
+#### Brief description
+LabelManager is a functional module in Linkis that provides label services to upper-level applications. It uses label technology to manage cluster resource allocation, service node election, user permission matching, and gateway routing and forwarding. It includes generalized parsing and processing tools that support various custom labels, as well as a universal label-matching scorer.
+### Overall architecture schematic
+
+![Overall architecture diagram](../../../Images/Architecture/LabelManager/label_manager_global.png)  
+
+#### Architecture description
+- LabelBuilder: Responsible for label parsing. It can parse an input label type, keyword, or character value into a specific label entity. A default generalized implementation class is provided, and custom extensions are supported.
+- LabelEntities: Refers to a collection of label entities, including cluster labels, configuration labels, engine labels, node labels, routing labels, search labels, etc.
+- NodeLabelService: The associated service interface class of instance/node and label, which defines the interface method of adding, deleting, modifying and checking the relationship between the two and matching the instance/node according to the label.
+- UserLabelService: Declare the associated operation between the user and the label.
+- ResourceLabelService: Declare the associated operations of cluster resources and labels, involving resource management of combined labels, cleaning or setting the resource value associated with the label.
+- NodeLabelScorer: Node label scorer. Its implementations correspond to different label matching algorithms, using a score to indicate the degree to which a node's labels match.
+
+### 1. LabelBuilder parsing process
+Take the generic label parsing class GenericLabelBuilder as an example to clarify the overall process.
+The process of label parsing/construction includes the following steps:
+1. According to the input, select the appropriate label class to be parsed.
+2. According to the definition of the label class, recursively parse the generic structure to obtain the specific label value type.
+3. Convert the input value object to the label value type, using implicit conversion or a forward/reverse parsing framework.
+4. Based on the results of steps 1-3, instantiate the label and perform some post-processing operations depending on the label class.
+
+### 2. NodeLabelScorer scoring process
+In order to select a suitable engine node based on the label list attached to a Linkis user execution request, the matching engine list needs to be ranked; this is quantified as the label matching degree of each engine node, that is, its score.
+In the label definition, each label has a feature value, namely CORE, SUITABLE, PRIORITIZED or OPTIONAL, and each feature value has a boost value, which works as a weight or incentive value.
+At the same time, some features such as CORE and SUITABLE are unique features, that is, strong filtering is applied during matching, and a node can only be associated with one CORE/SUITABLE label.
+According to the relationship between existing labels, nodes, and the labels attached to the request, the following schematic diagram can be drawn:
+![Label scoring](../../../Images/Architecture/LabelManager/label_manager_scorer.png)  
+
+The built-in default scoring logic generally includes the following steps (a simplified sketch follows the list):
+1. The input of the method is two relationship lists, namely `Label -> Node` and `Node -> Label`, where every Node in the `Node -> Label` relationship must hold all of the CORE and SUITABLE feature labels of the request; these nodes are called candidate nodes.
+2. The first step traverses the `Node -> Label` relationship list and, for each node, traverses its associated labels. Each label is scored first: if the label is not attached to the request, its score is 0;
+otherwise, its score is (base score / the number of labels in the request with the same feature value) * the boost value of that feature value, where the base score defaults to 1. The initial score of a node is the sum of its associated label scores. Because CORE/SUITABLE labels must be unique, their occurrence count is always 1.
+3. After obtaining the initial node scores, the second step traverses the `Label -> Node` relationship. Since the first step ignores labels that are not attached to the request, the proportion of such irrelevant labels also needs to influence the score. These labels are uniformly treated with the UNKNOWN feature, which also has a corresponding boost value;
+the higher the proportion of candidate nodes associated with an irrelevant label among all of its associated nodes, the more significant its impact on the score, which is further accumulated onto the initial node score obtained in the first step.
+4. Normalize the scores of the candidate nodes by their standard deviation and sort them.
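+
+The following Java sketch is a simplified illustration of this default scoring flow, not the actual NodeLabelScorer implementation. It gives each candidate node an initial score from its request-relevant labels (step 2), adjusts the score by the share of labels irrelevant to the request weighted with an assumed UNKNOWN boost (a rough stand-in for step 3), and finally normalizes by the standard deviation and sorts (step 4). All class and method names are assumptions for illustration.
+
+```java
+import java.util.*;
+import java.util.stream.Collectors;
+
+public class LabelScoreSketch {
+    /** Illustrative label: its name plus the boost of its feature value (CORE/SUITABLE/PRIORITIZED/OPTIONAL). */
+    record Label(String name, double boost) {}
+
+    /**
+     * nodeLabels: Node -> Label relations of the candidate nodes (each already holds all CORE/SUITABLE request labels).
+     * requestBoosts: labels attached to the user request, mapped to their boost values.
+     * unknownBoost: boost of the UNKNOWN feature used for labels that are not part of the request.
+     */
+    static List<Map.Entry<String, Double>> score(Map<String, List<Label>> nodeLabels,
+                                                 Map<String, Double> requestBoosts,
+                                                 double unknownBoost) {
+        Map<String, Double> scores = new HashMap<>();
+        nodeLabels.forEach((node, labels) -> {
+            // Step 2: initial score = sum over request-relevant labels (base score 1 * boost); other labels score 0.
+            double initial = labels.stream()
+                    .filter(l -> requestBoosts.containsKey(l.name()))
+                    .mapToDouble(Label::boost)
+                    .sum();
+            // Step 3 (simplified): adjust by the share of labels irrelevant to the request, weighted by the UNKNOWN boost.
+            long irrelevant = labels.stream().filter(l -> !requestBoosts.containsKey(l.name())).count();
+            double ratio = labels.isEmpty() ? 0.0 : (double) irrelevant / labels.size();
+            scores.put(node, initial + ratio * unknownBoost);
+        });
+        // Step 4: normalize by the standard deviation of the scores and sort in descending order.
+        double mean = scores.values().stream().mapToDouble(Double::doubleValue).average().orElse(0.0);
+        double std = Math.sqrt(scores.values().stream()
+                .mapToDouble(s -> (s - mean) * (s - mean)).average().orElse(0.0));
+        double divisor = std == 0.0 ? 1.0 : std;
+        return scores.entrySet().stream()
+                .map(e -> Map.entry(e.getKey(), e.getValue() / divisor))
+                .sorted(Map.Entry.<String, Double>comparingByValue().reversed())
+                .collect(Collectors.toList());
+    }
+}
+```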
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/LinkisManager/README.md b/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/LinkisManager/README.md
new file mode 100644
index 0000000..d13e6b1
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/LinkisManager/README.md
@@ -0,0 +1,41 @@
+LinkisManager Architecture Design
+====================
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;As an independent microservice of Linkis, LinkisManager provides AppManager (application management), ResourceManager (resource management), and LabelManager (label management) capabilities. It can support multi-active deployment and has the characteristics of high availability and easy expansion.  
+## 1. Architecture Diagram
+![Architecture Diagram](./../../../../zh_CN/Images/Architecture/LinkisManager/LinkisManager-01.png)  
+### Noun explanation
+- EngineConnManager (ECM): Engine Manager, used to start and manage engines.
+- EngineConn (EC): Engine connector, used to connect the underlying computing engine.
+- ResourceManager (RM): Resource Manager, used to manage node resources.
+## 2. Introduction to the second-level module
+### 2.1. Application management module linkis-application-manager
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;AppManager is used for unified scheduling and management of engines:  
+
+| Core Interface/Class | Main Function |
+|------------|--------|
+|EMInfoService | Defines EngineConnManager information query and modification functions |
+|EMRegisterService| Defines EngineConnManager registration function |
+|EMEngineService | Defines EngineConnManager's creation, query, and closing functions of EngineConn |
+|EngineAskEngineService | Defines the function of querying EngineConn |
+|EngineConnStatusCallbackService | Defines the function of processing EngineConn status callbacks |
+|EngineCreateService | Defines the function of creating EngineConn |
+|EngineInfoService | Defines EngineConn query function |
+|EngineKillService | Defines the stop function of EngineConn |
+|EngineRecycleService | Defines the recycling function of EngineConn |
+|EngineReuseService | Defines the reuse function of EngineConn |
+|EngineStopService | Defines the self-destruct function of EngineConn |
+|EngineSwitchService | Defines the engine switching function |
+|AMHeartbeatService | Provides EngineConnManager and EngineConn node heartbeat processing functions |
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;The process of applying for an engine through AppManager is as follows:  
+![AppManager](./../../../../zh_CN/Images/Architecture/LinkisManager/AppManager-01.png)  
+### 2.2. Label management module linkis-label-manager
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;LabelManager provides label management and analysis capabilities.  
+
+| Core Interface/Class | Main Function |
+|------------|--------|
+|LabelService | Provides the function of adding, deleting, modifying and checking labels |
+|ResourceLabelService | Provides resource label management functions |
+|UserLabelService | Provides user label management functions |
+
+The LabelManager architecture diagram is as follows:  
+![ResourceManager](./../../../../zh_CN/Images/Architecture/LinkisManager/ResourceManager-01.png)  
+### 2.4. Monitoring module linkis-manager-monitor
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Monitor provides the function of node status monitoring.
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/LinkisManager/ResourceManager.md b/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/LinkisManager/ResourceManager.md
new file mode 100644
index 0000000..cf1b2c9
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/LinkisManager/ResourceManager.md
@@ -0,0 +1,132 @@
+## 1. Background
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;ResourceManager (RM for short) is the computing resource management module of Linkis. All EngineConn (EC for short), EngineConnManager (ECM for short), and even external resources including Yarn are managed by RM. RM can manage resources based on users, ECM, or other granularities defined by complex tags.  
+## 2. The role of RM in Linkis
+![01](./../../../../zh_CN/Images/Architecture/rm-01.png)  
+![02](./../../../../zh_CN/Images/Architecture/rm-02.png)  
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;As a part of LinkisManager, RM mainly functions as follows: maintaining the available resource information reported by the ECMs, processing the resource applications submitted by the ECMs, recording in real time the actual resource usage reported by ECs during their life cycle after a successful application, and providing the relevant interfaces for querying current resource usage.  
+In Linkis, other services that interact with RM mainly include:  
+1. EngineConnManager, ECM for short: the microservice that processes requests to start engine connectors. As a resource provider, ECM is responsible for registering and unregistering resources with RM. At the same time, as the manager of the engines, ECM is responsible for applying for resources from RM on behalf of the new engine connector that is about to start. For each ECM instance, there is a corresponding resource record in the RM, which contains information such as the total resources a [...]
+![03](./../../../../zh_CN/Images/Architecture/rm-03.png)  
+2. The engine connector, referred to as EC, is the actual execution unit of user operations. At the same time, as the actual user of the resource, the EC is responsible for reporting the actual use of the resource to the RM. Each EC has a corresponding resource record in the RM: during the startup process, it is reflected as a locked resource; during the running process, it is reflected as a used resource; after being terminated, the resource record is subsequently deleted.  
+![04](./../../../../zh_CN/Images/Architecture/rm-04.png)  
+## 3. Resource type and format
+![05](./../../../../zh_CN/Images/Architecture/rm-05.png)  
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;As shown in the figure above, all resource classes implement a top-level Resource interface, which defines the calculation and comparison methods that all resource classes need to support and overloads the corresponding mathematical operators, so that resources can be calculated and compared directly like numbers.  
+| Operator | Correspondence Method | Operator | Correspondence Method |
+|--------|-------------|--------|-------------|
+| \+ | add | \> | moreThan |
+| \- | minus | \< | lessThan |
+| \* | multiply | = | equals |
+| / | divide | \>= | notLessThan |
+| \<= | notMoreThan | | |  
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;The currently supported resource types are shown in the following table. All resources have corresponding json serialization and deserialization methods, which can be stored in json format and transmitted across the network:  
+
+| Resource Type | Description |
+|-----------------------|--------------------------------------------------------|
+| MemoryResource | Memory Resource |
+| CPUResource | CPU Resource |
+| LoadResource | Both memory and CPU resources |
+| YarnResource | Yarn queue resources (queue, queue memory, queue CPU, number of queue instances) |
+| LoadInstanceResource | Server resources (memory, CPU, number of instances) |
+| DriverAndYarnResource | Driver and executor resources (with server resources and Yarn queue resources at the same time) |
+| SpecialResource | Other custom resources |  
+
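+The snippet below is a schematic Java rendering of this idea, using the method names from the operator table above. It is not the actual Linkis Resource class hierarchy, and the "=" operator simply maps to the usual equals method.
+
+```java
+// Schematic only -- method names follow the operator table above; the real Linkis classes differ in detail.
+interface SchematicResource {
+    SchematicResource add(SchematicResource other);       // +
+    SchematicResource minus(SchematicResource other);     // -
+    SchematicResource multiply(double factor);            // *
+    SchematicResource divide(double factor);              // /
+    boolean moreThan(SchematicResource other);            // >
+    boolean lessThan(SchematicResource other);            // <
+    boolean notLessThan(SchematicResource other);         // >=
+    boolean notMoreThan(SchematicResource other);         // <=
+    // "=" corresponds to equals(Object), inherited from Object.
+}
+```
+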
+## 4. Available resource management
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;The available resources in the RM mainly come from two sources: the available resources reported by the ECM, and the resource limits configured according to tags in the Configuration module.  
+**ECM resource report**:  
+1. When the ECM is started, it will broadcast the ECM registration message. After receiving the message, the RM will register the resource according to the content contained in the message. The resource-related content includes:
+
+     1. Total resources: the total number of resources that the ECM can provide.
+
+     2. Protect resources: When the remaining resources are less than this resource, no further resources are allowed to be allocated.
+
+     3. Resource type: such as LoadResource, DriverAndYarnResource and other type names.
+
+     4. Instance information: machine name plus port name.
+
+2. After RM receives the resource registration request, it adds a record to the resource table whose content is consistent with the interface parameters, finds the label representing the ECM through the instance information, and adds an association record to the resource-label association table.
+
+3. When the ECM is closed, it broadcasts a message that the ECM is closed. After receiving the message, RM takes the ECM offline according to the ECM instance information in the message, that is, deletes the resource and association records corresponding to the ECM instance label.  
+
+**Label-based resource configuration in the Configuration module**:  
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;In the Configuration module, users can configure the number of resources based on different tag combinations, such as limiting the maximum available resources of the User/Creator/EngineType combination.
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;The RM queries the Configuration module for resource information through the RPC message, using the combined tag as the query condition, and converts it into a Resource object to participate in subsequent comparison and recording.  
+
+## 5. Resource Usage Management  
+**Receive user's resource application:**  
+1. When LinkisManager receives a request to start EngineConn, it will call RM's resource application interface to apply for resources. The resource application interface accepts an optional time parameter. When the waiting time for applying for a resource exceeds the limit of the time parameter, the resource application will be automatically processed as a failure.  
+**Judging whether there are enough resources:**  
+That is, determine whether the remaining available resources are greater than or equal to the requested resources; if so, the resources are sufficient, otherwise they are insufficient.
+
+1. RM preprocesses the label information attached to the resource application, and filters, combines and converts the original labels according to rules (such as combining the User/Creator label and EngineType label), which makes the subsequent resource judgment more granular and flexible.
+
+2. Lock each converted label one by one, so that their corresponding resource records remain unchanged during the processing of resource applications.
+
+3. According to each label:
+
+    1. Query the corresponding resource record from the database through the Persistence module. If the record contains the remaining available resources, it is directly used for comparison.
+
+    2. If there is no direct remaining available resource record, it will be calculated by the formula of [Remaining Available Resource=Maximum Available Resource-Used Resource-Locked Resource-Protected Resource].
+
+    3. If there is no maximum available resource record, request the Configuration module to see if there is configured resource information, if so, use the formula for calculation, if not, skip the resource judgment for this tag.
+
+    4. If there is no resource record, skip the resource judgment for this tag.
+
+4. As long as one tag is judged to be insufficient in resources, the resource application will fail, and each tag will be unlocked one by one.
+
+5. Only when all labels are judged to have sufficient resources can the resource application pass and proceed to the next step (a schematic example of the per-label check is given after this list).  
+
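+Reusing the schematic `SchematicResource` interface sketched in section 3 above, the per-label sufficiency check described in the list (including the formula in step 3.2) can be illustrated as follows; this is a sketch, not RM's actual code.
+
+```java
+class LabelResourceCheckSketch {
+    /** Illustrative per-label sufficiency check, following steps 3.1-3.4 above. */
+    static boolean hasEnough(SchematicResource maxAvailable, SchematicResource used,
+                             SchematicResource locked, SchematicResource protectedResource,
+                             SchematicResource requested) {
+        // Remaining Available Resource = Maximum Available Resource - Used - Locked - Protected
+        SchematicResource remaining = maxAvailable.minus(used).minus(locked).minus(protectedResource);
+        return remaining.notLessThan(requested);
+    }
+}
+```
+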
+**Lock the applied resources:**
+
+1. Record the amount of requested resources by generating a new locked-resource record in the resource table, and associate it with each label.
+
+2. If a label has a remaining-available-resource record, deduct the corresponding amount from it.
+
+3. Generate a timed task that checks whether these locked resources are actually used after a certain period of time; if they are still unused when the timeout expires, they are forcibly recycled.
+
+4. Unlock each label.
+
+**Report the actual resource usage:**
+
+1. After EngineConn starts, it broadcasts a resource usage message. After receiving the message, RM checks whether the label corresponding to this EngineConn has a locked-resource record; if not, an error is reported.
+
+2. If there is a locked resource, lock all labels associated with the EngineConn.
+
+3. For each label, convert the locked amount in the corresponding resource record into used resources.
+
+4. Unlock all labels.
+
+**Release the actually used resources:**
+
+1. After the EngineConn life cycle ends, it broadcasts a resource recycling message. After receiving the message, RM checks whether the label corresponding to this EngineConn has used-resource records.
+
+2. If so, lock all labels associated with the EngineConn.
+
+3. Subtract the used amount from the corresponding resource record of each label.
+
+4. If a label has a remaining-available-resource record, increase the corresponding amount.
+
+5. Unlock each label.
+
+## 6. External resource management
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;In RM, in order to classify resources more flexibly and expansively, support multi-cluster resource management and control, and at the same time make it easier to access new external resources, the following considerations have been made in the design:
+
+1. Unified management of resources through tags. After the resource is registered, it is associated with the tag, so that the attributes of the resource can be expanded infinitely. At the same time, resource applications are also tagged to achieve flexible matching.
+
+2. Abstract the cluster into one or more tags, and maintain the environmental information corresponding to each cluster tag in the external resource management module to achieve dynamic docking.
+
+3. Abstract a general external resource management module. If you need to access new external resource types, you can convert different types of resource information into Resource entities in the RM as long as you implement a fixed interface to achieve unified management.  
+![06](./../../../../zh_CN/Images/Architecture/rm-06.png)  
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Other modules of RM obtain external resource information through the interface provided by ExternalResourceService.
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;The ExternalResourceService obtains information about external resources through resource types and tags:
+
+1. The type, label, configuration and other attributes of all external resources (such as cluster name, Yarn web
+     url, Hadoop version and other information) are maintained in the linkis\_external\_resource\_provider table.
+
+2. For each resource type, there is an implementation of the ExternalResourceProviderParser interface, which parses the attributes of external resources, converts the information that can be matched with Labels into the corresponding Label, and converts the information that can be used as parameters for requesting the resource interface into params. Finally, an ExternalResourceProvider instance that can be used as the basis for querying external resource information is constructed.
+
+3. According to the resource type and label information in the parameters of the ExternalResourceService method, find the matching ExternalResourceProvider, generate an ExternalResourceRequest based on the information in it, and formally call the API provided by the external resource to initiate a resource information request.
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/README.md b/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/README.md
new file mode 100644
index 0000000..343b7b2
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Architecture_Documents/Computation_Governance_Services/README.md
@@ -0,0 +1,40 @@
+## Background
+**The architecture of Linkis0.X mainly has the following problems**  
+1. The boundary between the core processing flow and the hierarchical module is blurred:  
+- Entrance and EngineManager function boundaries are blurred.
+
+- The main process of task submission and execution is not clear enough.
+
+- Extending a new engine is troublesome and requires implementing code in multiple modules.
+
+- Only computing request scenarios are supported; storage request scenarios and the resident service mode (Cluster) are difficult to support.  
+2. Demands for richer and more powerful computing governance functions:  
+- Insufficient support for computing task management strategies.
+
+- The labeling capability is not strong enough, which restricts computing strategies and resource management.  
+
+The new architecture of Linkis1.0 computing governance service can solve these problems well.  
+## Architecture Diagram  
+![linkis Computation Gov](./../../../zh_CN/Images/Architecture/linkis-computation-gov-01.png)  
+**Operation process optimization:** Linkis1.0 optimizes the overall execution process of a Job across the three stages of submission —\> preparation —\> execution, to fully upgrade Linkis's Job execution architecture, as shown in the following figure:  
+![](./../../../zh_CN/Images/Architecture/linkis-computation-gov-02.png)  
+## Architecture Description
+### 1. Entrance
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Entrance, as the submission portal for computing tasks, provides task reception, scheduling and job information forwarding capabilities. It is a native capability split from Linkis0.X's Entrance.  
+[Entrance Architecture Design](./Entrance/Entrance.md)  
+### 2. Orchestrator
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Orchestrator, as the entrance to the preparation phase, inherits the capabilities of parsing Jobs, applying for Engines, and submitting execution from Linkis0.X's Entrance. At the same time, Orchestrator provides powerful orchestration and computing strategy capabilities to meet the needs of application scenarios such as multi-active, active-standby, transactions, replay, rate limiting, and heterogeneous and mixed computing.  
+[Enter Orchestrator Architecture Design](../Orchestrator/README.md)  
+### 3. LinkisManager
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;As the management brain of Linkis, LinkisManager is mainly composed of AppManager, ResourceManager, LabelManager and EngineConnPlugin.  
+1. ResourceManager not only has Linkis0.X's resource management capabilities for Yarn and Linkis EngineManager, but also provides tag-based multi-level resource allocation and recycling capabilities, allowing ResourceManager to have full resource management capabilities across clusters and across computing resource types;
+2. AppManager coordinates and manages all EngineConnManagers and EngineConns; the life cycle of EngineConn application, reuse, creation, switching, and destruction is handed over to AppManager for management. LabelManager provides EngineConn and EngineConnManager routing and management capabilities across IDCs and clusters based on multi-level combined labels;
+3. EngineConnPlugin is mainly used to reduce the access cost of new computing storage, so that users can access a new computing storage engine only by implementing one class.  
+ [Enter LinkisManager Architecture Design](./LinkisManager/README.md)  
+### 4. Engine Manager
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;EngineConnManager (ECM) is a simplified and upgraded version of the Linkis0.X EngineManager. The ECM in Linkis1.0 removes the engine application capability, and the whole microservice is completely stateless. It focuses on supporting the startup and destruction of all kinds of EngineConn.  
+[Enter EngineConnManager Architecture Design](./EngineConnManager/README.md)  
+### 5. EngineConn
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;EngineConn is an optimized and upgraded version of the Linkis0.X Engine. It provides two modules: EngineConn and Executor. EngineConn is used to connect to the underlying computing/storage engine and provides a session that connects to it; Executor, based on this session, provides full-stack computing support for interactive computing, streaming computing, offline computing, and data storage.  
+[Enter EngineConn Architecture Design](./EngineConn/README.md)
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/DifferenceBetween1.0&0.x.md b/Linkis-Doc-master/en_US/Architecture_Documents/DifferenceBetween1.0&0.x.md
new file mode 100644
index 0000000..0965b0c
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Architecture_Documents/DifferenceBetween1.0&0.x.md
@@ -0,0 +1,50 @@
+## 1. Brief Description
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;First of all, the Entrance and EngineConnManager (formerly EngineManager) services under the Linkis1.0 architecture are completely engine-agnostic. That is, under the Linkis1.0 architecture, each engine no longer needs its own Entrance and EngineConnManager to be implemented and started; each Entrance and EngineConnManager in Linkis1.0 can be shared by all engines.  
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Secondly, Linkis1.0 added the Linkis-Manager service to provide external AppManager (application management), ResourceManager (resource management, the original ResourceManager service) and LabelManager (label management) capabilities.  
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Then, in order to reduce the difficulty of implementing and deploying a new engine, Linkis 1.0 re-architects a module called EngineConnPlugin. Each new engine only needs to implement the EngineConnPlugin interface.Linkis EngineConnPluginServer supports dynamic loading of EngineConnPlugin (new engine) in the form of a plug-in. Once EngineConnPluginServer is successfully loaded, EngineConnManager can quickly start an instance of the engine fo [...]
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Finally, all the microservices of Linkis are summarized and classified, which are generally divided into three major levels: public enhancement services, computing governance services and microservice governance services, from the code hierarchy, microservice naming and installation directory structure, etc. To standardize the microservice system of Linkis1.0.  
+## 2. Main Features
+1. **Strengthened computing governance.** Linkis1.0 strengthens the comprehensive management and control capabilities of computing governance in engine management, label management, ECM management, and resource management, all based on a label-centric design. This takes Linkis1.0 a solid step towards multi-IDC, multi-cluster, and multi-container deployments.  
+2. **Simplified implementation of new engines.** EngineConnPlugin consolidates the interfaces and classes that previously had to be implemented for a new engine, together with the old Entrance-EngineManager-Engine three-tier module system, into a single interface, simplifying the process and code required to implement a new engine: a new engine can now be connected by implementing just one class.  
+3. **Full-stack computing storage engine support.** Full coverage of computing request scenarios (such as Spark), storage request scenarios (such as HBase), and resident cluster services (such as SparkStreaming).  
+4. **Improved advanced computing strategy capability.** Orchestrator is added to implement rich computing task management strategies and to support label-based analysis and orchestration.  
+## 3. Service Comparison
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Please refer to the following two pictures:  
+![Linkis0.X Service List](./../Images/Architecture/Linkis0.X-services-list.png)  
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;The list of Linkis1.0 microservices is as follows:  
+![Linkis1.0 Service List](./../Images/Architecture/Linkis1.0-services-list.png)  
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;From the above two figures, Linkis1.0 divides services into three types: Computing Governance (CG), Microservice Governance (MG) and Public Enhancement Services (PS). Among them:  
+1. A major change in computing governance is that Entrance and EngineConnManager services are no longer related to engines. To implement a new engine, only the EngineConnPlugin plug-in needs to be implemented. EngineConnPluginServer will dynamically load the EngineConnPlugin plug-in to achieve engine hot-plug update;
+2. Another major change in computing governance is that LinkisManager, as the management brain of Linkis, abstracts and defines AppManager (application management), ResourceManager (resource management) and LabelManager (label management);
+3. Microservice governance services merge and unify the Eureka and Gateway services of 0.X, and enhance the Gateway service to support routing and forwarding according to Label;
+4. Public enhancement services mainly optimize and unify the BML service, context service, data source service and public services of 0.X, making them easier to manage and view.  
+## 4. Introduction To Linkis Manager
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;As the management brain of Linkis, Linkis Manager is mainly composed of AppManager, ResourceManager and LabelManager.  
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;ResourceManager not only has Linkis0.X's resource management capabilities for Yarn and Linkis EngineManager, but also provides tag-based multi-level resource allocation and recycling capabilities, allowing ResourceManager to have full resource management capabilities across clusters and across computing resource types.  
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;AppManager will coordinate and manage all EngineConnManager and EngineConn, and the life cycle of EngineConn application, reuse, creation, switching, and destruction will be handed over to AppManager for management.  
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;The LabelManager will provide cross-IDC and cross-cluster EngineConn and EngineConnManager routing and management capabilities based on multi-level combined tags.  
+## 5. Introduction To Linkis EngineConnPlugin
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;EngineConnPlugin is mainly used to reduce the cost of accessing and deploying new computing storage engines. It truly enables users to "implement just one class to connect to a new computing storage engine, and execute just one script to quickly deploy a new engine".  
+### 5.1 New Engine Implementation Comparison
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;The following are the relevant interfaces and classes that a user needs to implement in Linkis0.X to add a new engine:  
+![Linkis0.X How to implement a brand new engine](./../Images/Architecture/Linkis0.X-NewEngine-architecture.png)  
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;The following are the interfaces and classes that a user needs to implement to add a new engine in Linkis1.0:  
+![Linkis1.0 How to implement a brand new engine](./../Images/Architecture/Linkis1.0-NewEngine-architecture.png)  
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Among them, EngineConnResourceFactory and EngineConnLaunchBuilder are optional interfaces; only EngineConnFactory is required to be implemented.  
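+
+To make the idea concrete, the sketch below shows roughly what "implementing one class" looks like. The class name, method signatures and the JDBC example are illustrative assumptions only, not the exact Linkis1.0 interfaces, which differ in detail.
+
+```java
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.Statement;
+import java.util.Map;
+
+// Illustrative sketch only: names and signatures are assumptions, not the real
+// Linkis EngineConnFactory interface.
+public class DemoJdbcEngineConnFactory {
+
+    /** Create the session object that connects to the underlying engine. */
+    public Connection createEngineConnSession(Map<String, String> creationContext) throws Exception {
+        return DriverManager.getConnection(
+                creationContext.get("jdbc.url"),
+                creationContext.get("jdbc.user"),
+                creationContext.get("jdbc.password"));
+    }
+
+    /** Create the Executor that actually runs user code on top of the session. */
+    public DemoExecutor createExecutor(Connection session) {
+        return new DemoExecutor(session);
+    }
+
+    /** A trivial Executor: executes one statement through the session. */
+    public static class DemoExecutor {
+        private final Connection session;
+
+        DemoExecutor(Connection session) {
+            this.session = session;
+        }
+
+        public boolean execute(String code) throws Exception {
+            try (Statement statement = session.createStatement()) {
+                return statement.execute(code);
+            }
+        }
+    }
+}
+```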
+### 5.2 New engine startup process
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;EngineConnPlugin provides a Server service that starts and loads all engine plug-ins. The following shows the whole process of a new engine being started through EngineConnPlugin-Server:  
+![Linkis Engine start process](./../Images/Architecture/Linkis1.0-newEngine-initialization.png)  
+## 6. Introduction To Linkis EngineConn
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;EngineConn, the original Engine module, is the actual unit for Linkis to connect and interact with the underlying computing storage engine, and is the basis for Linkis to provide computing and storage capabilities.  
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;The EngineConn of Linkis1.0 is mainly composed of EngineConn and Executor. Among them:  
+
+1. EngineConn is the connector, which contains the session information between the engine and the specific cluster. It only acts as a connection, a client, and does not actually perform calculations.  
+
+2. Executor is the executor. As a real computing scene executor, it is the actual computing logic execution unit, and it also abstracts various specific capabilities of the engine, such as providing various services such as locking, access status, and log acquisition.
+
+3. Executor is created from the session information in EngineConn. An engine type can support multiple different kinds of computing tasks, each corresponding to an Executor implementation, and a computing task is submitted to the corresponding Executor for execution. In this way, the same engine can provide different services according to different computing scenarios; for example, a permanent engine does not need to be locked after it is started, while a one-time engine exits directly after its task is executed.  
+
+4. The advantage of separating Executor from EngineConn is that it prevents the Receiver from being coupled with business logic, keeping only the RPC communication function. Services are distributed across multiple Executor modules and abstracted into the several categories of engines that may be needed (interactive computing engines, streaming engines, one-time engines, etc.), building a unified engine framework for later expansion.
+In this way, different types of engines can load only the capabilities they need, which greatly reduces the redundancy of engine implementations.  
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;As shown below:  
+![Linkis EngineConn Architecture diagram](./../Images/Architecture/Linkis1.0-EngineConn-architecture.png)
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/How_to_add_an_EngineConn.md b/Linkis-Doc-master/en_US/Architecture_Documents/How_to_add_an_EngineConn.md
new file mode 100644
index 0000000..c28635b
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Architecture_Documents/How_to_add_an_EngineConn.md
@@ -0,0 +1,105 @@
+# How to add an EngineConn
+
+Adding an EngineConn is one of the core processes of the computing task preparation phase of Linkis computing governance. It mainly includes the following steps: first, the client side (Entrance or a user client) initiates a request for a new EngineConn to LinkisManager; then LinkisManager asks an EngineConnManager to start the EngineConn based on the demands and label rules; finally, LinkisManager returns the usable EngineConn to the client side.
+
+Based on the figure below, let's explain the whole process in detail:
+
+![Process of adding a EngineConn](../Images/Architecture/Add_an_EngineConn/add_an_EngineConn_flow_chart.png)
+
+## 1. LinkisManager receives the request from the client side
+
+**Glossary:**
+
+- LinkisManager: The management center of Linkis computing governance capabilities. Its main responsibilities are:
+  1. Based on multi-level combined tags, provide users with available EngineConn after complex routing, resource management and load balancing.
+
+  2. Provide EC and ECM full life cycle management capabilities.
+
+  3. Provide users with multi-Yarn cluster resource management functions based on multi-level combined tags. It is mainly divided into three modules: AppManager, ResourceManager and LabelManager , which can support multi-active deployment and have the characteristics of high availability and easy expansion.
+
+After the AM module receives the client's new EngineConn request, it first checks the request parameters to determine their validity. Second, it selects the most suitable EngineConnManager (ECM) through complex rules for the subsequent EngineConn startup. Next, it applies to RM for the resources needed to start the EngineConn. Finally, it requests the ECM to create the EngineConn.
+
+The four steps will be described in detail below.
+
+### 1. Request parameter verification
+
+After the AM module receives the engine creation request, it will check the parameters. First, it checks the permissions of the requesting user and the creating user, and then checks the Labels attached to the request. Since Labels are used in AM's subsequent creation process to find the ECM and to record resource information, you need to ensure that the necessary Labels are present. At this stage, the request must carry a UserCreatorLabel (for example: hadoop-IDE) and an EngineTypeLabel (for example: spark-2.4.3).
+
+### 2. Select an EngineConnManager (ECM)
+
+ECM selection uses the Labels passed by the client to choose a suitable ECM service to start the EngineConn. In this step, the LabelManager first searches among the registered ECMs with the Labels passed by the client and returns them in order of label matching degree. After the registered ECM list is obtained, selection rules are applied to these ECMs. At this stage, rules such as availability check, resource surplus, and machine load have been implemented; after all rules are applied, the ECM with the highest score is returned.
+
+### 3. Apply for the resources required by the EngineConn
+
+1. After obtaining the assigned ECM, AM will then ask the EngineConnPluginServer service how many resources the client's engine creation request will use. Here, the resource request is encapsulated, mainly including the Labels, the EngineConn startup parameters passed by the client, and the user configuration parameters obtained from the Configuration module; the resource information is obtained by calling the ECP service through RPC.
+
+2. After the EngineConnPluginServer service receives the resource request, it first finds the corresponding engine label from the labels passed in and selects the EngineConnPlugin of that engine through the engine label. Then it uses the EngineConnPlugin's resource generator to calculate, based on the engine startup parameters passed in by the client, the resources required to apply for a new EngineConn this time, and returns the result to LinkisManager. 
+
+   **Glossary:**
+
+- EngineConnPlugin: The interface that must be implemented when connecting a new computing storage engine to Linkis. This interface mainly includes several capabilities that the EngineConn must provide during the startup process, including the EngineConn resource generator, the EngineConn startup command generator and the EngineConn connector. Please refer to the Spark engine implementation class for a concrete example: [SparkEngineConnPlugin](https://github.com/WeBankFinTech/Link [...]
+- EngineConnPluginServer: It is a microservice that loads all the EngineConnPlugins and provides externally the required resource generation capabilities of EngineConn and EngineConn's startup command generation capabilities.
+- EngineConnResourceFactory: Calculate the total resources needed when EngineConn starts this time through the parameters passed in.
+- EngineConnLaunchBuilder: Through the incoming parameters, a startup command of the EngineConn is generated to provide the ECM to start the engine.
+3. After AM obtains the engine resources, it will then call the RM service to apply for resources. The RM service uses the incoming Labels, the ECM, and the resources applied for this time to make a resource judgment. First, it judges whether the resources of the client corresponding to the Labels are sufficient, and then whether the resources of the ECM service are sufficient. If the resources are sufficient, the resource application is approved this time and the resources of the corresponding Label and ECM are deducted accordingly.
+
+### 4. Request ECM for engine creation
+
+1. After completing the resource application for the engine, AM will encapsulate the engine startup request, send it to the corresponding ECM via RPC for service startup, and obtain the instance object of EngineConn.
+2. AM will then determine whether EngineConn is successfully started and become available through the reported information of EngineConn. If it is, the result will be returned, and the process of adding an engine this time will end.
+
+## 2. ECM initiates EngineConn
+
+**Glossary:**
+
+- EngineConnManager: EngineConn's manager. Provides engine life-cycle management, and at the same time reports load information and its own health status to RM.
+- EngineConnBuildRequest: The start engine command passed by LinkisManager to ECM, which encapsulates all tag information, required resources and some parameter configuration information of the engine.
+- EngineConnLaunchRequest: Contains the BML materials, environment variables, ECM required local environment variables, startup commands and other information required to start an EngineConn, so that ECM can build a complete EngineConn startup script based on this.
+
+After ECM receives the EngineConnBuildRequest command passed by LinkisManager, it is mainly divided into three steps to start EngineConn: 
+
+1. Request EngineConnPluginServer to obtain EngineConnLaunchRequest encapsulated by EngineConnPluginServer. 
+2.  Parse EngineConnLaunchRequest and encapsulate it into EngineConn startup script.
+3.  Execute startup script to start EngineConn.
+
+### 2.1 EngineConnPluginServer encapsulates EngineConnLaunchRequest
+
+Through the label information in EngineConnBuildRequest, obtain the EngineConn type and version that actually needs to be started, get the EngineConnPlugin of that EngineConn type from the memory of EngineConnPluginServer, and convert the EngineConnBuildRequest into an EngineConnLaunchRequest through the EngineConnLaunchBuilder of that EngineConnPlugin.
+
+### 2.2 Encapsulate EngineConn startup script
+
+After the ECM obtains the EngineConnLaunchRequest, it downloads the BML materials in the EngineConnLaunchRequest to the local, and checks whether the local necessary environment variables required by the EngineConnLaunchRequest exist. After the verification is passed, the EngineConnLaunchRequest is encapsulated into an EngineConn startup script.
+
+### 2.3 Execute startup script
+
+Currently, ECM only supports Bash commands for Unix systems, that is, only supports Linux systems to execute the startup script.
+
+Before startup, the sudo command is used to switch to the corresponding requesting user to execute the script to ensure that the startup user (ie, JVM user) is the requesting user on the Client side.
+
+After the startup script is executed, ECM will monitor the execution status and execution log of the script in real time. Once the exit status is non-zero, it immediately reports an EngineConn startup failure to LinkisManager and the whole process ends; otherwise, it keeps monitoring the log and status of the startup script until the script execution is complete.
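+
+Conceptually, running the generated script as the requesting user and treating a non-zero exit code as a startup failure looks like the minimal sketch below (an illustration of the idea only, not ECM's actual implementation):
+
+```java
+import java.io.IOException;
+
+// Minimal sketch: run the startup script as the requesting user via sudo and
+// fail fast on a non-zero exit code. ECM additionally tails the log in real time.
+public class StartupScriptRunner {
+    public static void run(String requestUser, String scriptPath) throws IOException, InterruptedException {
+        Process process = new ProcessBuilder("sudo", "-u", requestUser, "bash", scriptPath)
+                .inheritIO()          // stream the script output so it can be monitored
+                .start();
+        int exitCode = process.waitFor();
+        if (exitCode != 0) {
+            // ECM would report this back to LinkisManager as an EngineConn startup failure.
+            throw new IllegalStateException("EngineConn startup script failed, exit code " + exitCode);
+        }
+    }
+}
+```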
+
+## 3. EngineConn initialization
+
+After ECM has executed EngineConn's startup script, the EngineConn microservice is officially launched.
+
+**Glossary:**
+
+- EngineConn microservice: Refers to the actual microservices that include an EngineConn and one or more Executors to provide computing power for computing tasks. When we talk about adding an EngineConn, we actually mean adding an EngineConn microservice.
+- EngineConn: The engine connector is the actual connection unit with the underlying computing storage engine, and contains the session information with the actual engine. The difference between it and Executor is that EngineConn only acts as a connection and a client, and does not actually perform calculations. For example, SparkEngineConn, its session information is SparkSession.
+- Executor: As a real computing storage scenario executor, it is the actual computing storage logic execution unit. It abstracts the various capabilities of EngineConn and provides multiple different architectural capabilities such as interactive execution, subscription execution, and responsive execution.
+
+The initialization of EngineConn microservices is generally divided into three stages:
+
+1. Initialize the EngineConn of the specific engine. First, the command line parameters of the Java main method are used to encapsulate an EngineCreationContext that contains the relevant label information, startup information, and parameter information, and EngineConn is initialized through the EngineCreationContext to establish the connection between EngineConn and the underlying engine. For example, SparkEngineConn initializes a SparkSession at this stage to establish a connection with the underlying Spark cluster.
+2. Initialize the Executor. After the EngineConn is initialized, the corresponding Executor will be initialized according to the actual usage scenario to provide service capabilities for subsequent users. For example, the SparkEngineConn in an interactive computing scenario initializes a series of Executors that can submit and execute SQL, PySpark, and Scala code, supporting the Client to submit SQL, PySpark, Scala and other code to the SparkEngineConn for execution.
+3. Report the heartbeat to LinkisManager regularly and wait for EngineConn to exit. When the underlying engine corresponding to EngineConn becomes abnormal, the maximum idle time is exceeded, the Executor finishes execution, or the user manually kills it, the EngineConn automatically ends and exits.
+
+----
+
+At this point, the process of adding a new EngineConn is basically complete. Finally, let's make a summary:
+
+- The client initiates a request for adding EngineConn to LinkisManager.
+- LinkisManager checks the legitimacy of the parameters, first selects the appropriate ECM according to the label, then confirms the resources required for this new EngineConn according to the user's request, applies for resources from the RM module of LinkisManager, and requires ECM to start a new EngineConn as required after the application is passed.
+- ECM first requests EngineConnPluginServer to obtain an EngineConnLaunchRequest containing BML materials, environment variables, ECM required local environment variables, startup commands and other information needed to start an EngineConn, and then encapsulates the startup script of EngineConn, and finally executes the startup script to start the EngineConn.
+- EngineConn initializes the EngineConn of a specific engine, and then initializes the corresponding Executor according to the actual usage scenario, and provides service capabilities for subsequent users. Finally, report the heartbeat to LinkisManager regularly, and wait for the normal end or termination by the user.
+
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Job_submission_preparation_and_execution_process.md b/Linkis-Doc-master/en_US/Architecture_Documents/Job_submission_preparation_and_execution_process.md
new file mode 100644
index 0000000..adb2628
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Architecture_Documents/Job_submission_preparation_and_execution_process.md
@@ -0,0 +1,138 @@
+# Job submission, preparation and execution process
+
+The submission and execution of computing tasks (Job) is the core capability provided by Linkis. It runs through almost all modules of the Linkis computing governance architecture and occupies a core position in Linkis.
+
+The whole process, starting with the submission of the user's computing task from the client and ending with the return of the final result, is divided into three stages: submission -> preparation -> execution. The details are shown in the following figure.
+
+![The overall flow chart of computing tasks](../Images/Architecture/Job_submission_preparation_and_execution_process/overall.png)
+
+Among them:
+
+- Entrance, as the entrance to the submission stage, provides task reception, scheduling and job information forwarding capabilities. It is the unified entrance for all computing tasks. It will forward computing tasks to Orchestrator for scheduling and execution.
+- Orchestrator, as the entrance to the preparation phase, mainly provides job analysis, orchestration and execution capabilities.
+- Linkis Manager: The management center of computing governance capabilities. Its main responsibilities are as follows:
+
+  1. ResourceManager: Not only has the resource management capabilities of Yarn and Linkis EngineConnManager, but also provides tag-based multi-level resource allocation and recovery capabilities, allowing ResourceManager to have full resource management capabilities across clusters and across computing resource types;
+  2. AppManager: Coordinates and manages all EngineConnManager and EngineConn; the life cycle of EngineConn application, reuse, creation, switching, and destruction is handed over to AppManager for management;
+  3. LabelManager: Based on multi-level combined labels, it will provide label support for the routing and management capabilities of EngineConn and EngineConnManager across IDC and across clusters;
+  4. EngineConnPluginServer: Externally provides the resource generation capabilities required to start an EngineConn and EngineConn startup command generation capabilities.
+- EngineConnManager: It is the manager of EngineConn, which provides engine life-cycle management, and at the same time reports load information and its own health status to RM.
+- EngineConn: It is the actual connector between Linkis and the underlying computing storage engines. All user computing and storage tasks will eventually be submitted to the underlying computing storage engine by EngineConn. According to different user scenarios, EngineConn provides full-stack computing capability framework support for interactive computing, streaming computing, off-line computing, and data storage tasks.
+
+## 1. Submission Stage
+
+The submission phase is mainly the interaction of Client -> Linkis Gateway -> Entrance, and the process is as follows:
+
+![Flow chart of submission phase](../Images/Architecture/Job_submission_preparation_and_execution_process/submission.png)
+
+1. First, the Client (such as the front end or the client) initiates a Job request, and the job request information is simplified as follows (for the specific usage of Linkis, please refer to [How to use Linkis](https://github.com/WeBankFinTech/Linkis-Doc/blob/master/en_US/User_Manual/How_To_Use_Linkis.md)):
+```
+POST /api/rest_j/v1/entrance/submit
+```
+
+```json
+{
+    "executionContent": {"code": "show tables", "runType": "sql"},
+    "params": {"variable": {}, "configuration": {}},  //非必须
+    "source": {"scriptPath": "file:///1.hql"}, //非必须,仅用于记录代码来源
+    "labels": {
+        "engineType": "spark-2.4.3",  //指定引擎
+        "userCreator": "johnnwnag-IDE"  // 指定提交用户和提交系统
+    }
+}
+```
+
+2. After Linkis-Gateway receives the request, it determines the target microservice for routing and forwarding from the serviceName in the URI ``/api/rest_j/v1/${serviceName}/.+``. Here Linkis-Gateway parses out the name as entrance, and the Job is forwarded to the Entrance microservice. It should be noted that if the user specifies a routing label, the Entrance microservice instance with the corresponding label will be selected for forwarding according to the routing label instead of being forwarded at random.
+3. After Entrance receives the Job request, it will first simply verify the legitimacy of the request, then use RPC to call JobHistory to persist the job information, and then encapsulate the Job request as a computing task, put it in the scheduling queue, and wait for it to be consumed by consumption thread.
+4. The scheduling queue will open up a consumption queue and a consumption thread for each group. The consumption queue is used to store the user computing tasks that have been preliminarily encapsulated, and the consumption thread continuously takes computing tasks from the consumption queue in a FIFO manner. The current default grouping method is Creator + User (that is, submission system + user). Therefore, even for the same user, computing tasks submitted by different submission systems use completely different consumption queues and consumption threads and are fully isolated from each other.
+5. After the consuming thread takes out the calculation task, it will submit the calculation task to Orchestrator, which officially enters the preparation phase.
+
+## 2. Preparation Stage
+
+There are two main processes in the preparation phase. One is to apply for an available EngineConn from LinkisManager to submit and execute the following computing tasks. The other is for Orchestrator to orchestrate the computing task submitted by Entrance, converting the user's computing request into a physical execution tree that is handed over to the execution phase, where the computing task is actually executed. 
+
+#### 2.1 Apply to LinkisManager for available EngineConn
+
+If the user has a reusable EngineConn in LinkisManager, the EngineConn is directly locked and returned to Orchestrator, and the entire application process ends.
+
+How to define a reusable EngineConn? It refers to those that can match all the label requirements of the computing task, and the EngineConn's own health status is Healthy (the load is low and the actual status is Idle). Then, all the EngineConn that meets the conditions are sorted and selected according to the rules, and finally the best one is locked.
+
+If the user does not have a reusable EngineConn, a process to request a new EngineConn will be triggered at this time. Regarding the process, please refer to: [How to add an EngineConn](How_to_add_an_EngineConn.md).
+
+#### 2.2 Orchestrate a computing task
+
+Orchestrator is mainly responsible for arranging a computing task (JobReq) into a physical execution tree (PhysicalTree) that can be actually executed, and providing the execution capabilities of the Physical tree.
+
+Here we first focus on Orchestrator's computing task scheduling capabilities. A flow chart is shown below:
+
+![Orchestration flow chart](../Images/Architecture/Job_submission_preparation_and_execution_process/orchestrate.png)
+
+The main process is as follows:
+
+- Converter: Complete the conversion of the JobReq (task request) submitted by the user to Orchestrator's ASTJob. This step will perform parameter check and information supplementation on the calculation task submitted by the user, such as variable replacement, etc.
+- Parser: Complete the analysis of ASTJob. Split ASTJob into an AST tree composed of ASTJob and ASTStage.
+- Validator: Complete the inspection and information supplement of ASTJob and ASTStage, such as code inspection, necessary Label information supplement, etc.
+- Planner: Convert an AST tree into a Logical tree. The Logical tree at this time has been composed of LogicalTask, which contains all the execution logic of the entire computing task.
+- Optimizer: Convert the Logical tree into a Physical tree and optimize the Physical tree.
+
+In a physical tree, the majority of nodes are computing strategy logic. Only the middle ExecTask truly encapsulates the execution logic which will be further submitted to and executed at EngineConn. As shown below:
+
+![Physical Tree](../Images/Architecture/Job_submission_preparation_and_execution_process/physical_tree.png)
+
+The execution logic encapsulated by JobExecTask and StageExecTask in the Physical tree depends on the specific computing strategy: different computing strategies encapsulate different execution logic.
+
+For example, under the multi-active computing strategy, for a computing task submitted by a user, the execution logic submitted to EngineConn of different clusters for execution is encapsulated in two ExecTasks, and the related strategy logic is reflected in the parent node (StageExecTask(End)) of the two ExecTasks.
+
+Here, we take the multi-reading scenario under the multi-active computing strategy as an example.
+
+In the multi-reading scenario, only one ExecTask result needs to be returned; once that result is returned, the Physical tree can be marked as successful. However, the Physical tree only has the ability to execute sequentially according to dependencies and cannot terminate the execution of individual nodes; once a node is canceled or fails to execute, the entire Physical tree would be marked as failed. At this time, StageExecTask (End) is needed to ensure that the Physical tree can both cancel the remaining nodes in time and return the successful result, marking the whole tree as successful.
+
+The orchestration process of Linkis Orchestrator is similar to that of many SQL parsing engines (such as the SQL parsers of Spark and Hive). But in fact, the orchestration capability of Linkis Orchestrator is oriented to the computing governance field and the different computing governance needs of users, whereas a SQL parsing engine is oriented to parsing and orchestrating the SQL language. Here is a simple distinction:
+
+1. What Linkis Orchestrator mainly solves are the orchestration requirements that different computing tasks have for computing strategies. For example, in order to be multi-active, Orchestrator will, for a computing task submitted by the user and based on the "multi-active" computing strategy, build a Physical tree that submits this computing task to multiple clusters for execution. In the process of constructing the entire Physical tree, various possible abnormal scenarios have been fully considered and are all reflected in the Physical tree.
+2. The orchestration ability of Linkis Orchestrator has nothing to do with the programming language. In theory, as long as an engine has adapted to Linkis, all the programming languages it supports can be orchestrated, while the SQL parsing engine only cares about the analysis and execution of SQL, and is only responsible for parsing a piece of SQL into one executable Physical tree, and finally calculate the result.
+3. Linkis Orchestrator also has the ability to parse SQL, but SQL parsing is just one of the Parser implementations of Orchestrator for the SQL language. The Parser of Linkis Orchestrator also considers introducing Apache Calcite to parse SQL, supporting splitting a user SQL that spans multiple computing engines (which must be computing engines already integrated with Linkis) into multiple sub-SQLs that are submitted to the corresponding engines during the execution phase; finally, a suitable computing engine is selected to perform the aggregation calculation.
+
+Please refer to [Orchestrator Architecture Design](https://github.com/WeBankFinTech/Linkis-Doc/blob/master/en_US/Architecture_Documents/Orchestrator/Orchestrator_architecture_doc.md) for more details. 
+
+After the analysis and orchestration by Linkis Orchestrator, the computing task has been transformed into an executable Physical tree. Orchestrator submits this Physical tree to its Execution module and enters the final execution stage.
+
+## 3. Execution Stage
+
+The execution stage is mainly divided into the following two steps, which are the last two phases of the capabilities provided by Linkis Orchestrator:
+
+![Flow chart of the execution stage](../Images/Architecture/Job_submission_preparation_and_execution_process/execution.png)
+
+The main process is as follows:
+
+- Execution: Analyze the dependencies of the Physical tree, and execute them sequentially from the leaf nodes according to the dependencies.
+- Reheater: Once the execution of a node in the Physical tree is completed, a reheat is triggered. Reheating allows the Physical tree to be dynamically adjusted according to real-time execution. For example: if a leaf node fails to execute and retry is supported (the failure was caused by throwing a ReTryExecption), the Physical tree is automatically adjusted and a retry parent node with exactly the same content is added above that leaf node.
+
+Let us go back to the Execution stage, where we focus on the execution logic of the ExecTask node that encapsulates the user computing task submitted to EngineConn.
+
+1. As mentioned earlier, the first step in the preparation phase is to obtain a usable EngineConn from LinkisManager. After ExecTask gets this EngineConn, it will submit the user's computing task to EngineConn through an RPC request.
+2. After EngineConn receives the computing task, it will asynchronously submit it to the underlying computing storage engine through the thread pool, and then immediately return an execution ID.
+3. After ExecTask gets this execution ID, it can then use this ID to asynchronously pull the execution information of the computing task (such as status, progress, log, result set, etc.).
+4. At the same time, EngineConn will monitor the execution of the underlying computing storage engine in real time through multiple registered Listeners. If the computing storage engine does not support registering Listeners, EngineConn will start a daemon thread for the computing task and periodically pull the execution status from the computing storage engine.
+5. EngineConn reports the execution information back in real time to the microservice where Orchestrator is located through RPC requests.
+6. After the Receiver of the microservice receives the execution status, it will broadcast it through the ListenerBus, and the Orchestrator Execution will consume the event and dynamically update the execution status of the Physical tree.
+7. The result set generated by the computing task will be written to storage media such as HDFS on the EngineConn side. EngineConn returns only the result set path through RPC; Execution consumes the event and broadcasts the obtained result set path through ListenerBus, so that the Listener registered by Entrance with Orchestrator can consume the result set path and persist it to JobHistory.
+8. After the execution of the computing task on the EngineConn side is completed, through the same logic, the Execution will be triggered to update the state of the ExecTask node of the Physical tree, so that the Physical tree will continue to execute until the entire tree is completely executed. At this time, Execution will broadcast the completion status of the calculation task through ListenerBus.
+9. After the Entrance registered Listener with the Orchestrator consumes the state event, it updates the job state to JobHistory, and the entire task execution is completed.
+
+----
+
+Finally, let's take a look at how the client side knows the state of the calculation task and obtains the calculation result in time, as shown in the following figure:
+
+![Results acquisition process](../Images/Architecture/Job_submission_preparation_and_execution_process/result_acquisition.png)
+
+The specific process is as follows:
+
+1. The client periodically polls to request Entrance to obtain the status of the computing task.
+2. Once the status flips to success, it sends a request for job information to JobHistory and gets all the result set paths.
+3. Initiate a query file content request to PublicService through the result set path, and obtain the content of the result set.
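+
+A minimal client-side sketch of the status polling loop is shown below. The endpoint path, response handling and authentication are simplified assumptions for illustration; refer to the task submission and execution RestAPI document for the authoritative interface.
+
+```java
+import java.io.IOException;
+import java.net.HttpURLConnection;
+import java.net.URL;
+import java.util.Scanner;
+
+// Illustrative sketch only: paths and parsing are assumptions, not the official client.
+public class JobStatusPoller {
+
+    static String httpGet(String url, String sessionCookie) throws IOException {
+        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
+        conn.setRequestProperty("Cookie", sessionCookie); // login session obtained beforehand
+        try (Scanner scanner = new Scanner(conn.getInputStream(), "UTF-8").useDelimiter("\\A")) {
+            return scanner.hasNext() ? scanner.next() : "";
+        }
+    }
+
+    public static void main(String[] args) throws Exception {
+        String gateway = "http://127.0.0.1:9001";
+        String execId = args[0];   // execution ID returned by the submit request
+        String cookie = args[1];   // e.g. the ticket cookie returned by the login API
+
+        // 1. Poll the task status until it reaches a terminal state.
+        String statusJson;
+        do {
+            Thread.sleep(3000);
+            statusJson = httpGet(gateway + "/api/rest_j/v1/entrance/" + execId + "/status", cookie);
+            System.out.println("status response: " + statusJson);
+        } while (!statusJson.contains("Succeed") && !statusJson.contains("Failed"));
+
+        // 2. On success, request the job information (including result set paths) from JobHistory,
+        //    then 3. read each result set through the PublicService file-content interface.
+    }
+}
+```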
+
+Since then, the entire process of job submission -> preparation -> execution has been completed.
+
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Microservice_Governance_Services/Gateway.md b/Linkis-Doc-master/en_US/Architecture_Documents/Microservice_Governance_Services/Gateway.md
new file mode 100644
index 0000000..02c1db2
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Architecture_Documents/Microservice_Governance_Services/Gateway.md
@@ -0,0 +1,34 @@
+## Gateway Architecture Design
+
+#### Brief
+The Gateway is the primary entry point for Linkis to accept client and external requests, such as receiving job execution requests, and then forwarding the execution requests to specific eligible Entrance services.
+The bottom layer of the entire architecture is implemented based on "SpringCloudGateway". The upper layer is superimposed with module designs related to Http request parsing, session permissions, label routing and WebSocket multiplex forwarding. The overall architecture can be seen as follows.
+### Architecture Diagram
+
+![Gateway diagram of overall architecture](../../Images/Architecture/Gateway/gateway_server_global.png)
+
+#### Architecture Introduction
+- gateway-core: Gateway's core interface definition module, mainly defines the "GatewayParser" and "GatewayRouter" interfaces, corresponding to request parsing and routing according to the request; at the same time, it also provides the permission verification tool class named "SecurityFilter".
+- spring-cloud-gateway: This module integrates all dependencies related to "SpringCloudGateway", process and forward requests of the HTTP and WebSocket protocol types respectively.
+- gateway-server-support: The driver module of Gateway, relies on the spring-cloud-gateway module to implement "GatewayParser" and "GatewayRouter" respectively, among which "DefaultLabelGatewayRouter" provides the function of label routing.
+- gateway-httpclient-support: Provides a generic client-side class for accessing Gateway services over HTTP, on which further implementations can be based.
+- instance-label: External instance label module, providing a service interface named "InsLabelService" which is used to create routing labels and associate them with application instances.
+
+The detailed design involved is as follows:
+
+#### 1、Request Routing And Forwarding (With Label Information)
+First, after passing through the dispatcher of "SpringCloudGateway", the request enters the gateway's filter list and then enters the two main pieces of logic: "GatewayAuthorizationFilter" and "SpringCloudGatewayWebsocketFilter". 
+The filter integrates the "DefaultGatewayParser" and "DefaultGatewayRouter" classes; from Parser to Router, the corresponding parse and route methods are executed. 
+"DefaultGatewayParser" and "DefaultGatewayRouter" classes also contain custom Parser and Router, which are executed in the order of priority.
+Finally, the service instance selected by the "DefaultGatewayRouter" is handed over to the upper layer for forwarding.
+Now, we take the job execution request forwarding with label information as an example, and draw the following flowchart:  
+![Gateway Request Routing](../../Images/Architecture/Gateway/gateway_server_dispatcher.png)
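+
+As a rough illustration of the label matching idea (hypothetical types and method, not the actual "GatewayRouter" interface), instance selection can be thought of as filtering registered instances by the labels carried in the request:
+
+```java
+import java.util.LinkedHashMap;
+import java.util.Map;
+
+// Hypothetical sketch of label-based instance selection, for illustration only.
+public class LabelRouterSketch {
+
+    /** Pick the first instance whose labels contain all labels carried by the request. */
+    static String route(Map<String, String> requestLabels, Map<String, Map<String, String>> instances) {
+        return instances.entrySet().stream()
+                .filter(e -> e.getValue().entrySet().containsAll(requestLabels.entrySet()))
+                .map(Map.Entry::getKey)
+                .findFirst()
+                .orElse(instances.keySet().iterator().next()); // fall back to any instance
+    }
+
+    public static void main(String[] args) {
+        Map<String, Map<String, String>> instances = new LinkedHashMap<>();
+        instances.put("entrance-instance-1", Map.of("route", "cluster-A"));
+        instances.put("entrance-instance-2", Map.of("route", "cluster-B"));
+        System.out.println(route(Map.of("route", "cluster-B"), instances)); // entrance-instance-2
+    }
+}
+```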
+
+
+#### 2、WebSocket Connection Forwarding Management
+By default, "Spring Cloud Gateway" only routes and forwards WebSocket request once, and cannot perform dynamic switching. 
+But under the Linkis's gateway architecture, each information interaction will be accompanied by a corresponding uri address to guide routing to different backend services.
+In addition to the "WebSocketService" which is responsible for connecting with the front-end and the client, 
+and the "WebSocketClient" which is responsible for connecting with the backend service, a series of "GatewayWebSocketSessionConnection" lists are cached in the middle.
+A "GatewayWebSocketSessionConnection" represents the connection between a session and multiple backend service instances.  
+![Gateway WebSocket Forwarding](../../Images/Architecture/Gateway/gatway_websocket.png)
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Microservice_Governance_Services/README.md b/Linkis-Doc-master/en_US/Architecture_Documents/Microservice_Governance_Services/README.md
new file mode 100644
index 0000000..9dc4f83
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Architecture_Documents/Microservice_Governance_Services/README.md
@@ -0,0 +1,32 @@
+## **Background**
+
+Microservice governance includes three main microservices: Gateway, Eureka and Open Feign.
+It is used to solve Linkis's service discovery and registration, unified gateway, request forwarding, inter-service communication, load balancing and other issues. 
+At the same time, Linkis1.0 will also provide support for Nacos; the entire Linkis is a complete microservice architecture in which each business flow requires multiple microservices working together to complete.
+
+## **Architecture diagram**
+
+![](../../Images/Architecture/linkis-microservice-gov-01.png)
+
+## **Architecture Introduction**
+
+1. Linkis Gateway  
+As the gateway entrance of Linkis, Linkis Gateway is mainly responsible for request forwarding, user access authentication and WebSocket communication. 
+The Gateway of Linkis 1.0 also added Label-based routing and forwarding capabilities. 
+A WebSocket router and forwarder is implemented based on Spring Cloud Gateway in Linkis and is used to establish WebSocket connections with the client.
+After a connection is established, it automatically analyzes the client's WebSocket requests and determines, according to the rules, which backend microservice each request should be forwarded to, 
+then forwards the request to the corresponding backend microservice instance.  
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[Linkis Gateway](Gateway.md)
+
+2. Linkis Eureka  
+Mainly responsible for service registration and discovery. Eureka consists of multiple instances(service instances). These service instances can be divided into two types: Eureka Server and Eureka Client. 
+For ease of understanding, we divide Eureka Client into Service Provider and Service Consumer. Eureka Server provides service registration and discovery. 
+The Service Provider registers its own service with Eureka, so that service consumers can find it.
+The Service Consumer obtains a list of registered services from Eureka so that it can consume those services.
+
+3. Linkis RPC  
+Linkis has implemented its own underlying RPC communication scheme based on Feign. As the underlying communication solution, Linkis RPC integrates the SDK into the microservices that need it. 
+A microservice can be both a request caller and a request receiver.
+As a request caller, a microservice requests the Receiver of the target microservice through a Sender.
+As a request receiver, a microservice provides a Receiver to process the requests sent by Senders, completing synchronous or asynchronous responses.
+   
+   ![](../../Images/Architecture/linkis-microservice-gov-03.png)
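+
+   A minimal sketch of this Sender/Receiver pattern is shown below (illustrative only; the actual Linkis RPC interfaces differ):
+
+   ```java
+   // Hypothetical sketch of the Sender/Receiver pattern described above.
+   interface Sender {
+       Object ask(Object message);   // synchronous request, waits for the reply
+       void send(Object message);    // one-way asynchronous message
+   }
+
+   interface Receiver {
+       Object receiveAndReply(Object message);  // handle a request and return a response
+       void receive(Object message);            // handle a one-way message, no response
+   }
+
+   // A microservice can play both roles: it calls other services through a Sender,
+   // and exposes a Receiver so that other services can call it.
+   class EchoReceiver implements Receiver {
+       public Object receiveAndReply(Object message) { return "echo: " + message; }
+       public void receive(Object message) { System.out.println("got: " + message); }
+   }
+   ```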
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/BML.md b/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/BML.md
new file mode 100644
index 0000000..69e671d
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/BML.md
@@ -0,0 +1,93 @@
+## Background
+
+BML (Material Library Service) is the material management system of Linkis. It is mainly used to store various file data of users, including user scripts, resource files, third-party Jar packages, etc., and can also store the class libraries that an engine needs at runtime.
+
+It has the following functions:
+
+1) Support for various types of files. Both text and binary files are supported. Users in the big data field can store their script files and material compression packages in the system.
+
+2) The service is stateless and supports multi-instance deployment to achieve high availability. When deployed with multiple instances, each instance provides services independently without interfering with the others, and all information is stored in the database for sharing.
+
+3) Various ways of use. Both a REST interface and an SDK are provided; users can choose according to their needs.
+
+4) Files are appended to avoid too many small HDFS files. Too many small HDFS files would reduce the overall performance of HDFS, so BML adopts a file-append approach that combines multiple versions of a resource file into one large file, effectively reducing the number of files on HDFS.
+
+5) Accurate permission control and safe storage of user resource file content. Resource files often contain important content that users may want only themselves to be able to read.
+
+6) Provide life cycle management of file upload, update, download and other operational tasks.
+
+## Architecture diagram
+
+![BML Architecture Diagram](../../Images/Architecture/bml-02.png)
+
+## Schema description
+
+1. The Service layer includes resource management, uploading resources, downloading resources, sharing resources, and project resource management.
+
+Resource management is responsible for basic operations such as adding, deleting, modifying, and checking resources, controlling access rights, and whether files are out of date.
+
+2. File version control
+   Each BML resource file has version information. Each update operation on the same resource generates a new version; historical version query and download operations are also supported. BML uses the version information table to record the offset and size of each version of the resource file within its HDFS storage, so that multiple versions of data can be stored in one HDFS file.
+
+3. Resource file storage
+   HDFS files are mainly used as actual data storage. HDFS files can effectively ensure that the material library files are not lost. The files are appended to avoid too many small HDFS files.
+
+### Core Process
+
+**upload files:**
+
+1. Determine the operation type of the file uploaded by the user: a first upload or an update upload. For a first upload, a new resource information record needs to be added; the system generates a globally unique resource_id and a resource_location for this resource. The first version A1 of resource A is stored at the resource_location in the HDFS file system, and after storage the first version is marked as V00001. If it is an update upload, the latest version of the existing resource is looked up and the new content is appended as a new version on top of it.
+
+2. Upload the file stream to the specified HDFS file. If it is an update, it will be added to the end of the last content by file appending.
+
+3. Add a new version record, each upload will generate a new version record. In addition to recording the metadata information of this version, the most important thing is to record the storage location of the version of the file, including the file path, start location, and end location.
+
+**download file:**
+
+1. When users download resources, they need to specify two parameters: one is resource_id and the other is version. If version is not specified, the latest version will be downloaded by default.
+
+2. After the user passes in the two parameters resource_id and version, the system queries the resource_version table, finds the corresponding resource_location, start_byte and end\_byte, uses the skipByte method of stream processing to skip the first (start_byte - 1) bytes, and then reads up to end_byte. After the read succeeds, the stream is returned to the user (see the sketch after this list).
+
+3. Insert a successful download record in resource_download_history
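+
+The byte-range read described in step 2 can be sketched as follows. This is a simplified illustration using a local file stream; BML actually reads the slice from the HDFS file recorded in resource_location.
+
+```java
+import java.io.FileInputStream;
+import java.io.IOException;
+import java.io.InputStream;
+
+// Simplified illustration: skip to start_byte, then read (end_byte - start_byte + 1) bytes,
+// i.e. exactly one version slice of an appended resource file.
+public class VersionSliceReader {
+
+    public static byte[] readVersion(String resourceLocation, long startByte, long endByte) throws IOException {
+        try (InputStream in = new FileInputStream(resourceLocation)) {
+            long toSkip = startByte - 1;              // positions are 1-based, as described above
+            while (toSkip > 0) {
+                long skipped = in.skip(toSkip);
+                if (skipped <= 0) {
+                    throw new IOException("Unable to skip to start_byte " + startByte);
+                }
+                toSkip -= skipped;
+            }
+            int length = (int) (endByte - startByte + 1);
+            byte[] buffer = new byte[length];
+            int read = 0;
+            while (read < length) {
+                int n = in.read(buffer, read, length - read);
+                if (n < 0) {
+                    break;                            // reached end of file early
+                }
+                read += n;
+            }
+            return buffer;
+        }
+    }
+}
+```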
+
+## Database Design
+
+1. Resource information table (resource)
+
+| Field name | Function | Remarks |
+|-------------------|------------------------------|----------------------------------|
+| resource_id | A string that uniquely identifies a resource globally | UUID can be used for identification |
+| resource_location | The location where resources are stored | For example, hdfs:///tmp/bdp/\${USERNAME}/ |
+| owner | The owner of the resource | e.g. zhangsan |
+| create_time | Record creation time | |
+| is_share | Whether to share | 0 means not to share, 1 means to share |
+| update\_time | Last update time of the resource | |
+| is\_expire | Whether the record resource expires | |
+| expire_time | Record resource expiration time | |
+
+2. Resource version information table (resource_version)
+
+| Field name | Function | Remarks |
+|-------------------|--------------------|----------|
+| resource_id | Uniquely identifies the resource | Joint primary key |
+| version | The version of the resource file | |
+| start_byte | Start byte count of resource file | |
+| end\_byte | End bytes of resource file | |
+| size | Resource file size | |
+| resource_location | Resource file placement location | |
+| start_time | Record upload start time | |
+| end\_time | End time of record upload | |
+| updater | Record update user | |
+
+3. Resource download history table (resource_download_history)
+
+| Field | Function | Remarks |
+|-------------|---------------------------|--------------------------------|
+| resource_id | Record the resource_id of the downloaded resource | |
+| version | Record the version of the downloaded resource | |
+| downloader | Record downloaded users | |
+| start\_time | Record download time | |
+| end\_time | Record end time | |
+| status | Whether the record is successful | 0 means success, 1 means failure |
+| err\_msg | Log failure reason | null means success, otherwise log failure reason |
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Cache.md b/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Cache.md
new file mode 100644
index 0000000..71d83d3
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Cache.md
@@ -0,0 +1,95 @@
+## **CSCache Architecture**
+### **Issues that need to be resolved**
+
+### 1.1 Memory structure issues to be solved:
+
+1. Support splitting by ContextType: speed up storage and query performance
+
+2. Support splitting by different ContextIDs: metadata needs to be isolated between different ContextIDs
+
+3. Support LRU: Recycle according to specific algorithm
+
+4. Support searching by keywords: Support indexing by keywords
+
+5. Support indexing: support indexing directly through ContextKey
+
+6. Support traversal: need to support traversal according to ContextID and ContextType
+
+### 1.2 Loading and parsing problems to be solved:
+
+1. Support parsing ContextValue into memory data structure: It is necessary to complete the parsing of ContextKey and value to find the corresponding keywords.
+
+2. Need to interface with the Persistence module to complete the loading and analysis of the ContextID content
+
+### 1.3 Metric and cleaning mechanism issues to be solved:
+
+1. When JVM memory is not enough, it can be cleaned based on memory usage and frequency of use
+
+2. Support statistics on the memory usage of each ContextID
+
+3. Support statistics on the frequency of use of each ContextID
+
+## **ContextCache Architecture**
+
+The architecture of ContextCache is shown in the following figure:
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-01.png)
+
+1. ContextService: complete the provision of external interfaces, including additions, deletions, and changes;
+
+2. Cache: complete the storage of context information, map storage through ContextKey and ContextValue
+
+3. Index: The established keyword index, which stores the mapping between the keywords of the context information and the ContextKey;
+
+4. Parser: complete the keyword analysis of the context information;
+
+5. LoadModule: completes the loading of information from the persistence layer when ContextCache does not have the corresponding ContextID information;
+
+6. AutoClear: When the Jvm memory is insufficient, complete the on-demand cleaning of ContextCache;
+
+7. Listener: collects Metric information of ContextCache, such as memory usage and number of accesses.
+
+## **ContextCache storage structure design**
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-02.png)
+
+The storage structure of ContextCache is divided into three layers:
+
+**ContextCache:** stores the mapping relationship between ContextID and ContextIDValue, and can complete the recovery of ContextID according to the LRU algorithm;
+
+**ContextIDValue:** holds the CSKeyValueContext that stores all the context information and indexes of a ContextID, and records the memory usage and access statistics of that ContextID.
+
+**CSKeyValueContext:** Contains the CSInvertedIndexSet index set that stores and supports keywords according to type, and also contains the storage set CSKeyValueMapSet that stores ContextKey and ContextValue.
+
+CSInvertedIndexSet: categorize and store keyword indexes through CSType
+
+CSKeyValueMapSet: categorize and store context information through CSType
+
+## **ContextCache UML Class Diagram Design**
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-03.png)
+
+## **ContextCache Timing Diagram**
+
+The following figure draws the overall process of using ContextID, KeyWord, and ContextType to check the corresponding ContextKeyValue in ContextCache.
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-04.png)
+
+Note: The ContextIDValueGenerator will go to the persistence layer to pull the Array[ContextKeyValue] of the ContextID, and parse the ContextKeyValue key storage index and content through ContextKeyValueParser.
+
+The other interface processes provided by ContextCacheService are similar, so I won't repeat them here.
+
+## **KeyWord parsing logic**
+
+The specific entity bean of ContextValue needs to use the annotation \@keywordMethod on the corresponding get method that can be used as the keyword. For example, the getTableName method of Table must be annotated with \@keywordMethod.
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-05.png)
+
+When ContextKeyValueParser parses a ContextKeyValue, it scans all the methods of the passed-in object that are annotated with keywordMethod, calls each get method, obtains the toString of the returned object, parses it according to user-selectable rules (delimiter-based splitting and regular expressions are supported), and stores the result in the keyword collection.
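+
+For example, a hypothetical bean might look like the sketch below (illustrative only; the real \@keywordMethod annotation is defined in the cs core module, see the precautions that follow):
+
+```java
+import java.lang.annotation.ElementType;
+import java.lang.annotation.Retention;
+import java.lang.annotation.RetentionPolicy;
+import java.lang.annotation.Target;
+
+// Hypothetical illustration of the @keywordMethod convention described above.
+@Retention(RetentionPolicy.RUNTIME)
+@Target(ElementType.METHOD)
+@interface keywordMethod {}
+
+class CSTable {
+    private final String tableName;
+
+    CSTable(String tableName) {
+        this.tableName = tableName;
+    }
+
+    // Annotated, parameterless getter: the toString() of its return value is used as the keyword.
+    @keywordMethod
+    public String getTableName() {
+        return tableName;
+    }
+}
+```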
+
+Precautions:
+
+1. The annotation will be defined in the core module of cs
+
+2. The annotated get method cannot take parameters
+
+3. The toString method of the object returned by the get method must return the keyword
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Client.md b/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Client.md
new file mode 100644
index 0000000..058f9ba
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Client.md
@@ -0,0 +1,61 @@
+## **CSClient design ideas and implementation**
+
+
+CSClient is the client through which each microservice interacts with the CSServer group. CSClient needs to provide the following capabilities.
+
+1. The ability of microservices to apply for a context object from cs-server
+
+2. The ability of microservices to register context information with cs-server
+
+3. The ability of microservices to update context information to cs-server
+
+4. The ability of microservices to obtain context information from cs-server
+
+5. Certain special microservices can sniff operations that have modified context information in cs-server
+
+6. CSClient can give a clear prompt when the csserver cluster fails
+
+7. CSClient needs to be able to copy all the context information of csid1 into a new csid2 for scheduling and execution
+
+> The overall approach is to send HTTP requests through linkis-httpclient, which ships with Linkis, and to send requests and receive responses by implementing the various Action and Result entity classes.
+
+### 1. The ability to apply for context objects
+
+To apply for a context object: for example, when a user creates a new workflow on the front end, dss-server needs to apply to cs-server for a context object. When applying, the identification information of the workflow (project name, workflow name) is sent through CSClient to the CSServer (at this point the gateway forwards the request to a random instance, because no csid information is carried yet). Once the application context returns the correct [...]
+
+### 2. Ability to register contextual information
+
+> The ability to register context: for example, a user uploads a resource file on the front-end page, the file content is uploaded to dss-server, and dss-server stores the content in bml. The resourceid and version obtained from bml then need to be registered with cs-server, which is where the registration capability of csclient is used: a csid and a cskey are passed in
+> and registered together with a csvalue (the resourceid and version).
+
+### 3. Ability to update registered context
+
+> The ability to update registered context information: for example, a user has uploaded a resource file test.jar and csserver already holds its registered information. If the user updates the resource file while editing the workflow, cs-server needs to update this content accordingly. In this case the update interface of csclient is called.
+
+### 4. The ability to get context
+
+The context information registered with csserver needs to be read during variable substitution, resource file download, and when downstream nodes use information generated by upstream nodes. For example, when the engine side executes code and needs to download bml resources, it interacts with csserver through csclient to obtain the resourceid and version of the file in bml and then downloads it.
+
+### 5. Certain special microservices can sniff operations that have modified context information in cs-server
+
+This capability is best explained with an example. A widget node has a strong linkage with its upstream sql node: the user writes a sql statement in the sql node, and the metadata of its result set consists of the fields a, b and c. The widget node behind it is bound to this sql, and the user can edit these three fields on the page. If the user then changes the sql statement so that the metadata becomes the four fields a, b, c and d, the user currently has to refresh manually. We hope that if the script is changed, [...]
+
+### 6. CSClient needs to provide a copy of all context information of csid1 as a new csid2 for scheduling execution
+
+Once a user publishes a project, they want to tag all of the project's information, similar to a git tag. The resource files and custom variables captured at that point will no longer change, but some dynamic information, such as generated result sets, will still be written, so the content of the csid keeps being updated. csclient therefore needs to provide an interface that copies all the context information of csid1, for other microservices to call.
+
+## **Implementation of ClientListener Module**
+
+A client sometimes wants to know as soon as possible that a certain csid and cskey have changed in cs-server. For example, the csclient of visualis needs to know that the upstream sql node has changed, so it has to be notified. The server has a listener module, and the client needs one as well: if a client wants to monitor changes to a certain cskey of a certain csid, it needs to register that cskey with the callbackEngine [...]
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-01.png)
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-02.png)
+
+## **Implementation of GatewayRouter**
+
+
+The Gateway plug-in implements Context forwarding, and its forwarding logic is carried out through the GatewayRouter. Two cases need to be distinguished. The first is applying for a context object: at this point the information carried by the CSClient contains no csid, so routing is decided from the eureka registration information, and the first request is sent to a random microservice instance.
+The second case is that a ContextID is carried. The csid must be parsed: the information of each instance is obtained by string splitting, and eureka is then used to check, based on the instance information, whether that microservice instance still exists. If it does, the request is sent to that microservice instance.
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-03.png)
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_HighAvailable.md b/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_HighAvailable.md
new file mode 100644
index 0000000..76c85c3
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_HighAvailable.md
@@ -0,0 +1,86 @@
+## **CS HA Architecture Design**
+
+### 1. CS HA architecture summary
+
+#### (1) CS HA architecture diagram
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-01.png)
+
+#### (2) Problems to be solved
+
+- HA of the Context instance object
+
+- CSID generation request when the Client creates a workflow
+
+- Alias list of CS Servers
+
+- Unified CSID generation and parsing rules
+
+#### (3) Main design ideas
+
+① Load balancing
+
+When the client creates a new workflow, it requests the HA module of a server chosen at random (with equal probability) to generate a new HAID. The HAID information includes the main server information (hereinafter referred to as the main instance), a candidate instance, which is the instance with the lowest load among the remaining servers, and a corresponding ContextID. The generated HAID is bound to the workflow and persisted to the database, and then all [...]
+
+② High availability
+
+In subsequent operations, when the client or gateway determines that the main instance is unavailable, the operation request is forwarded to the standby instance for processing, thereby achieving high service availability. The HA module of the standby instance will first verify the validity of the request based on the HAID information.
+
+③ Alias mechanism
+
+An alias mechanism is adopted for the machines: the Instance information contained in the HAID uses custom aliases, and an alias mapping queue is maintained in the background. The client uses the HAID when interacting with the backend, while the backend components use the ContextID when interacting with each other; when a concrete operation is executed, a dynamic proxy mechanism converts the HAID into the ContextID for processing.
+
+### 2. Module design
+
+#### (1) Module diagram
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-02.png)
+
+#### (2) Specific modules
+
+①ContextHAManager module
+
+Provide interface for CS Server to call to generate CSID and HAID, and provide alias conversion interface based on dynamic proxy;
+
+Call the persistence module interface to persist CSID information;
+
+②AbstractContextHAManager module
+
+An abstraction of ContextHAManager that supports multiple ContextHAManager implementations;
+
+③InstanceAliasManager module
+
+An RPC module that provides the Instance-alias conversion interface, maintains the alias mapping queue, provides queries of aliases and CS Server instances, and provides an interface to verify whether a host is valid;
+
+④HAContextIDGenerator module
+
+Generates a new HAID, encapsulates it in the format agreed with the client, and returns it to the client. The HAID structure is as follows:
+
+\${length of first instance}\${length of second instance}{instance alias 1}{instance alias 2}{actual ID}, where the actual ID is set to the ContextID key;
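+
+The sketch below shows how an HAID of this shape could be assembled and taken apart. It assumes, purely for illustration, that the two length fields are zero-padded two-digit numbers; the actual Linkis encoding may differ.
+
+```java
+import java.util.Arrays;
+
+// Illustrative HAID codec: {len(alias1)}{len(alias2)}{alias1}{alias2}{contextIdKey}
+class HAIDCodec {
+    static String encode(String alias1, String alias2, String contextIdKey) {
+        return String.format("%02d%02d%s%s%s",
+                alias1.length(), alias2.length(), alias1, alias2, contextIdKey);
+    }
+
+    static String[] decode(String haid) {
+        int len1 = Integer.parseInt(haid.substring(0, 2));
+        int len2 = Integer.parseInt(haid.substring(2, 4));
+        String alias1 = haid.substring(4, 4 + len1);
+        String alias2 = haid.substring(4 + len1, 4 + len1 + len2);
+        String contextIdKey = haid.substring(4 + len1 + len2);
+        return new String[]{alias1, alias2, contextIdKey};
+    }
+
+    public static void main(String[] args) {
+        String haid = encode("cs1", "cs2", "8524");          // hypothetical aliases and ID
+        System.out.println(haid);                            // 0303cs1cs28524
+        System.out.println(Arrays.toString(decode(haid)));   // [cs1, cs2, 8524]
+    }
+}
+```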
+
+⑤ContextHAChecker module
+
+Provides the HAID verification interface. For every request received, it checks whether the ID format is valid and whether the current host is the primary or the backup instance: if it is the primary instance, verification passes; if it is the backup instance, verification passes only after the primary instance is confirmed to be invalid.
+
+⑥BackupInstanceGenerator module
+
+Generate a backup instance and attach it to the CSID information;
+
+⑦MultiTenantBackupInstanceGenerator interface
+
+(Reserved interface, not implemented yet)
+
+### 3. UML Class Diagram
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-03.png)
+
+### 4. HA module operation sequence diagram
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-04.png)
+
+CSID generated for the first time:
+The client sends a request, and the Gateway forwards it to any server. The HA module generates the HAID, including the main instance, the backup instance and the CSID, and completes the binding of the workflow and the HAID.
+
+When the client sends a change request and the Gateway determines that the main Instance is unavailable, it forwards the request to the standby Instance for processing. After the HA module on the standby Instance verifies that the HAID is valid, it loads the instance information and processes the request.
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Listener.md b/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Listener.md
new file mode 100644
index 0000000..933d384
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Listener.md
@@ -0,0 +1,33 @@
+## **Listener Architecture**
+
+In DSS, when a node changes its metadata information, the context information of the entire workflow changes. We expect all nodes to perceive the change and automatically update their metadata. This is implemented with the listener pattern, together with a heartbeat-based polling mechanism, to maintain the metadata consistency of the context information.
+
+### **Client registration itself, CSKey registration and CSKey update process**
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-01.png)
+
+The main process is as follows:
+
+1. Registration operation: the clients client1, client2, client3, and client4 register themselves and the CSKeys they want to monitor with the csserver through HTTP requests. The Service obtains the callback engine instance through the external interface and registers the clients and their corresponding CSKeys.
+
+2. Update operation: If the ClientX node updates the CSKey content, the Service service updates the CSKey cached by the ContextCache, and the ContextCache delivers the update operation to the ListenerBus. The ListenerBus notifies the specific listener to consume (that is, the ContextKeyCallbackEngine updates the CSKeys corresponding to the Client). The consumed event will be automatically removed.
+
+3. Heartbeat mechanism:
+
+All clients use heartbeat information to detect whether the value of CSKeys in ContextKeyCallbackEngine has changed.
+
+ContextKeyCallbackEngine returns the updated CSKeys value to all registered clients through the heartbeat mechanism. If there is a client's heartbeat timeout, remove the client.
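+
+The sketch below illustrates the register / update / heartbeat flow described above. The class and method names are illustrative assumptions and do not correspond to the actual ContextKeyCallbackEngine implementation.
+
+```java
+import java.util.*;
+import java.util.concurrent.ConcurrentHashMap;
+
+// Illustrative callback engine: clients register CSKeys, updates are queued per client,
+// and heartbeats both collect pending updates and evict timed-out clients.
+class CallbackEngine {
+    private final Map<String, Set<String>> clientKeys = new ConcurrentHashMap<>();     // client -> monitored CSKeys
+    private final Map<String, Set<String>> pendingUpdates = new ConcurrentHashMap<>(); // client -> changed CSKeys
+    private final Map<String, Long> lastHeartbeat = new ConcurrentHashMap<>();
+    private static final long TIMEOUT_MS = 60_000;
+
+    /** 1. Registration: a client declares the CSKeys it wants to monitor. */
+    void register(String client, Set<String> csKeys) {
+        clientKeys.put(client, new HashSet<>(csKeys));
+        lastHeartbeat.put(client, System.currentTimeMillis());
+    }
+
+    /** 2. Update: a changed CSKey is marked for every client that monitors it. */
+    void onKeyUpdated(String csKey) {
+        clientKeys.forEach((client, keys) -> {
+            if (keys.contains(csKey)) {
+                pendingUpdates.computeIfAbsent(client, c -> ConcurrentHashMap.newKeySet()).add(csKey);
+            }
+        });
+    }
+
+    /** 3. Heartbeat: return (and clear) the changed CSKeys; evict clients whose heartbeat timed out. */
+    Set<String> heartbeat(String client) {
+        long now = System.currentTimeMillis();
+        lastHeartbeat.put(client, now);
+        for (Iterator<Map.Entry<String, Long>> it = lastHeartbeat.entrySet().iterator(); it.hasNext();) {
+            Map.Entry<String, Long> e = it.next();
+            if (now - e.getValue() > TIMEOUT_MS) {
+                clientKeys.remove(e.getKey());
+                pendingUpdates.remove(e.getKey());
+                it.remove();
+            }
+        }
+        Set<String> changed = pendingUpdates.remove(client);
+        return changed == null ? Collections.emptySet() : changed;
+    }
+}
+```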
+
+### **Listener UML class diagram**
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-02.png)
+
+Interface: ListenerManager
+
+Externally: provides a ListenerBus for event delivery.
+
+Internally: provides a callback engine for specific event registration, access, update, and heartbeat processing logic.
+
+## **Listener callbackengine timing diagram**
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-03.png)
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Persistence.md b/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Persistence.md
new file mode 100644
index 0000000..b57c8c7
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Persistence.md
@@ -0,0 +1,8 @@
+## **CSPersistence Architecture**
+
+### Persistence UML diagram
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-persistence-01.png)
+
+
+The Persistence module mainly defines ContextService persistence related operations. The entities mainly include CSID, ContextKeyValue, CSResource, and CSTable.
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Search.md b/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Search.md
new file mode 100644
index 0000000..8dea6f2
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Search.md
@@ -0,0 +1,127 @@
+## **CSSearch Architecture**
+### **Overall architecture**
+
+As shown below:
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-01.png)
+
+1. ContextSearch: The query entry, accepts the query conditions defined in the Map form, and returns the corresponding results according to the conditions.
+
+2. Building module: Each condition type corresponds to a Parser, which is responsible for converting the condition in the form of Map into a Condition object, which is implemented by calling the logic of ConditionBuilder. Conditions with complex logical relationships will use ConditionOptimizer to optimize query plans based on cost-based algorithms.
+
+3. Execution module: Filter out the results that match the conditions from the Cache. According to different query targets, there are three execution modes: Ruler, Fetcher and Matcher. The specific logic is described later.
+
+4. Evaluation module: Responsible for calculation of conditional execution cost and statistics of historical execution status.
+
+### **Query Condition Definition (ContextSearchCondition)**
+
+A query condition specifies how to filter out the part that meets the condition from a ContextKeyValue collection. The query conditions can be used to form more complex query conditions through logical operations.
+
+1. Support ContextType, ContextScope, KeyWord matching
+
+    1. Corresponding to a Condition type
+
+    2. In Cache, these should have corresponding indexes
+
+2. Support contains/regex matching mode for key
+
+    1. ContainsContextSearchCondition: contains a string
+
+    2. RegexContextSearchCondition: match a regular expression
+
+3. Support logical operations of or, and and not
+
+    1. Unary operation UnaryContextSearchCondition:
+
+> Support logical operations of a single parameter, such as NotContextSearchCondition
+
+    2. Binary operation BinaryContextSearchCondition:
+
+> Support the logical operation of two parameters, defined as LeftCondition and RightCondition, such as OrContextSearchCondition and AndContextSearchCondition
+
+4. Each logical operation corresponds to an implementation class of the above subclasses
+
+5. The UML class diagram of this part is as follows:
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-02.png)
+
+### **Construction of query conditions**
+
+1. Support construction through ContextSearchConditionBuilder: When constructing, if multiple ContextType, ContextScope, KeyWord, contains/regex matches are declared at the same time, they will be automatically connected by And logical operation
+
+2. Support logical operations between Conditions and return new Conditions: And, Or and Not (considering the form of condition1.or(condition2), the top-level interface of Condition is required to define logical operation methods)
+
+3. Support to build from Map through ContextSearchParser corresponding to each underlying implementation class
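+
+As a self-contained illustration of the condition1.or(condition2) style of composition described above, the sketch below defines a toy Condition interface with and/or/not operations. It is not the actual Linkis ContextSearchCondition or ContextSearchConditionBuilder API.
+
+```java
+// Toy composite conditions, for illustration only.
+interface Condition {
+    boolean matches(String key);
+    default Condition and(Condition other) { return k -> this.matches(k) && other.matches(k); }
+    default Condition or(Condition other)  { return k -> this.matches(k) || other.matches(k); }
+    default Condition not()                { return k -> !this.matches(k); }
+}
+
+class Conditions {
+    static Condition contains(String s) { return k -> k.contains(s); }   // cf. ContainsContextSearchCondition
+    static Condition regex(String p)    { return k -> k.matches(p); }    // cf. RegexContextSearchCondition
+
+    public static void main(String[] args) {
+        // contains("table") AND (regex("db1\..*") OR NOT contains("tmp"))
+        Condition c = Conditions.contains("table")
+                .and(Conditions.regex("db1\\..*").or(Conditions.contains("tmp").not()));
+        System.out.println(c.matches("db1.table_orders")); // true
+        System.out.println(c.matches("tmp_view"));         // false
+    }
+}
+```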
+
+### **Execution of query conditions**
+
+1. Three function modes of query conditions:
+
+    1. Ruler: Filter out eligible ContextKeyValue sub-Arrays from an Array
+
+    2. Matcher: Determine whether a single ContextKeyValue meets the conditions
+
+    3. Fetcher: Filter out an Array of eligible ContextKeyValue from ContextCache
+
+2. Each bottom-level Condition has a corresponding Execution, responsible for maintaining the corresponding Ruler, Matcher, and Fetcher.
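+
+The signatures below sketch what the three execution modes could look like for a single "key contains keyword" condition. The interface and class names are illustrative assumptions, not the actual Linkis Execution interfaces.
+
+```java
+import java.util.*;
+import java.util.stream.Collectors;
+
+// Illustrative shapes for the three execution modes of a condition.
+class ContextKeyValue { String key; Object value; }
+class ContextCacheStub { List<ContextKeyValue> all() { return new ArrayList<>(); } }
+
+interface Matcher { boolean matches(ContextKeyValue kv); }                        // judge a single element
+interface Ruler   { List<ContextKeyValue> rule(List<ContextKeyValue> source); }   // filter a given Array
+interface Fetcher { List<ContextKeyValue> fetch(ContextCacheStub cache); }        // pull matches from the cache
+
+class KeyContainsExecution {
+    final Matcher matcher;
+    final Ruler ruler;
+    final Fetcher fetcher;
+
+    KeyContainsExecution(String keyword) {
+        matcher = kv -> kv.key != null && kv.key.contains(keyword);
+        ruler   = source -> source.stream().filter(matcher::matches).collect(Collectors.toList());
+        fetcher = cache -> ruler.rule(cache.all());
+    }
+}
+```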
+
+### **Query entry ContextSearch**
+
+Provide a search interface, receive Map as a parameter, and filter out the corresponding data from the Cache.
+
+1. Use Parser to convert the condition in the form of Map into a Condition object
+
+2. Obtain cost information through Optimizer, and determine the order of query according to the cost information
+
+3. After executing the corresponding Ruler/Fetcher/Matcher logic through the corresponding Execution, the search result is obtained
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-03.png)
+
+### **Query Optimization**
+
+1. OptimizedContextSearchCondition maintains the Cost and Statistics information of the condition:
+
+    1. Cost information: CostCalculator is responsible for judging whether a certain Condition can calculate Cost, and if it can be calculated, it returns the corresponding Cost object
+
+    2. Statistics information: start/end/execution time, number of input lines, number of output lines
+
+2. Implement a CostContextSearchOptimizer, whose optimize method is based on the cost of the Condition to optimize the Condition and convert it into an OptimizedContextSearchCondition object. The specific logic is described as follows:
+
+    1. Disassemble a complex Condition into a tree structure based on the combination of logical operations. Each leaf node is a basic simple Condition; each non-leaf node is a logical operation.
+
+> Tree A as shown in the figure below is a complex condition composed of five simple conditions of ABCDE through various logical operations.
+
+![](./../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-04.png)
+<center>(Tree A)</center>
+
+    2. The execution of these Conditions is actually depth-first, traversing the tree from left to right. Moreover, according to the commutative rules of logical operations, the left and right order of the child nodes of a node in the Condition tree can be swapped, so all possible trees in all possible execution orders can be enumerated.
+
+> Tree B as shown in the figure below is another possible sequence of tree A above, which is exactly the same as the execution result of tree A, except that the execution order of each part has been adjusted.
+
+![](./../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-05.png)
+<center>(Tree B)</center>
+
+    3. For each tree, the cost is calculated from the leaf nodes and rolled up to the root node, which gives the final cost of the tree; the tree with the smallest cost is then selected as the optimal execution order.
+
+> The rules for calculating node cost are as follows:
+
+1. For leaf nodes, each node has two attributes: Cost and Weight. Cost is the cost calculated by the CostCalculator. Weight is assigned according to the execution order of the nodes; the current default is 1 for the left node and 0.5 for the right node, and how to adjust this will be considered later (the reason for assigning weights is that in some cases the condition on the left can already determine whether the entire combined logic matches, so the condition on the right does not [...]
+
+2. For non-leaf nodes, Cost = the sum of Cost × Weight of all child nodes; the weight assignment logic is the same as for leaf nodes. A small sketch of this roll-up appears at the end of this section.
+
+> Taking tree A and tree B as examples, the costs of these two trees are calculated as shown in the figure below, where the number in each node is Cost\|Weight, assuming that the costs of the five simple conditions A, B, C, D and E are 10, 100, 50, 10, and 100 respectively. It can be concluded that the cost of tree B is less than that of tree A, so tree B is the better plan.
+
+
+<center class="half">
+    <img src="./../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-06.png" width="300"> <img src="./../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-07.png" width="300">
+</center>
+
+3. Use CostCalculator to measure the cost of simple conditions:
+
+    1. The condition acting on the index: the cost is determined according to the distribution of the index value. For example, when the length of the Array obtained by condition A from the Cache is 100 and condition B is 200, then the cost of condition A is less than B.
+
+    2. Conditions that need to be traversed:
+
+        1. An initial Cost is assigned according to the matching mode of the condition itself: for example, Regex is 100, Contains is 10, etc. (the specific values will be adjusted as the implementation matures)
+
+        2. Based on the efficiency of historical queries (throughput per unit time), the real-time Cost is obtained by continuously adjusting the initial Cost.
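+
+The following sketch shows the cost roll-up over a condition tree, using the default weights (left 1, right 0.5) and the example leaf costs from the text. The tree shapes are illustrative; this is not the actual CostContextSearchOptimizer.
+
+```java
+// Leaf cost comes from a CostCalculator; non-leaf cost = sum of child cost × weight (left 1.0, right 0.5).
+class CostNode {
+    final double ownCost;          // used for leaf nodes only
+    final CostNode left, right;
+
+    CostNode(double ownCost) { this(ownCost, null, null); }
+    CostNode(double ownCost, CostNode left, CostNode right) {
+        this.ownCost = ownCost; this.left = left; this.right = right;
+    }
+
+    double cost() {
+        if (left == null && right == null) return ownCost;
+        return left.cost() * 1.0 + right.cost() * 0.5;
+    }
+
+    public static void main(String[] args) {
+        // Leaf costs follow the example in the text: A=10, B=100, C=50, D=10, E=100.
+        CostNode a = new CostNode(10), b = new CostNode(100), c = new CostNode(50),
+                 d = new CostNode(10), e = new CostNode(100);
+        // Two orderings of the same logical combination (tree shapes are made up for illustration).
+        CostNode tree1 = new CostNode(0, new CostNode(0, a, b), new CostNode(0, c, new CostNode(0, d, e)));
+        CostNode tree2 = new CostNode(0, new CostNode(0, a, b), new CostNode(0, new CostNode(0, d, e), c));
+        System.out.println(tree1.cost()); // 100.0  -- swapping children changes the cost,
+        System.out.println(tree2.cost()); // 102.5  -- but not the result set
+    }
+}
+```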
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Service.md b/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Service.md
new file mode 100644
index 0000000..05c6168
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Service.md
@@ -0,0 +1,53 @@
+## **ContextService Architecture**
+
+### **Horizontal Division**
+
+Horizontally divided into three modules: Restful, Scheduler, Service
+
+#### Restful Responsibilities:
+
+    Encapsulate the request as httpjob and submit it to the Scheduler
+
+#### Scheduler Responsibilities:
+
+    Find the corresponding service through the ServiceName of the httpjob protocol to execute the job
+
+#### Service Responsibilities:
+
+    The module that actually executes the request logic, encapsulates the ResponseProtocol, and wakes up the wait thread in Restful
+
+### **Vertical Division**
+Vertically divided into 4 modules: Listener, History, ContextId, Context:
+
+#### Listener responsibilities:
+
+1. Responsible for the registration and binding of the client side (write to the database and register in the CallbackEngine)
+
+2. Heartbeat interface, return Array[ListenerCallback] through CallbackEngine
+
+#### History Responsibilities:
+Create and remove history, operate Persistence for DB persistence
+
+#### ContextId Responsibilities:
+Mainly docking with Persistence for ContextId creation, update and removal, etc.
+
+#### Context responsibility:
+
+1. For removal, reset and other methods, first operate Persistence for DB persistence, and update ContextCache
+
+2. Encapsulate the query condition and go to the ContextSearch module to obtain the corresponding ContextKeyValue data
+
+The steps for requesting access are as follows:
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-01.png)
+
+## **UML Class Diagram**
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-02.png)
+
+## **Scheduler thread model**
+
+It is necessary to ensure that Restful's thread pool is not exhausted.
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-03.png)
+
+The sequence diagram is as follows:
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-04.png)
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/README.md b/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/README.md
new file mode 100644
index 0000000..c6af94c
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/ContextService/README.md
@@ -0,0 +1,123 @@
+## **Background**
+
+### **What is Context**
+
+Context is all the information necessary to keep an operation going. For example, when reading three books at the same time, the page number you have reached in each book is the context for continuing to read that book.
+
+### **Why do you need CS (Context Service)?**
+
+CS is used to solve the problem of data and information sharing across multiple systems in a data application development process.
+
+For example, system B needs to use a piece of data generated by system A. The usual practice is as follows:
+
+1. B system calls the data access interface developed by A system;
+
+2. System B reads the data written by system A into a shared storage.
+
+With CS, systems A and B only need to interact with CS: they write the data and information to be shared into CS and read the data and information they need from CS, without having to develop and adapt to an external system. This greatly reduces the call complexity and coupling of information sharing between systems and makes the boundaries of each system clearer.
+
+## **Product Range**
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-01.png)
+
+
+### Metadata context
+
+The metadata context defines the metadata specification.
+
+Metadata context relies on data middleware, and its main functions are as follows:
+
+1. Connects with the data middleware and obtains all user metadata information (including Hive table metadata, online database table metadata, and other NoSQL metadata such as HBase, Kafka, etc.)
+
+2. When all nodes need to access metadata, including existing metadata and metadata in the application template, they must go through the metadata context. The metadata context records all metadata information used by the application template.
+
+3. The new metadata generated by each node must be registered with the metadata context.
+
+4. When the application template is extracted, the metadata context is abstracted for the application template (mainly, the multiple database tables used are turned into \${db}.tables to avoid data permission problems) and all dependent metadata information is packaged.
+
+Metadata context is the basis of interactive workflows and the basis of application templates. Imagine: When Widget is defined, how to know the dimensions of each indicator defined by DataWrangler? How does Qualitis verify the graph report generated by Widget?
+
+### Data context
+
+The data context defines the data specification.
+
+The data context depends on data middleware and Linkis computing middleware. The main functions are as follows:
+
+1. Connects with the data middleware and obtains all user data information.
+
+2. Connects with the computing middleware and obtains the data storage information of all nodes.
+
+3. When all nodes need to write temporary results, they must pass through the data context and be uniformly allocated by the data context.
+
+4. When all nodes need to access data, they must pass the data context.
+
+5. The data context distinguishes between dependent data and generated data. When the application template is extracted, all dependent data is abstracted and packaged for the application template.
+
+### Resource context
+
+The resource context defines the resource specification.
+
+The resource context mainly interacts with Linkis computing middleware. The main functions are as follows:
+
+1. User resource files (such as Jar, Zip files, properties files, etc.)
+
+2. User UDF
+
+3. User algorithm package
+
+4. User script
+
+### Environmental context
+
+The environmental context defines the environmental specification.
+
+The main functions are as follows:
+
+1. Operating System
+
+2. Software, such as Hadoop, Spark, etc.
+
+3. Package dependencies, such as Mysql-JDBC.
+
+### Object context
+
+The runtime context is all the context information retained when the application template (workflow) is defined and executed.
+
+It is used to assist in defining the workflow/application template, prompting and perfecting all necessary information when the workflow/application template is executed.
+
+The runtime workflow is mainly used by Linkis.
+
+
+## **CS Architecture Diagram**
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-02.png)
+
+## **Architecture Description:**
+
+### 1. Client
+The entrance of external access to CS, Client module provides HA function;
+[Enter Client Architecture Design](ContextService_Client.md)
+
+### 2. Service Module
+Provide a Restful interface to encapsulate and process CS requests submitted by the client;
+[Enter Service Architecture Design](ContextService_Service.md)
+
+### 3. ContextSearch
+The context query module provides rich and powerful query capabilities for the client to find the key-value key-value pairs of the context;
+[Enter ContextSearch architecture design](ContextService_Search.md)
+
+### 4. Listener
+The CS listener module provides synchronous and asynchronous event consumption capabilities, and has the ability to notify the Client in real time once the Zookeeper-like Key-Value is updated;
+[Enter Listener architecture design](ContextService_Listener.md)
+
+### 5. ContextCache
+The context memory cache module provides the ability to quickly retrieve the context and the ability to monitor and clean up JVM memory usage;
+[Enter ContextCache architecture design](ContextService_Cache.md)
+
+### 6. HighAvailable
+Provide CS high availability capability;
+[Enter HighAvailable architecture design](ContextService_HighAvailable.md)
+
+### 7. Persistence
+The persistence function of CS;
+[Enter Persistence architecture design](ContextService_Persistence.md)
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/PublicService.md b/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/PublicService.md
new file mode 100644
index 0000000..6224be1
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/PublicService.md
@@ -0,0 +1,34 @@
+
+## **Background**
+
+PublicService is a comprehensive service composed of multiple sub-modules such as "configuration", "jobhistory", "udf" and "variable". Linkis
+1.0 adds label management on top of version 0.9. With PublicService, users do not need to set parameters again every time they execute a different job:
+many variables, functions and configurations can be reused once the user has set them up, and they can also be shared with other users.
+
+## **Architecture diagram**
+
+![Diagram](../../Images/Architecture/linkis-publicService-01.png)
+
+## **Architecture Introduction**
+
+1. linkis-configuration: Provides query and save operations for global settings and general settings, especially engine configuration parameters.
+
+2. linkis-jobhistory: Used for the storage and query of historical execution tasks. Users can obtain their historical tasks, including logs, status and execution content, through the interface provided by jobhistory,
+and historical tasks support paged queries. Administrators can view all historical tasks, while ordinary users can only view their own.
+
+3. linkis-udf: Provides user function management capabilities in Linkis, covering shared functions, personal functions, system functions and functions used by engines.
+Once a user selects a function, it is automatically loaded when the engine starts, so that it can be referenced directly in code and reused across different scripts.
+
+4. linkis-variable: Provides global variable management capabilities in Linkis, storing and querying user-defined global variables.
+
+5. linkis-instance-label: Provides two modules, "label server" and "label client", for labeling Engines and EMs. It also provides node-based label addition, deletion, modification and query capabilities.
+The main functions are as follows:
+
+-   Provides resource management capabilities for some specific labels to assist RM in more refined resource management.
+
+-   Provides labeling capabilities for users. The user label will be automatically added for judgment when applying for the engine. 
+
+-   Provides the label analysis module, which can parse a user's request into a set of labels.
+
+-   Provides node label management capabilities, mainly CRUD operations on node labels and label resource management, which records the maximum, minimum and used resources of a label.
+
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/README.md b/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/README.md
new file mode 100644
index 0000000..c9ddf68
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Architecture_Documents/Public_Enhancement_Services/README.md
@@ -0,0 +1,91 @@
+PublicEnhancementService (PS) architecture design
+=====================================
+
+PublicEnhancementService (PS): Public enhancement service, a module that provides functions such as unified configuration management, context service, physical library, data source management, microservice management, and historical task query for other microservice modules.
+
+![](../../Images/Architecture/PublicEnhencementArchitecture.png)
+
+Introduction to the second-level module:
+==============
+
+BML material library
+---------
+
+It is the linkis material management system, which is mainly used to store various file data of users, including user scripts, resource files, third-party Jar packages, etc., and can also store class libraries that need to be used when the engine runs.
+
+| Core Class | Core Function |
+|-----------------|------------------------------------|
+| UploadService | Provide resource upload service |
+| DownloadService | Provide resource download service |
+| ResourceManager | Provides a unified management entry for uploading and downloading resources |
+| VersionManager | Provides resource version marking and version management functions |
+| ProjectManager | Provides project-level resource management and control capabilities |
+
+Unified configuration management
+-------------------------
+
+Configuration provides a "user-engine-application" three-level configuration management solution, which provides users with the function of configuring custom engine parameters under various access applications.
+
+| Core Class | Core Function |
+|----------------------|--------------------------------|
+| CategoryService | Provides management services for application and engine catalogs |
+| ConfigurationService | Provides a unified management service for user configuration |
+
+ContextService context service
+------------------------
+
+ContextService is used to solve the problem of data and information sharing across multiple systems in a data application development process.
+
+| Core Class | Core Function |
+|---------------------|------------------------------------------|
+| ContextCacheService | Provides a cache service for context information |
+| ContextClient | Provides the ability for other microservices to interact with the CSServer group |
+| ContextHAManager | Provide high-availability capabilities for ContextService |
+| ListenerManager | The ability to provide a message bus |
+| ContextSearch | Provides query entry |
+| ContextService | Implements the overall execution logic of the context service |
+
+Datasource data source management
+--------------------
+
+Datasource provides the ability to connect to different data sources for other microservices.
+
+| Core Class | Core Function |
+|-------------------|--------------------------|
+| datasource-server | Provide the ability to connect to different data sources |
+
+InstanceLabel microservice management
+-----------------------
+
+InstanceLabel provides registration and labeling functions for other microservices connected to linkis.
+
+| Core Class | Core Function |
+|-----------------|--------------------------------|
+| InsLabelService | Provides microservice registration and label management functions |
+
+Jobhistory historical task management
+----------------------
+
+Jobhistory provides users with linkis historical task query, progress, log display related functions, and provides a unified historical task view for administrators.
+
+| Core Class | Core Function |
+|------------------------|----------------------|
+| JobHistoryQueryService | Provide historical task query service |
+
+Variable user-defined variable management
+--------------------------
+
+Variable provides users with functions related to the storage and use of custom variables.
+
+| Core Class | Core Function |
+|-----------------|-------------------------------------|
+| VariableService | Provides functions related to the storage and use of custom variables |
+
+UDF user-defined function management
+---------------------
+
+UDF provides users with the function of custom functions, which can be introduced by users when writing code.
+
+| Core Class | Core Function |
+|------------|------------------------|
+| UDFService | Provide user-defined function service |
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Architecture_Documents/README.md b/Linkis-Doc-master/en_US/Architecture_Documents/README.md
new file mode 100644
index 0000000..7f5acde
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Architecture_Documents/README.md
@@ -0,0 +1,18 @@
+## 1. Document Structure
+
+Linkis 1.0 divides all microservices into three categories: public enhancement services, computing governance services, and microservice governance services. The following figure shows the architecture of Linkis 1.0.
+
+![Linkis1.0 Architecture Figure](./../Images/Architecture/Linkis1.0-architecture.png)
+
+The specific responsibilities of each category are as follows:
+
+1. Public enhancement services are the material library services, context services, data source services and public services that Linkis 0.X has provided.
+2. The microservice governance services are Spring Cloud Gateway, Eureka and Open Feign already provided by Linkis 0.X, and Linkis 1.0 will also provide support for Nacos
+3. Computation governance services are the core focus of Linkis 1.0; across the three stages of submission, preparation and execution, they comprehensively upgrade Linkis's ability to manage and control user tasks.
+
+The following is a directory listing of Linkis1.0 architecture documents:
+
+1. For the characteristics of the Linkis1.0 architecture, please read [The difference between Linkis1.0 and Linkis0.x](DifferenceBetween1.0&0.x.md).
+2. Linkis1.0 public enhancement service related documents, please read [Public Enhancement Service](Public_Enhancement_Services/README.md).
+3. Linkis1.0 microservice governance related documents, please read [Microservice Governance](Microservice_Governance_Services/README.md).
+4. Linkis1.0 computing governance service related documents, please read [Computation Governance Service](Computation_Governance_Services/README.md).
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Deployment_Documents/Cluster_Deployment.md b/Linkis-Doc-master/en_US/Deployment_Documents/Cluster_Deployment.md
new file mode 100644
index 0000000..57f3118
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Deployment_Documents/Cluster_Deployment.md
@@ -0,0 +1,98 @@
+Introduction to Distributed Deployment Scheme
+==================
+
+Linkis's stand-alone deployment is simple, but it cannot be used in a production environment, because too many processes on the same server will put the server under too much pressure. The choice of deployment plan depends on the company's user scale, usage habits, and the number of simultaneous users of the cluster. Generally speaking, we choose the deployment method based on the number of simultaneous Linkis users and the users' preference for the execution engines.
+
+1.Multi-node deployment method reference
+------------------------------------------
+
+Linkis1.0 still maintains the SpringCloud-based microservice architecture, in which each microservice supports multi-active deployment. Of course, different microservices play different roles in the system; some microservices are called frequently and are more likely to be under high load. **On machines where EngineConnManager is installed, the memory load will be relatively high because user engine processes are started there, while the load of other type [...]
+
+Total resources used by EngineConnManager = total memory + total number of cores =
+number of people online at the same time \* (memory occupied by all types of engines) \* maximum concurrency per user + number of people online at the same time \*
+(number of cores occupied by all types of engines) \* maximum concurrency per user
+
+For example, when only the spark, hive, and python engines are used, the maximum concurrency of a single user is 1, and 50 people are online at the same time, with Spark's driver memory at 1G, Hive
+client memory at 1G, the python client at 1G, and each engine using 1 core, the total is 50 \*(1+1+1)G \*
+1 + 50 \*(1+1+1) cores \*1 = 150G of memory + 150 CPU cores.
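+
+The small calculation below just replays the formula above with the example numbers; the engine memory and core figures are the assumptions stated in the text.
+
+```java
+// memory = users × (sum of engine driver/client memory) × concurrency per user
+// cores  = users × (sum of engine cores) × concurrency per user
+public class EcmResourceEstimate {
+    public static void main(String[] args) {
+        int concurrentUsers = 50;
+        int maxConcurrencyPerUser = 1;
+        int memoryPerUserGb = 1 + 1 + 1;   // Spark driver 1G + Hive client 1G + Python client 1G
+        int coresPerUser = 1 + 1 + 1;      // 1 core per engine type
+
+        int totalMemoryGb = concurrentUsers * memoryPerUserGb * maxConcurrencyPerUser; // 150 GB
+        int totalCores = concurrentUsers * coresPerUser * maxConcurrencyPerUser;       // 150 cores
+        System.out.println(totalMemoryGb + " GB memory, " + totalCores + " CPU cores");
+    }
+}
+```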
+
+During distributed deployment, the memory occupied by each microservice itself can be estimated at about 2G per service. In the case of a large number of users, it is recommended to increase the memory of ps-publicservice to 6G, and it is recommended to reserve 10G of memory as a buffer.
+The following configuration assumes that **each user starts two engines at the same time as an example**. **For a machine with 64G memory**, the reference configuration is as follows:
+
+- 10-50 people online at the same time
+
+> **Server configuration recommended** 4 servers, named S1, S2, S3, S4
+
+| Service | Host name | Remark |
+|---------------|-----------|------------------|
+| cg-engineconnmanager | S1, S2 | Each machine is deployed separately |
+| Other services | S3, S4 | Eureka high availability deployment |
+
+- 50-100 people online at the same time
+
+> **Server configuration recommendation**: 6 servers, named S1, S2, S3, S4, S5, S6
+
+| Service | Host name | Remark |
+|----------------------|-----------|------------------|
+| cg-engineconnmanager | S1-S4 | Each machine is deployed separately |
+| Other services | S5, S6 | Eureka high availability deployment |
+
+- The number of simultaneous users 100-300
+
+**Recommended server configuration**: 12 servers, named S1, S2...S12
+
+| Service | Host name | Remark |
+|----------------------|-----------|------------------|
+| cg-engineconnmanager | S1-S10 | Each machine is deployed separately |
+| Other services | S11, S12 | Eureka high availability deployment |
+
+- 300-500 people at the same time
+
+> **Server configuration recommendation**: 20 servers, named S1, S2...S20
+
+| Service | Host name | Remark |
+|----------------------|-----------|-----------------|
+| cg-engineconnmanager | S1-S18 | Each machine is deployed separately |
+| Other services | S19, S20 | Eureka high-availability deployment, some microservices can be expanded if the request volume is tens of thousands, and the current active-active deployment can support thousands of users in the industry |
+
+- More than 500 users at the same time (estimated based on 800 people online at the same time)
+
+> **Server configuration recommendation**: 34 servers, named S1, S2...S34
+
+| Service | Host name | Remark |
+|----------------------|-----------|------------------------------|
+| cg-engineconnmanager | S1-S32 | Each machine is deployed separately |
+| Other services | S33, S34 | Eureka high-availability deployment, some microservices can be expanded if the request volume is tens of thousands, and the current active-active deployment can support thousands of users in the industry |
+
+2.Linkis microservices distributed deployment configuration parameters
+---------------------------------
+
+In linkis1.0, we have optimized and integrated the startup parameters. Some important startup parameters of each microservice are loaded through the conf/linkis-env.sh file, such as the microservice IP, port, registry address, etc. The way to modify the parameters has changed a little. Take the active-active deployment of the machines **server1 and server2** as an example, in order to allow eureka to register with each other.
+
+On the server1 machine, you need to change the value in **conf/linkis-env.sh**
+
+```
+EUREKA_URL=http://$EUREKA_INSTALL_IP:$EUREKA_PORT/eureka/
+```
+
+change into:
+
+```
+EUREKA_URL=http://$EUREKA_INSTALL_IP:$EUREKA_PORT/eureka/,http://server2:port/eureka/
+```
+
+In the same way, on the server2 machine, you need to change the value in **conf/linkis-env.sh**
+
+```
+EUREKA_URL=http://$EUREKA_INSTALL_IP:$EUREKA_PORT/eureka/
+```
+
+change into:
+
+```
+EUREKA_URL=http://$EUREKA_INSTALL_IP:$EUREKA_PORT/eureka/,http://server1:port/eureka/
+```
+
+After the modification, start the microservices and open the eureka registration page in a browser. You can see that the microservices have been successfully registered with eureka, and DS
+Replicas will also display the adjacent replica nodes of the cluster.
+
diff --git a/Linkis-Doc-master/en_US/Deployment_Documents/EngineConnPlugin_installation_document.md b/Linkis-Doc-master/en_US/Deployment_Documents/EngineConnPlugin_installation_document.md
new file mode 100644
index 0000000..990f55b
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Deployment_Documents/EngineConnPlugin_installation_document.md
@@ -0,0 +1,82 @@
+EngineConnPlugin installation document
+===============================
+
+This article mainly introduces the use of Linkis EngineConnPlugins, mainly from the aspects of compilation and installation.
+
+## 1. Compilation and packaging of EngineConnPlugins
+
+Since Linkis 1.0, the engines are managed by EngineConnManager, and EngineConnPlugins (ECP) can take effect in real time.
+In order for the EngineConnManager to load the corresponding EngineConnPlugin by labels, the plugin needs to be packaged according to the following directory structure (take hive as an example):
+```
+hive: engine home directory, must be the name of the engine
+└── dist # Dependencies and configuration required for engine startup; each engine version has a corresponding version directory under this directory
+    └── v1.2.1 #Must start with ‘v’ and add engine version number ‘1.2.1’
+        └── conf # Configuration file directory required by the engine
+        └── lib # Dependency package required by EngineConnPlugin
+└── plugin #EngineConnPlugin directory, used by the engine management service to encapsulate the engine startup command and the resource application
+    └── 1.2.1 # Engine version
+        └── linkis-engineplugin-hive-1.0.0-RC1.jar #Engine module package (only need to place a separate engine package)
+```
+If you are adding a new engine, you can refer to hive's assembly configuration method, source code directory: linkis-engineconn-plugins/engineconn-plugins/hive/src/main/assembly/distribution.xml
+## 2. Engine Installation
+### 2.1 Plugin package installation
+1. First, confirm the dist directory of the engine: wds.linkis.engineconn.home (obtain the value of this parameter from ${LINKIS_HOME}/conf/linkis.properties). This parameter is used by EngineConnPluginServer to read the configuration files and third-party Jar packages that the engine depends on. If the parameter wds.linkis.engineconn.dist.load.enable=true is set, the engines in this directory will be automatically read and loaded into the Linkis BML (material library).
+
+2. Second, confirm the engine Jar package directory:
+wds.linkis.engineconn.plugin.loader.store.path, which is used by EngineConnPluginServer to read the actual implementation Jar of the engine.
+
+It is highly recommended to specify **wds.linkis.engineconn.home and wds.linkis.engineconn.plugin.loader.store.path as** the same directory, so that the engine ZIP package exported by maven can be unzipped directly into this directory, for example placed under ${LINKIS_HOME}/lib/linkis-engineconn-plugins:
+
+```
+${LINKIS_HOME}/lib/linkis-engineconn-plugins:
+└── hive
+    └── dist
+    └── plugin
+└── spark
+    └── dist
+    └── plugin
+```
+
+If the two parameters do not point to the same directory, you need to place the dist and plugin directories separately, as shown in the following example:
+
+```
+## dist directory
+${LINKIS_HOME}/lib/linkis-engineconn-plugins/dist:
+└── hive
+    └── dist
+└── spark
+    └── dist
+## plugin directory
+${LINKIS_HOME}/lib/linkis-engineconn-plugins/plugin:
+└── hive
+    └── plugin
+└── spark
+    └── plugin
+```
+### 2.2 Configuration modification of management console (optional)
+
+The configuration of the Linkis1.0 management console is managed according to engine labels. If a newly added engine has configuration parameters, the corresponding configuration parameters need to be inserted into the Configuration, into the following tables:
+
+```
+linkis_configuration_config_key: Insert the key and default values of the configuration parameters of the engine
+linkis_manager_label: Insert engine label such as hive-1.2.1
+linkis_configuration_category: Insert the catalog relationship of the engine
+linkis_configuration_config_value: Insert the configuration that the engine needs to display
+```
+
+If it is an existing engine and a new version is added, you can modify the version of the corresponding engine in the linkis_configuration_dml.sql file and execute it
+
+### 2.3 Engine refresh
+
+1.	The engine supports real-time refresh. After the engine is placed in the corresponding directory, Linkis1.0 provides a way to load the engine without shutting down the server: simply send a request to the linkis-engineconn-plugin-server service through its restful interface, i.e. the actual IP and port where the service is deployed. The request interface is http://ip:port/api/rest_j/v1/rpc/receiveAndReply, the request method is POST, and the request body is {"method":"/enginePlugin/engin [...]
+
+2.	Restart refresh: the engine directory can be forcibly refreshed by restarting
+
+```
+### cd to the sbin directory, restart linkis-engineconn-plugin-server
+cd /Linkis1.0.0/sbin
+## Execute linkis-daemon script
+sh linkis-daemon.sh restart linkis-engine-plugin-server
+```
+
+3. Check whether the engine refresh was successful: if you encounter problems during the refresh process and need to confirm whether it succeeded, you can check whether the last_update_time of the linkis_engine_conn_plugin_bml_resources table in the database is the time at which the refresh was triggered.
diff --git "a/Linkis-Doc-master/en_US/Deployment_Documents/Images/\345\210\206\345\270\203\345\274\217\351\203\250\347\275\262\345\276\256\346\234\215\345\212\241.png" "b/Linkis-Doc-master/en_US/Deployment_Documents/Images/\345\210\206\345\270\203\345\274\217\351\203\250\347\275\262\345\276\256\346\234\215\345\212\241.png"
new file mode 100644
index 0000000..8cd86c5
Binary files /dev/null and "b/Linkis-Doc-master/en_US/Deployment_Documents/Images/\345\210\206\345\270\203\345\274\217\351\203\250\347\275\262\345\276\256\346\234\215\345\212\241.png" differ
diff --git a/Linkis-Doc-master/en_US/Deployment_Documents/Installation_Hierarchical_Structure.md b/Linkis-Doc-master/en_US/Deployment_Documents/Installation_Hierarchical_Structure.md
new file mode 100644
index 0000000..3873f0a
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Deployment_Documents/Installation_Hierarchical_Structure.md
@@ -0,0 +1,198 @@
+Installation directory structure
+============
+
+The directory structure of Linkis1.0 is very different from that of the 0.X version. In 0.X, each microservice has its own independent root directory. The main advantage of that directory structure is that it makes microservices easy to distinguish and manage individually, but it also has some obvious problems:
+
+1.	The microservice catalog is too complicated and it is not convenient to switch catalog management
+2.	There is no unified startup script, which makes it more troublesome to start and stop microservices
+3.	There are a large number of duplicate service configurations, and the same configuration often needs to be modified in many places
+4.	There are a large number of repeated Lib dependencies, which increases the size of the installation package and the risk of dependency conflicts
+
+Therefore, in Linkis 1.0, we have greatly optimized and adjusted the installation directory structure, reducing the number of microservice directories, reducing the jar packages that are repeatedly dependent, and reusing configuration files and microservice management scripts as much as possible. Mainly reflected in the following aspects:
+
+1. The bin folder is no longer provided for each microservice; it is now shared by all microservices.
+> The bin folder has become the installation directory, which is mainly used to install Linkis1.0 and check the environment status. The new sbin directory provides one-click start and stop for Linkis, and provides independent start and stop for each microservice by changing parameters.
+
+2. A separate conf directory is no longer provided for each microservice; it is now shared by all microservices.
+> The conf folder contains two kinds of content: on the one hand, configuration information shared by all microservices, which users can customize according to their own environment; on the other hand, the special configuration of each microservice, which users normally do not need to change.
+
+3. The lib folder is no longer provided for each microservice; it is now shared by all microservices.
+> The lib folder also contains two kinds of content: on the one hand, the common dependencies required by all microservices; on the other hand, the special dependencies required by each microservice.
+
+4. The log directory is no longer provided for each microservice; it is now shared by all microservices.
+> The log directory contains the log files of all microservices.
+
+The simplified directory structure of Linkis1.0 is as follows.
+
+````
+├── bin ──installation directory
+│ ├── checkEnv.sh ── Environmental variable detection
+│ ├── checkServices.sh ── Microservice status check
+│ ├── common.sh ── Some public shell functions
+│ ├── install-io.sh ── Used for dependency replacement during installation
+│ └── install.sh ── Main script of Linkis installation
+├── conf ──configuration directory
+│ ├── application-eureka.yml 
+│ ├── application-linkis.yml    ──Microservice general yml
+│ ├── linkis-cg-engineconnmanager-io.properties
+│ ├── linkis-cg-engineconnmanager.properties
+│ ├── linkis-cg-engineplugin.properties
+│ ├── linkis-cg-entrance.properties
+│ ├── linkis-cg-linkismanager.properties
+│ ├── linkis-computation-governance
+│ │   └── linkis-client
+│ │       └── linkis-cli
+│ │           ├── linkis-cli.properties
+│ │           └── log4j2.xml
+│ ├── linkis-env.sh   ──linkis environment properties
+│ ├── linkis-et-validator.properties
+│ ├── linkis-mg-gateway.properties
+│ ├── linkis.properties  ──linkis global properties
+│ ├── linkis-ps-bml.properties
+│ ├── linkis-ps-cs.properties
+│ ├── linkis-ps-datasource.properties
+│ ├── linkis-ps-publicservice.properties
+│ ├── log4j2.xml
+│ ├── proxy.properties(Optional)
+│ └── token.properties(Optional)
+├── db ──database DML and DDL file directory
+│ ├── linkis_ddl.sql ──Database table definition SQL
+│ ├── linkis_dml.sql ──Database table initialization SQL
+│ └── module ──Contains DML and DDL files of each microservice
+├── lib ──lib directory
+│ ├── linkis-commons ──Common dependency package
+│ ├── linkis-computation-governance ──The lib directory of the computing governance module
+│ ├── linkis-engineconn-plugins ──lib directory of all EngineConnPlugins
+│ ├── linkis-public-enhancements ──lib directory of public enhancement services
+│ └── linkis-spring-cloud-services ──SpringCloud lib directory
+├── logs ──log directory
+│ ├── linkis-cg-engineconnmanager-gc.log
+│ ├── linkis-cg-engineconnmanager.log
+│ ├── linkis-cg-engineconnmanager.out
+│ ├── linkis-cg-engineplugin-gc.log
+│ ├── linkis-cg-engineplugin.log
+│ ├── linkis-cg-engineplugin.out
+│ ├── linkis-cg-entrance-gc.log
+│ ├── linkis-cg-entrance.log
+│ ├── linkis-cg-entrance.out
+│ ├── linkis-cg-linkismanager-gc.log
+│ ├── linkis-cg-linkismanager.log
+│ ├── linkis-cg-linkismanager.out
+│ ├── linkis-et-validator-gc.log
+│ ├── linkis-et-validator.log
+│ ├── linkis-et-validator.out
+│ ├── linkis-mg-eureka-gc.log
+│ ├── linkis-mg-eureka.log
+│ ├── linkis-mg-eureka.out
+│ ├── linkis-mg-gateway-gc.log
+│ ├── linkis-mg-gateway.log
+│ ├── linkis-mg-gateway.out
+│ ├── linkis-ps-bml-gc.log
+│ ├── linkis-ps-bml.log
+│ ├── linkis-ps-bml.out
+│ ├── linkis-ps-cs-gc.log
+│ ├── linkis-ps-cs.log
+│ ├── linkis-ps-cs.out
+│ ├── linkis-ps-datasource-gc.log
+│ ├── linkis-ps-datasource.log
+│ ├── linkis-ps-datasource.out
+│ ├── linkis-ps-publicservice-gc.log
+│ ├── linkis-ps-publicservice.log
+│ └── linkis-ps-publicservice.out
+├── pid ──Process ID of all microservices
+│ ├── linkis_cg-engineconnmanager.pid ──EngineConnManager microservice
+│ ├── linkis_cg-engineconnplugin.pid ──EngineConnPlugin microservice
+│ ├── linkis_cg-entrance.pid ──Engine entrance microservice
+│ ├── linkis_cg-linkismanager.pid ──linkis manager microservice
+│ ├── linkis_mg-eureka.pid ──eureka microservice
+│ ├── linkis_mg-gateway.pid ──gateway microservice
+│ ├── linkis_ps-bml.pid ──material library microservice
+│ ├── linkis_ps-cs.pid ──Context microservice
+│ ├── linkis_ps-datasource.pid ──Data source microservice
+│ └── linkis_ps-publicservice.pid ──public microservice
+└── sbin ──microservice start and stop script directory
+    ├── ext ──Start and stop script directory of each microservice
+    ├── linkis-daemon.sh ── Quick start and stop, restart a single microservice script
+    ├── linkis-start-all.sh ── Start all microservice scripts with one click
+    └── linkis-stop-all.sh ── Stop all microservice scripts with one click
+````
+
+# Configuration item modification
+
+After executing install.sh in the bin directory to complete the Linkis installation, you need to modify the configuration items. All configuration items are located in the conf directory. Normally, you need to modify the three configuration files db.sh, linkis.properties, and linkis-env.sh. For details on project installation and configuration, please refer to the article "Linkis1.0 Installation".
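+
+A minimal sketch of this step (assuming LINKIS_HOME points at the installation directory; the actual values depend on your environment):
+
+````
+cd ${LINKIS_HOME}/conf        # configuration directory of the installation
+vim db.sh                     # database connection information
+vim linkis.properties         # Linkis global properties
+vim linkis-env.sh             # deployment environment variables
+````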
+
+# Microservice start and stop
+
+After modifying the configuration items, you can start the microservice in the sbin directory. The names of all microservices are as follows:
+
+````
+├── linkis-cg-engineconnmanager  ──engine management service
+├── linkis-cg-engineplugin  ──EngineConnPlugin management service
+├── linkis-cg-entrance  ──computing governance entrance service
+├── linkis-cg-linkismanager  ──computing governance management service
+├── linkis-mg-eureka  ──microservice registry service
+├── linkis-mg-gateway  ──Linkis gateway service
+├── linkis-ps-bml  ──material library service
+├── linkis-ps-cs  ──context service
+├── linkis-ps-datasource  ──data source service
+└── linkis-ps-publicservice  ──public service
+````
+**Microservice abbreviation**:
+
+| Abbreviation | Full English Name | Full Chinese Name |
+|------|-------------------------|------------|
+| cg | Computation Governance | Computing Governance |
+| mg | Microservice Governance | Microservice Governance |
+| ps | Public Enhancement Service | Public Enhancement Service |
+
+In the past, to start and stop a single microservice you had to enter the bin directory of that microservice and execute its start/stop script. With many microservices, starting and stopping was troublesome and required a lot of extra directory switching. In Linkis1.0, all scripts related to starting and stopping microservices are placed in the sbin directory, and only a single entry script needs to be executed.
+
+**Under the Linkis/sbin directory**:
+
+1.Start all microservices at once:
+
+````
+sh linkis-start-all.sh
+````
+
+2.Shut down all microservices at once
+
+````
+sh linkis-stop-all.sh
+````
+
+3. Start a single microservice (remove the linkis- prefix from the service name, e.g. mg-eureka)
+````
+sh linkis-daemon.sh start service-name
+````
+For example: 
+````
+sh linkis-daemon.sh start mg-eureka
+````
+
+4.Shut down a single microservice
+````
+sh linkis-daemon.sh stop service-name
+````
+For example: 
+````
+sh linkis-daemon.sh stop mg-eureka
+````
+
+5.Restart a single microservice
+````
+sh linkis-daemon.sh restart service-name
+````
+For example: 
+````
+sh linkis-daemon.sh restart mg-eureka
+````
+
+6.View the status of a single microservice
+````
+sh linkis-daemon.sh status service-name
+````
+For example: 
+````
+sh linkis-daemon.sh status mg-eureka
+````
diff --git a/Linkis-Doc-master/en_US/Deployment_Documents/Quick_Deploy_Linkis1.0.md b/Linkis-Doc-master/en_US/Deployment_Documents/Quick_Deploy_Linkis1.0.md
new file mode 100644
index 0000000..b74dbd9
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Deployment_Documents/Quick_Deploy_Linkis1.0.md
@@ -0,0 +1,246 @@
+# Linkis1.0 Deployment document
+
+## Notes
+
+If you are new to Linkis, you can ignore this chapter, however, if you are already a Linkis user,  we recommend you reading the following article before installing or upgrading: [Brief introduction of the difference between Linkis1.0 and Linkis0.X](https://github.com/WeBankFinTech/Linkis-Doc/blob/master/en_US/Architecture_Documents/DifferenceBetween1.0%260.x.md).
+
+Please note: apart from the four EngineConnPlugins included in the Linkis1.0 installation package by default (Python/Shell/Hive/Spark), you can manually install other types of engines such as JDBC depending on your own needs. For details, please refer to the EngineConnPlugin installation documents.
+
+Engines that Linkis1.0 has adapted by default are listed below:
+
+| Engine Type   | Adaptation Situation   | Included in official installation package |
+| ------------- | ---------------------- | ----------------------------------------- |
+| Python        | Adapted in 1.0         | Included                                  |
+| JDBC          | Adapted in 1.0         | **Not Included**                          |
+| Shell         | Adapted in 1.0         | Included                                  |
+| Hive          | Adapted in 1.0         | Included                                  |
+| Spark         | Adapted in 1.0         | Included                                  |
+| Pipeline      | Adapted in 1.0         | **Not Included**                          |
+| Presto        | **Not adapted in 1.0** | **Not Included**                          |
+| ElasticSearch | **Not adapted in 1.0** | **Not Included**                          |
+| Impala        | **Not adapted in 1.0** | **Not Included**                          |
+| MLSQL         | **Not adapted in 1.0** | **Not Included**                          |
+| TiSpark       | **Not adapted in 1.0** | **Not Included**                          |
+
+## 1. Determine your installation environment 
+
+The following is the dependency information for each engine.
+
+| Engine Type | Dependency                  | Special Instructions                                         |
+| ----------- | --------------------------- | ------------------------------------------------------------ |
+| Python      | Python Environment          | If the path of logs and result sets are configured as hdfs://, then the HDFS environment is needed. |
+| JDBC        | No dependency               | If the path of logs and result sets are configured as hdfs://, then the HDFS environment is needed. |
+| Shell       | No dependency               | If the path of logs and result sets are configured as hdfs://, then the HDFS environment is needed. |
+| Hive        | Hadoop and Hive Environment |                                                              |
+| Spark       | Hadoop/Hive/Spark           |                                                              |
+                                                         
+**Requirement: at least 3GB of memory is required to install Linkis.**
+                                                         
+The default JVM heap memory of each microservice is 512M, and the heap memory of all microservices can be adjusted uniformly by modifying `SERVER_HEAP_SIZE`. If your machine has limited resources, we suggest setting this parameter to 128M, as follows:
+
+```bash
+    vim ${LINKIS_HOME}/config/linkis-env.sh
+```
+
+```bash
+    # java application default jvm memory.
+    export SERVER_HEAP_SIZE="128M"
+```
+
+----
+
+## 2. Linkis environment preparation
+
+### a. Fundamental software installation
+
+The following software must be installed:
+
+- MySQL (5.5+) (see: How to install MySQL)
+- JDK (1.8.0_141 or higher) (see: How to install JDK)
+
+### b. Create user
+
+For example: **The deploy user is hadoop**.
+
+1. Create a deploy user on the machine for installation.
+
+```bash
+    sudo useradd hadoop  
+```
+
+2. Since the Linkis services use sudo -u {linux-user} to switch to the engine user when executing jobs, the deploy user must have sudo permission without needing to enter a password.
+
+```bash
+    vi /etc/sudoers
+```
+
+```text
+    hadoop  ALL=(ALL)       NOPASSWD: ALL
+```
+
+3. **Set the following global environment variables on each installation node so that Linkis can use Hadoop, Hive and Spark.**
+
+   Modify the .bashrc of the deploy user; the command is as follows:
+
+```bash     
+    vim /home/hadoop/.bashrc ## Take the deploy user hadoop as an example.
+```
+
+   The following is an example of setting environment variables:
+
+```bash
+    #JDK
+    export JAVA_HOME=/nemo/jdk1.8.0_141
+
+    ## If you do not use Hive, Spark or other engines and do not rely on Hadoop either, there is no need to modify the following environment variables.
+    #HADOOP  
+    export HADOOP_HOME=/appcom/Install/hadoop
+    export HADOOP_CONF_DIR=/appcom/config/hadoop-config
+    #Hive
+    export HIVE_HOME=/appcom/Install/hive
+    export HIVE_CONF_DIR=/appcom/config/hive-config
+    #Spark
+    export SPARK_HOME=/appcom/Install/spark
+    export SPARK_CONF_DIR=/appcom/config/spark-config/spark-submit
+    export PYSPARK_ALLOW_INSECURE_GATEWAY=1  # This parameter must be added for Pyspark
+```
+
+4. **If you want to equip your Pyspark and Python with drawing functions, you need to install the drawing module on each installation node**. The command is as follows:
+
+```bash
+    python -m pip install matplotlib
+```
+
+### c. Preparing installation package
+
+Download the latest installation package from the Linkis release. ([Click here to enter the download page](https://github.com/WeBankFinTech/Linkis/releases))
+
+Decompress the installation package to the installation directory and modify the configuration of the decompressed file.
+
+```bash   
+    tar -xvf  wedatasphere-linkis-x.x.x-combined-package-dist.tar.gz
+```
+
+### d. Basic configuration modification(Do not rely on HDFS)
+
+```bash
+    vi config/linkis-env.sh
+```
+
+```properties
+
+    #SSH_PORT=22        #Specify SSH port. No need to configure if the stand-alone version is installed
+    deployUser=hadoop      #Specify deploy user
+    LINKIS_INSTALL_HOME=/appcom/Install/Linkis    # Specify installation directory.
+    WORKSPACE_USER_ROOT_PATH=file:///tmp/hadoop    # Specify user root directory. Generally used to store user's script and log files, it's user's workspace. 
+    RESULT_SET_ROOT_PATH=file:///tmp/linkis   # The result set file path, used to store the result set files of the Job.
+    ENGINECONN_ROOT_PATH=/appcom/tmp   # The installation path of ECP (EngineConn). A local directory where the deploy user has write permission.
+    ENTRANCE_CONFIG_LOG_PATH=file:///tmp/linkis/  #Entrance's log path
+
+    ## LDAP configuration. Linkis only supports deploy user login by default, you need to configure the following parameters to support multi-user login.
+    #LDAP_URL=ldap://localhost:1389/ 
+    #LDAP_BASEDN=dc=webank,dc=com
+```
+
+### e. Basic configuration modification(Rely on HDFS/Hive/Spark)
+
+```bash
+     vi config/linkis-env.sh
+```
+
+```properties
+    SSH_PORT=22       #Specify SSH port. No need to configure if the stand-alone version is installed
+    deployUser=hadoop      #Specify deploy user
+    WORKSPACE_USER_ROOT_PATH=file:///tmp/hadoop     #Specify user root directory. Generally used to store user's script and log files, it's user's workspace.
+    RESULT_SET_ROOT_PATH=hdfs:///tmp/linkis   # The result set file path, used to store the result set files of the Job.
+    ENGINECONN_ROOT_PATH=/appcom/tmp   # The installation path of ECP (EngineConn). A local directory where the deploy user has write permission.
+    ENTRANCE_CONFIG_LOG_PATH=hdfs:///tmp/linkis/  #Entrance's log path
+
+    #1.0 supports multi-Yarn clusters, therefore, YARN_RESTFUL_URL must be configured
+    YARN_RESTFUL_URL=http://127.0.0.1:8088   # URL of Yarn's ResourceManager
+
+    # If you want to use it with Scriptis, for CDH version of hive, you need to set the following parameters.(For the community version of Hive, you can leave out the following configuration.)
+    HIVE_META_URL=jdbc://...   #URL of Hive metadata database
+    HIVE_META_USER=   # username of the Hive metadata database 
+    HIVE_META_PASSWORD=    # password of the Hive metadata database
+    
+    # set the conf directory of hadoop/hive/spark
+    HADOOP_CONF_DIR=/appcom/config/hadoop-config  #hadoop's conf directory
+    HIVE_CONF_DIR=/appcom/config/hive-config   #hive's conf directory
+    SPARK_CONF_DIR=/appcom/config/spark-config #spark's conf directory
+
+    ## LDAP configuration. Linkis only supports deploy user login by default, you need to configure the following parameters to support multi-user login.
+    #LDAP_URL=ldap://localhost:1389/ 
+    #LDAP_BASEDN=dc=webank,dc=com
+    
+    ##If your spark version is not 2.4.3, you need to modify the following parameter:
+    #SPARK_VERSION=3.1.1
+
+    ## If your hive version is not 1.2.1, you need to modify the following parameter:
+    #HIVE_VERSION=2.3.3
+```
+
+### f. Modify the database configuration
+
+```bash   
+    vi config/db.sh 
+```
+
+```properties    
+
+    # set the connection information of the database
+    # including ip address, database's name, username and port
+    # Mainly used to store the user's customized variables, configuration parameters, UDFs and small functions, and to provide the underlying storage for JobHistory.
+    MYSQL_HOST=
+    MYSQL_PORT=
+    MYSQL_DB=
+    MYSQL_USER=
+    MYSQL_PASSWORD=
+```
+
+## 3. Installation and Startup
+
+### 1. Execute the installation script:
+
+```bash
+    sh bin/install.sh
+```
+
+### 2. Installation steps
+
+- The install.sh script will ask you whether to initialize the database and import the metadata. 
+
+A user might run the install.sh script repeatedly, which would clear all data in the database. Therefore, each time install.sh is executed, the user is asked whether to initialize the database and import the metadata.
+
+Please select yes on the **first installation**.
+
+**Please note: If you are upgrading the existing environment of Linkis from 0.X to 1.0, please do not choose yes directly,  refer to Linkis1.0 Upgrade Guide first.**
+
+### 3. Check whether the installation was successful
+
+You can check whether the installation is successful or not by viewing the logs printed on the console. 
+
+If there is an error message, check the specific reason for that error or refer to FAQ for help.
+
+### 4. Linkis quick startup
+
+(1). Start services
+
+Run the following commands on the installation directory to start all services.
+
+```bash  
+  sh sbin/linkis-start-all.sh
+```
+
+(2). Check whether the startup was successful
+
+You can check the startup status of the services on Eureka. Here is how:
+
+Open http://${EUREKA_INSTALL_IP}:${EUREKA_PORT} in the browser and check whether the services have registered successfully.
+
+If you have not specified EUREKA_INSTALL_IP and EUREKA_PORT in config.sh, the HTTP address is http://127.0.0.1:20303
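+
+If no browser is available on the deployment machine, a rough command-line check of the same information (an illustrative command, assuming the default Eureka port 20303) is:
+
+```bash
+curl http://127.0.0.1:20303/eureka/apps
+```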
+
+As shown in the figure below, if all of the following microservices are registered on Eureka, it means that they have started successfully and are able to work.
+
+![Linkis1.0_Eureka](../Images/deployment/Linkis1.0_combined_eureka.png)
+
diff --git a/Linkis-Doc-master/en_US/Development_Documents/Contributing.md b/Linkis-Doc-master/en_US/Development_Documents/Contributing.md
new file mode 100644
index 0000000..28ea896
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Development_Documents/Contributing.md
@@ -0,0 +1,195 @@
+# Contributing
+
+Thank you very much for contributing to the Linkis project! Before participating in the contribution, please read the following guidelines carefully.
+
+## 1. Contribution category
+
+### 1.1 Bug feedback and fix
+
+We suggest that for both bug feedback and bug fixes, you first create an issue describing the bug in detail, so that the community can find and review the problem and code through the issue record. Bug feedback issues usually need to include a complete **bug** description and a reproducible scenario, so that the community can quickly locate the cause of the bug and fix it. Open issues that carry the #bug label all need to be fixed.
+
+### 1.2 Functional communication, implementation and refactoring
+
+During communication, please elaborate on the details, mechanism and usage scenarios of the new function (or refactoring). This helps the function (or refactoring) to be implemented better and faster.
+If you plan to implement a major feature (or refactoring), be sure to communicate with the team through an **Issue** or other channels, so that everyone can move forward in the most efficient way. An open issue with the #feature tag indicates a new function to be implemented, and an open issue with the #enhancement tag indicates an existing function that needs to be improved or refactored.
+
+
+### 1.3 Issue Q&A
+
+Helping to answer usage questions in issues is a very valuable way to contribute to the Linkis community; new users keep coming in, and while helping them you can also show your expertise.
+
+### 1.4 Documentation improvements
+
+The Linkis user manual documents are maintained in the Linkis-Doc project on GitHub; you can edit the markdown files in the project and improve the documentation by submitting a PR.
+
+## 2. Contribution process
+
+### 2.1 Branch structure
+
+The Linkis source code may contain some temporary branches, but only the following three branches are really meaningful:
+
+```
+master: The source code of the last stable release, which may occasionally have several hotfix submissions
+branch-0.10.0: The latest stable version
+dev-1.0.0: Main development branch
+```
+
+### 2.2 Development Guidelines
+
+Linkis front-end and back-end code share the same code repository, but they are developed separately. Before starting development, please fork the Linkis project into your own GitHub repositories and develop based on your fork.
+
+We recommend cloning the dev-1.0.0 branch for development, so there will be far fewer merge conflicts when submitting a PR to the main Linkis project.
+
+```
+git clone https://github.com/yourname/Linkis.git --branch dev-1.0.0
+```
+
+#### 2.2.1 Backend
+
+The user configuration is under /config/ in the project root directory, and the project startup script and upgrade patch script are under /bin/.
+The back-end code and core configuration are in the server/ directory, and the logs are under /log/ in the project root directory.
+The project root directory mentioned here refers to the directory configured by the environment variable LINKIS_HOME, which needs to be configured when developing in the IDE.
+For example, in IDEA the priority of environment variable loading, from high to low, is: environment variables configured in Run/Debug Configurations -> system environment variables cached by the IDE.
+
+**2.2.1.1** Directory structure
+
+1. Script directories
+
+```
+├── assembly-package/bin # script directory
+ ├── install.sh # One-click deployment script
+ ├── checkEnv.sh # Environment check script
+ └── common.sh # Common script functions
+├── sbin # script directory
+ ├── linkis-daemon.sh # Start, stop and status detection script for a single service
+ ├── linkis-start-all.sh # One-click start script
+ ├── linkis-stop-all.sh # One-click stop script
+ └── ext # Directory of per-service scripts
+    ├── linkis-xxx.sh # The startup script of a service
+    ├── linkis-xxx.sh
+    ├── ...
+```
+
+2. Configuration
+
+```
+├── assembly-package/config # User configuration directory
+ ├── linkis-env.sh # Configuration variables for one-click deployment
+ ├── db.sh # Database configuration for one-click deployment
+```
+
+3. Code directory structure
+
+See the Linkis code directory structure for details.
+
+4. Log directory
+
+```
+├── logs # log root directory
+```
+**2.2.1.2** Environment variables
+
+
+```
+Configure the system environment variable or the IDE environment variable LINKIS_HOME; using the IDE environment variable is recommended.
+```
+**2.2.1.3** Database
+
+```
+1. Create the Linkis system database by yourself;
+2. Modify the database information in conf/db.sh and execute bin/install.sh, or import db/linkis_*.sql directly in the database client.
+```
+
+**2.2.1.4** Configuration file
+
+Modify the application-linkis.yml file in the conf directory and the properties file corresponding to each microservice name to configure related properties.
+
+**2.2.1.5** Packaging
+
+```
+1. To package the whole project, modify the version in /assembly/src/main/assembly/assembly.xml in the root directory, and then execute mvn clean package in the root directory;
+2. To package a single module, simply run mvn clean package in that module.
+```
+### 2.3 Pull Request Guidelines
+
+#### If you still don’t know how to initiate a PR to an open source project, please refer to this description
+
+```
+Whether it is bug fixes or new feature development, please submit a PR to the dev-1.0.0 branch.
+PR and commit names follow the principle of <type>(<scope>): <subject>. For details, please refer to Ruan Yifeng's article [Commit message and Change log writing guide](http://www.ruanyifeng.com/blog/2016/01/commit_message_change_log.html).
+If the PR contains new features, the document update should be included in this PR.
+If this PR is not ready to merge, please add the [WIP] prefix to the head of the name (WIP = work-in-progress).
+All submissions to the dev-1.0.0 branch need to go through at least one review before they can be merged
+```
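+
+For example, a commit or PR title that follows that principle might look like the following (the type, scope and subject here are purely illustrative):
+
+```
+git commit -m "fix(entrance): fix NullPointerException when querying job status"
+```
+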
+### 2.4 Review Standard
+
+Before contributing code, you can find out what kinds of submissions are popular in review. Simply put, if a submission brings as many gains as possible with as few side effects or risks as possible, it will be reviewed and merged first. Submissions with high risk and low value are almost impossible to merge, and may be rejected without even a chance of review.
+
+#### 2.4.1 Gains
+
+```
+Fix the main cause of the bug
+Add or fix a feature or problem that a large number of users urgently need
+Simple and effective
+Easy to test, with test cases
+Reduce complexity and amount of code
+```
+
+Issues that have been discussed by the community and identified for improvement also count as gains.
+
+#### 2.4.2 Side effects and risks
+
+```
+Only fix the surface phenomenon of the bug
+Introduce new features with high complexity
+Add complexity to meet niche needs
+Change stable existing API or semantics
+Cause other functions to not operate normally
+Add a lot of dependencies
+Change the dependency version at will
+Submit a large amount of code or changes at once
+```
+#### 2.4.3 Reviewer notes
+
+```
+Please use a constructive tone when writing comments
+If the submitter needs to make changes, please clearly state everything that must be modified to complete the Pull Request
+If a PR is found to have introduced new problems after being merged, the Reviewer needs to contact the PR author and communicate with them to resolve the problem; if the PR author cannot be contacted, the Reviewer needs to revert the PR
+```
+## 3. Advanced contribution
+
+### 3.1 About Committers (Collaborators)
+
+**3.1.1** How to become a **committer**
+
+If you have had a valuable PR for the Linkis code and it has been merged, you can contact the core development team through the official WeChat group and apply to become a Committer of the Linkis project; the core development team and the other Committers will vote together on whether to allow you to join. If you receive enough votes, you will become a Committer of the Linkis project.
+
+**3.1.2 Committer** Rights
+
+```
+You can join the official developer WeChat group, participate in discussions and make development plans
+Can manage Issues, including closing and adding tags
+Can create and manage project branches, except for master and dev-1.0.0 branches
+Can review the PR submitted to the dev-1.0.0 branch
+Can apply to be a member of Committee
+```
+### 3.2 About Committee
+
+**3.2.1** How to become a **Committee** member
+
+
+If you are a Committer of the Linkis project and all your contributions have been recognized by the existing Committee members, you can apply to become a member of the Linkis Committee; the other Committee members will vote together to decide whether to allow you to join, and if unanimously approved, you will become a member of the Linkis Committee.
+
+**3.2.2 Committee member rights**
+
+```
+You can merge PRs submitted by other Committers and contributors to the dev-1.0.0 branch
+```
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/API.md b/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/API.md
new file mode 100644
index 0000000..f91f8ba
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/API.md
@@ -0,0 +1,143 @@
+> When a Contributor contributes new RESTful interfaces to Linkis, the following interface specifications must be followed during development.
+
+
+
+## 1. HTTP or WebSocket ?
+
+
+
+Linkis currently provides two interfaces: HTTP and WebSocket.
+
+
+
+WebSocket advantages over HTTP:
+
+
+
+- Less stress on the server
+
+- More timely information push
+
+- Interactivity is more friendly
+
+
+
+Correspondingly, WebSocket has the following disadvantages:
+
+
+
+- The WebSocket may be disconnected while using
+
+- Higher technical requirements on the front end
+
+- It is generally required to have a front-end degradation handling mechanism
+
+
+
+**We generally strongly recommend that Contributor provide the interface using WebSocket as little as possible if not necessary;**
+
+
+
+**If you think it is necessary to use WebSocket and are willing to contribute the developed functions to Linkis, we suggest you communicate with us before the development, thank you!**
+
+
+
+## 2. URL specification
+
+
+
+```
+
+/api/rest_j/v1/{applicationName}/.+
+
+/api/rest_s/v1/{applicationName}/.+
+
+```
+
+
+
+**Convention** :
+
+
+
+- rest_j indicates that the interface complies with the Jersey specification
+
+- rest_s indicates that the interface conforms to the SpringMVC REST specification
+
+- v1 is the version number of the service. **The version number will be updated with the Linkis version.**
+
+- {applicationName} is the name of the micro-service
+
+
+
+## 3. Interface request format
+
+
+
+```json
+
+{
+
+"method":"/api/rest_j/v1/entrance/execute",
+
+"data":{},
+
+"WebsocketTag" : "37 fcbd8b762d465a0c870684a0261c6e" / / WebSocket requests require this parameter, HTTP requests can ignore
+
+}
+
+```
+
+
+
+**Convention** :
+
+
+
+- method: The requested RESTful API URL.
+
+- data: The specific data requested.
+
+- WebSocketTag: The unique identity of a WebSocket request. This parameter is also returned by the back end for the front end to identify.
+
+
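+As a hypothetical illustration of the request format above (the gateway address, port and executionCode value are assumptions for this example, and authentication is omitted), such a request could be sent as:
+
+```
+curl -X POST "http://127.0.0.1:9001/api/rest_j/v1/entrance/execute" \
+     -H "Content-Type: application/json" \
+     -d '{"method": "/api/rest_j/v1/entrance/execute", "data": {"executionCode": "show tables"}}'
+```
+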
+
+## 4. Interface response format
+
+
+
+```json
+
+{" method ":"/API/rest_j/v1 / project/create ", "status" : 0, "message" : "creating success!" ,"data":{}}
+
+```
+
+
+
+**Convention** :
+
+
+
+- method: Returns the requested RESTful API URL, mainly for the WebSocket mode.
+
+- status: Returns the status information, where -1 means not logged in, 0 means success, 1 means error, 2 means validation failed, and 3 means no access to the interface.
+
+- data: Returns the specific data.
+
+- message: Returns a prompt message for the request. If status is not 0, message returns an error message, and data may contain a stack field with the specific stack trace information.
+
+
+
+In addition, different status values correspond to different HTTP status codes. Under normal circumstances:
+
+
+
+- When status is 0, the HTTP status code is 200
+
+- When the status is -1, the HTTP status code is 401
+
+- When status is 1, the HTTP status code is 400
+
+- When status is 2, the HTTP status code is 412
+
+- When status is 3, the HTTP status code is 403
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/Concurrent.md b/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/Concurrent.md
new file mode 100644
index 0000000..8adf0d0
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/Concurrent.md
@@ -0,0 +1,17 @@
+1. [**Compulsory**] Make sure getting a singleton object to be thread-safe. Operating inside singletons should also be kept thread-safe.
+
+
+
+2. [**Compulsory**] Thread resources must be provided through the thread pool, and it is not allowed to explicitly create threads in the application.
+
+
+
+3. SimpleDateFormat is not thread-safe. It is recommended to use the DateUtils utility class instead.
+
+
+
+4. [**Compulsory**] At high concurrency, synchronous calls should consider the performance cost of locking. If you can use lockless data structures, don't use locks. If you can lock blocks, don't lock the whole method body. If you can use object locks, don't use class locks.
+
+
+
+5. [**Compulsory**] Use ThreadLocal as little as possible. Whenever a ThreadLocal holds an object that needs to be closed, remember to close it to release the resource.
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/Exception_Catch.md b/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/Exception_Catch.md
new file mode 100644
index 0000000..b1a0030
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/Exception_Catch.md
@@ -0,0 +1,9 @@
+1. [**Mandatory**] For the exception of each small module, a special exception class should be defined to facilitate the subsequent generation of error codes for users. It is not allowed to throw any RuntimeException or directly throw Exception.
+
+2. Try not to wrap a large section of code in a single try-catch; this is irresponsible. Please distinguish between stable and unstable code when catching. Stable code is code that will not go wrong in any case. When catching exceptions in unstable code, try to distinguish the exception types as much as possible and handle each type accordingly.
+
+3. [**Mandatory**] The purpose of catching an exception is to handle it. Do not catch an exception and then discard it without handling; if you do not want to handle it, throw the exception to its caller. Note: do not use e.printStackTrace() under any circumstances! The outermost business layer must handle exceptions and turn them into content that users can understand.
+
+4. The finally block must close resource objects and stream objects, wrapping the close calls in try-catch if they may throw exceptions.
+
+5. [**Mandatory**] Prevent NullPointerException. The return value of the method can be null, and it is not mandatory to return an empty collection, or an empty object, etc., but a comment must be added to fully explain under what circumstances the null value will be returned. RPC and SpringCloud Feign calls all require non-empty judgments.
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/Exception_Throws.md b/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/Exception_Throws.md
new file mode 100644
index 0000000..ac8ed72
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/Exception_Throws.md
@@ -0,0 +1,52 @@
+## How to define a new exception?
+
+
+
+- Customized exceptions must inherit one of LinkisRetryException, WarnException, ErrorException, or FatalException
+
+
+
+- Customized exceptions must contain error codes and error descriptions. If necessary, the IP address and process port where the exception occurred can also be encapsulated in the exception
+
+
+
+- Be careful with WarnException! An exception thrown by WarnException, if caught in a RESTful or RPC Receiver, does not throw a failure to the front end or sender, but only returns a warning message!
+
+
+
+- WarnException has an exception level of 1, ErrorException has an exception level of 2, FatalException has an exception level of 3, and LinkisRetryException has an exception level of 4
+
+
+
+| exception class| service |  error code  | error description|
+|:----  |:---   |:---   |:---   |
+| LinkisException | common | None | top level parent class inherited from the Exception, does not allow direct inheritance |
+| LinkisRuntimeException | common | None | top level parent class, inherited from RuntimeException, does not allow direct inheritance |
+| WarnException | common | None | secondary level parent classes, inherit from LinkisRuntimeException. Warn level exception, must inherit this class directly or indirectly |
+| ErrorException | common | None | secondary level parent classes, inherited from LinkisException. Error exception, must inherit this class directly or indirectly |
+| FatalException | common | None | secondary level parent classes, inherited from LinkisException. Fatal level exception, must inherit this class directly or indirectly |
+| LinkisRetryException | common | None | secondary level parent classes, inherited from LinkisException. Retryable exceptions, must inherit this class directly or indirectly |
+
+
+
+## Module exception specification
+
+
+
+linkis-commons:10000-11000
+
+linkis-computation-governance:11000-12000
+
+linkis-engineconn-plugins:12000-13000
+
+linkis-orchestrator:13000-14000
+
+linkis-public-enhancements:14000-15000
+
+linkis-spring-cloud-service:15100-15500
+
+linkis-extensions:15500-16000
+
+linkis-tuning:16100-16200
+
+linkis-user-control:16200-16300
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/Log.md b/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/Log.md
new file mode 100644
index 0000000..34801bd
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/Log.md
@@ -0,0 +1,13 @@
+1.	[**Convention**] Linkis chooses SLF4J and Log4J2 as the log printing framework, removing the logback in the Spring-Cloud package. Since SLF4J will randomly select a logging framework for binding, it is necessary to exclude bridging packages such as SLF4J-LOG4J after introducing new Maven packages in the future, otherwise log printing will be a problem. However, if the newly introduced Maven package depends on a package such as Log4J, do not exclude, otherwise the code may run with an error.
+
+2.	[**Configuration**] The log4j2 configuration file is default to log4j2.xml and needs to be placed in the classpath. If springcloud combination is needed, "logging:config:classpath:log4j2-spring.xml"(the location of the configuration file) can be added to application.yml.
+
+3.	[**Compulsory**] The API of the logging system (Log4j2, Log4j, Logback) cannot be used directly in a class. For Scala code, inheriting the Logging trait is required. For Java, use LoggerFactory.getLogger(getClass()).
+
+4.	[**Development Convention**] Since engineConn is started by engineConnManager from the command line, we specify the path of the log configuration file on the command line, and also modify the log configuration during the code execution. In particular, redirect the engineConn log to the system's standard out. So the log configuration file for the EngineConn convention is defined in the EnginePlugin and named log4j2-engineConn.xml (this is the convention name and cannot be changed).
+
+5.	[**Compulsory**] Strictly differentiate log levels. Fatal-level problems during SpringCloud application initialization should be logged and the process exited with System.exit(-1). Error-level exceptions are those that developers must care about and handle; do not use this level casually. The WARN level is for user action exceptions and for logs that help troubleshoot bugs later. INFO is for key process logs. DEBUG is for development-mode logs; write as few as possible.
+
+6.	[**Compulsory**] Requirements: Every module must have INFO level log; Every key process must have INFO level log. The daemon thread must have a WARN level log to clean up resources, etc.
+
+7.	[**Compulsory**] Exception information should include two kinds of information: the context information and the exception stack. If the exception is not handled locally, throw it upwards with the throw keyword. Example: logger.error(parameters/objects.toString() + "_" + e.getMessage(), e);
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/Path_Usage.md b/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/Path_Usage.md
new file mode 100644
index 0000000..b9c17d3
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/Path_Usage.md
@@ -0,0 +1,15 @@
+Please note: Linkis provides a unified Storage module, so you must follow the Linkis path specification when using the path or configuring the path in the configuration file.
+
+
+
+1. [**Compulsory**] When using a file path, whether it is local, HDFS, or HTTP, the scheme information must be included. Among them:
+
+    - The Scheme header for local file is: file:///;
+
+    - The Scheme header for HDFS is: hdfs:///;
+
+    - The Scheme header for HTTP is: http:///.
+
+
+
+2. There should be no special characters in the path. Try to use only combinations of English letters, underscores and numbers.
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/README.md b/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/README.md
new file mode 100644
index 0000000..bde3f2d
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Development_Documents/Development_Specification/README.md
@@ -0,0 +1,9 @@
+In order to standardize Linkis's community development environment, improve the output quality of subsequent development iterations of Linkis, and standardize the entire development and design process of Linkis, it is strongly recommended that Contributors follow the following development specifications:
+- [Exception Handling Specification](./Exception_Catch.md)
+- [Throwing exception specification](./Exception_Throws.md)
+- [Interface Specification](./API.md)
+- [Log constraint specification](./Log.md)
+- [Concurrency Specification](./Concurrent.md)
+- [Path Specification](./Path_Usage.md)
+
+**Note**: The development specifications of the initial version of Linkis1.0 are relatively brief, and will continue to be supplemented and improved with the iteration of Linkis. Contributors are welcome to provide their own opinions and comments.
diff --git a/Linkis-Doc-master/en_US/Development_Documents/Linkis_Compilation_Document.md b/Linkis-Doc-master/en_US/Development_Documents/Linkis_Compilation_Document.md
new file mode 100644
index 0000000..ee8b1c6
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Development_Documents/Linkis_Compilation_Document.md
@@ -0,0 +1,135 @@
+# Linkis compilation document
+
+## Directory
+
+- 1. How to compile the whole project of Linkis.
+- 2. How to compile a module.
+- 3. How to compile an engine.
+- 4. How to modify the version of Hadoop, Hive and Spark that Linkis depends on.
+
+## 1. Compile the whole project
+
+Environment requirements: The version of JDK must be **higher than JDK8**, both **Oracle/Sun** and **OpenJDK** are supported.
+
+After cloning the project from github, please use maven to compile the project. 
+
+**Please note**: We recommend you using Hadoop-2.7.2, Hive-1.2.1, Spark-2.4.3 and Scala-2.11.8 to compile the Linkis.
+
+If you want to use other version of Hadoop, Hive and Spark, please refer to: How to modify the version of Hadoop, Hive and Spark that Linkis depends on.
+
+(1) **If you are compiling the Linkis on your local machine for the first time, you must execute the following commands on the root directory beforehand:**
+
+```bash
+    cd wedatasphere-linkis-x.x.x
+    mvn -N  install
+```
+
+(2) Execute the following commands on the root directory:
+
+```bash
+    cd wedatasphere-linkis-x.x.x
+    mvn clean install
+```
+
+(3) Obtain installation package from the directory 'assembly-> target':
+
+```bash
+    ls wedatasphere-linkis-x.x.x/assembly/target/wedatasphere-linkis-x.x.x-dist.tar.gz
+```
+
+## 2. Compile a module
+
+After cloning project from github, please use maven to compile the project. 
+
+(1) **If you are compiling the Linkis on your local machine for the first time, you must execute the following commands on the root directory beforehand:**
+
+```bash
+    cd wedatasphere-linkis-x.x.x
+    mvn -N  install
+```
+
+(2) Switch to the corresponding module to compile. An example of compiling Entrance module is shown below.
+
+```bash   
+    cd wedatasphere-linkis-x.x.x/linkis-computation-governance/linkis-entrance
+    mvn clean install
+```
+
+(3) Obtain compiled installation package from 'target' directory in the corresponding module.
+
+```
+    ls wedatasphere-linkis-x.x.x/linkis-computation-governance/linkis-entrance/target/linkis-entrance.x.x.x.jar
+```
+
+## 3. Compile an engine
+
+An example of compiling the Spark engine is shown below:
+
+(1) **If you are compiling the Linkis on your local machine for the first time, you must execute the following commands on the root directory beforehand:**
+
+```bash
+    cd wedatasphere-linkis-x.x.x
+    mvn -N  install
+```
+
+(2) Switch to the directory where the Spark engine is located and use the following commands to compile:
+
+```bash   
+    cd wedatasphere-linkis-x.x.x/linkis-engineconn-plugins/engineconn-plugins/spark
+    mvn clean install
+```
+
+(3) Obtain the compiled installation package from the 'target' directory of the corresponding module.
+
+```
+    ls wedatasphere-linkis-x.x.x/linkis-engineconn-plugins/engineconn-plugins/spark/target/linkis-engineplugin-spark-x.x.x.zip
+```
+
+How to install Spark engine separately? Please refer to Linkis EngineConnPlugin installation document.
+
+## 4. How to modify the version of Hadoop, Hive and Spark that Linkis depends on
+
+Please note: since Hadoop is a fundamental service in the big data area, Linkis must rely on it for compilation, while computation and storage engines such as Spark and Hive are not required. If you have no requirement for a certain engine, you do not need to set its engine version or compile its EngineConnPlugin.
+
+The way to modify the version of Hadoop is different from that of Spark, Hive and other computation engines. Please see instructions below:
+
+#### How to modify the version of Hadoop that Linkis relies on?
+
+Enter the root directory of Linkis and manually modify the Hadoop version in pom.xml.
+
+```bash
+    cd wedatasphere-linkis-x.x.x
+    vim pom.xml
+```
+
+```xml
+    <properties>
+      
+        <hadoop.version>2.7.2</hadoop.version> <!-- Modify Hadoop version here -->
+              
+        <scala.version>2.11.8</scala.version>
+        <jdk.compile.version>1.8</jdk.compile.version>
+              
+    </properties>
+```
+
+#### How to modify the version of Spark, Hive that Linkis relies on?
+
+Here is an example of modifying the Spark version. Enter the directory where the Spark engine is located and manually modify the Spark version in pom.xml.
+
+```bash
+    cd wedatasphere-linkis-x.x.x/linkis-engineconn-plugins/engineconn-plugins/spark
+    vim pom.xml
+```
+
+```xml
+    <properties>
+      
+        <spark.version>2.4.3</spark.version>  <!-- Modify Spark version here -->
+              
+    </properties>
+```
+
+Modifying the version of other engines is similar to that of Spark: enter the directory where the engine is located and manually modify the version in pom.xml.
+
+Then, please refer to How to compile an engine.
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Development_Documents/Linkis_Compile_and_Package.md b/Linkis-Doc-master/en_US/Development_Documents/Linkis_Compile_and_Package.md
new file mode 100644
index 0000000..52928bf
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Development_Documents/Linkis_Compile_and_Package.md
@@ -0,0 +1,155 @@
+# Linkis Compilation Document
+
+## Directory
+
+- [1. Fully compile Linkis](#1-fully-compile-linkis)
+
+- [2. Compile a single module](#2-compile-a-single-module)
+
+- [3. Build an engine](#3-build-an-engine)
+
+- [4. How to Modify Linkis dependency versions of Hadoop, Hive, Spark](#4-how-to-modify-linkis-dependency-versions-of-hadoop-hive-spark)
+
+## 1. Fully compile Linkis
+
+**Environment requirements:** The version of JDK must be higher than **JDK8**; both **Oracle/Sun** and **OpenJDK** are supported.
+
+After getting the project code from Git, compile the project installation package using Maven.
+
+**Notice** : The official recommended versions for compiling Linkis are hadoop-2.7.2, hive-1.2.1, spark-2.4.3, and Scala-2.11.8.
+
+If you want to compile Linkis with another version of Hadoop, Hive or Spark, please refer to: [How to Modify Linkis dependency versions of Hadoop, Hive, Spark](#4-how-to-modify-linkis-dependency-versions-of-hadoop-hive-spark)
+
+(1) **If you compile it locally for the first time, you must execute the following command in the source package root directory of Linkis:**
+
+```bash
+cd wedatasphere-linkis-x.x.x
+mvn -N  install
+```
+
+(2) Execute the following command in the source package root directory of Linkis:
+
+```bash
+cd wedatasphere-linkis-x.x.x
+mvn clean install
+```
+
+(3) Get the installation package, in the project assembly->target directory:
+
+```bash
+ls wedatasphere-linkis-x.x.x/assembly/target/wedatasphere-linkis-x.x.x-dist.tar.gz
+```
+
+## 2. Compile a single module
+
+After getting the project code from Git, use Maven to package the project installation package.
+
+(1) **If you use it locally for the first time, you must execute the following command** in the source package root directory of Linkis:
+
+```bash
+cd wedatasphere-linkis-x.x.x
+mvn -N  install
+```
+
+(2) Go to the corresponding module for compilation. For example, if you want to recompile the Entrance, command as follows:
+
+```bash
+cd wedatasphere-linkis-x.x.x/linkis-computation-governance/linkis-entrance
+mvn clean install
+```
+
+(3) Get the installation package. The compiled package will be found in the ->target directory of the corresponding module:
+
+```
+ls wedatasphere-linkis-x.x.x/linkis-computation-governance/linkis-entrance/target/linkis-entrance.x.x.x.jar
+```
+
+## 3. Build an engine
+
+Here's an example of the Spark engine that builds Linkis:
+
+(1) **If you use it locally for the first time, you must execute the following command** in the source package root directory of Linkis:
+
+```bash
+cd wedatasphere-linkis-x.x.x
+mvn -N  install
+```
+
+(2) Jump to the directory where the Spark engine is located for compilation and packaging. The command is as follows:
+
+```bash
+cd wedatasphere-linkis-x.x.x/linkis-engineconn-plugins/engineconn-plugins/spark
+mvn clean install
+```
+
+(3) Get the installation package. The compiled package will be found in the ->target directory of the corresponding module:
+
+```
+ls  wedatasphere-linkis-x.x.x/linkis-engineconn-plugins/engineconn-plugins/spark/target/linkis-engineplugin-spark-x.x.x.zip
+```
+
+How do I install the Spark engine separately? Please refer to the [Linkis EngineConnPlugin installation document](../Deployment_Documents/EngineConnPlugin_installation_document.md).
+
+## 4. How to Modify Linkis dependency versions of Hadoop, Hive, Spark
+
+Please note: Hadoop is a big data basic service, Linkis must rely on Hadoop for compilation;
+If you don't want to use an engine, you don't need to set the version of the engine or compile the engine plug-in.
+
+Specifically, the version of Hadoop can be modified in a different way than Spark, Hive, and other computing engines, as described below:
+
+#### How do I modify the version of Hadoop that Linkis relies on?
+
+Enter the source package root directory of Linkis, and manually modify the Hadoop version information of the pom.xml file, as follows:
+
+```bash
+cd wedatasphere-linkis-x.x.x
+vim pom.xml
+```
+
+```xml
+<properties>
+    <hadoop.version>2.7.2</hadoop.version> <!--Change version of hadoop here-->
+    <scala.version>2.11.8</scala.version>
+    <jdk.compile.version>1.8</jdk.compile.version>
+ </properties>
+
+```
+
+**Please note: If your hadoop version is hadoop3, you need to modify the pom file of linkis-hadoop-common**
+Because under hadoop2.8, hdfs-related classes are in the hadoop-hdfs module, but in hadoop 3.X the corresponding classes are moved to the module hadoop-hdfs-client, you need to modify this file:
+
+```
+pom: Linkis/linkis-commons/linkis-hadoop-common/pom.xml
+Change the hadoop-hdfs dependency to hadoop-hdfs-client:
+
+  Before:
+  <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-hdfs</artifactId>
+      <version>${hadoop.version}</version>
+      ...
+
+  After:
+  <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-hdfs-client</artifactId>
+      <version>${hadoop.version}</version>
+      ...
+```
+
+#### How to modify Spark, Hive versions that Linkis relies on?
+
+Here's an example of changing the version of Spark. Go to the directory where the Spark engine is located and manually modify the Spark version information of the pom.xml file as follows:
+
+```bash
+cd wedatasphere-linkis-x.x.x/linkis-engineconn-plugins/engineconn-plugins/spark
+vim pom.xml
+```
+
+```xml
+<properties>
+    <spark.version>2.4.3</spark.version> <!-- Change the Spark version number here -->
+ </properties>
+
+```
+
+Modifying the version of another engine is similar to changing the Spark version by going to the directory where the engine is located and manually changing the engine version information in the pom.xml file.
+
+Then refer to [Build an engine](#3-build-an-engine).
diff --git a/Linkis-Doc-master/en_US/Development_Documents/Linkis_DEBUG.md b/Linkis-Doc-master/en_US/Development_Documents/Linkis_DEBUG.md
new file mode 100644
index 0000000..34e1a88
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Development_Documents/Linkis_DEBUG.md
@@ -0,0 +1,141 @@
+## 1 Preface
+&nbsp; &nbsp; &nbsp; &nbsp; Every Linkis microservice supports debugging; most of them support local debugging, while some only support remote debugging.
+
+1. Services that support local debugging
+- linkis-mg-eureka: the debug Main class is `com.webank.wedatasphere.linkis.eureka.SpringCloudEurekaApplication`
+- Other Linkis microservices have their own Main classes, as shown below
+linkis-cg-manager: `com.webank.wedatasphere.linkis.manager.am.LinkisManagerApplication`
+linkis-ps-bml: `com.webank.wedatasphere.linkis.bml.LinkisBMLApplication`
+linkis-ps-cs: `com.webank.wedatasphere.linkis.cs.server.LinkisCSApplication`
+linkis-cg-engineconnmanager: `com.webank.wedatasphere.linkis.ecm.server.LinkisECMApplication`
+linkis-cg-engineplugin: `com.webank.wedatasphere.linkis.engineplugin.server.LinkisEngineConnPluginServer`
+linkis-cg-entrance: `com.webank.wedatasphere.linkis.entrance.LinkisEntranceApplication`
+linkis-ps-publicservice: `com.webank.wedatasphere.linkis.jobhistory.LinkisPublicServiceAppp`
+linkis-ps-datasource: `com.webank.wedatasphere.linkis.metadata.LinkisDataSourceApplication`
+linkis-mg-gateway: `com.webank.wedatasphere.linkis.gateway.springcloud.LinkisGatewayApplication`
+
+2. Services that only support remote debugging:
+The EngineConnManager service and the Engine service started by ECM only support remote debugging.
+
+## 2. Local debugging service steps
+&nbsp; &nbsp; &nbsp; &nbsp; Linkis and DSS both rely on Eureka for their services, so you need to start the Eureka service first. The Eureka service can also use the Eureka that you have already started. Once Eureka is started, you can start other services.
+
+### 2.1 Eureka service start
+1. If you do not want the default port 20303, you can modify the port configuration:
+
+```yml
+File path: conf/application-eureka.yml
+Port to be modified in config file:
+
+server:
+    port: 8080 # port to be set
+```
+
+2. Then add a debug configuration in IDEA
+
+You can do this by clicking Run or by clicking Add Configuration in the image below
+
+![01](../Images/Tunning_and_Troubleshooting/debug-01.png)
+
+3. Then click Add Application and modify the information
+
+- Set the debug name first: Eureka, for example
+- Then set the Main class:
+`com.webank.wedatasphere.linkis.eureka.SpringCloudEurekaApplication`
+- Finally, set the Class Path for the service. For Eureka, the classPath module is linkis-eureka
+
+![02](../Images/Tunning_and_Troubleshooting/debug-02.png)
+
+4. Click the Debug button to start the Eureka service, then access the Eureka page at http://localhost:8080/
+
+![03](../Images/Tunning_and_Troubleshooting/debug-03.png)
+
+### 2.2 Other services
+
+1. The Eureka configuration of the corresponding service needs to be modified. The file to modify is:
+
+```
+    conf/application-linkis.yml
+```
+Change the corresponding Eureka address to the Eureka service that has been started:
+
+```
+    eureka:
+      client:
+        serviceUrl:
+          defaultZone: http://localhost:8080/eureka/
+```
+
+2. Modify the configuration related to Linkis. The general configuration file is in conf/linkis.properties, and the corresponding configuration of each module is in the properties file beginning with the module name in conf directory.
+
+3. Then add debugging service
+
+The Main Class is uniformly set to its own Main Class for each module, which is listed in the foreword.
+The Class Path of the service is the corresponding module:
+
+```
+linkis-cg-manager: linkis-application-manager
+linkis-ps-bml: linkis-bml
+linkis-ps-cs: `com.webank.wedatasphere.linkis.cs.server.LinkisCSApplication`
+linkis-cg-engineconnmanager: linkis-cs-server
+linkis-cg-engineplugin: linkis-engineconn-plugin-server
+linkis-cg-entrance: linkis-entrance
+linkis-ps-publicservice: linkis-jobhistory
+linkis-ps-datasource: linkis-metadata
+linkis-mg-gateway: linkis-spring-cloud-gateway
+```
+
+And check the "Include dependencies with Provided scope" option:
+
+![06](../Images/Tuning_and_Troubleshooting/debug-06.png)
+
+4. Then start the service and you can see that the service is registered on the Eureka page:
+
+![05](../Images/Tuning_and_Troubleshooting/debug-05.png)
+
+Note that linkis-ps-publicservice also needs the public-module module added to its POM:
+
+```
+<dependency>
+    <groupId>com.webank.wedatasphere.linkis</groupId>
+    <artifactId>public-module</artifactId>
+    <version>${linkis.version}</version>
+</dependency>
+```
+
+## 3. Steps of remote debugging service
+&nbsp; &nbsp; &nbsp; &nbsp; Each service supports remote debugging, but you need to turn it on ahead of time. There are two types of remote debugging, one is the remote debugging of Linkis common service, and the other is the remote debugging of EngineConn, which are described as follows:
+
+1. Remote debugging of common service:
+
+A. First, modify the startup script file of the corresponding service under the sbin/ext directory, and add the debug port:
+
+```
+export SERVER_JAVA_OPTS=" -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=10092"
+```
+
+The added option is `-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=10092`; the port may conflict with one already in use and can be changed to any available port.
+
+B. Create a new Remote debug configuration in IDEA. Select Remote, fill in the host and port of the service, and then select the module to debug
+
+![07](../Images/Tuning_and_Troubleshooting/debug-07.png)
+
+C. Then click the Debug button to start remote debugging
+
+![08](../Images/Tuning_and_Troubleshooting/debug-08.png)
+
+2. Remote debugging of engineConn:
+
+A. Add the following configuration item to the linkis-engineconn.properties file of the corresponding EngineConn:
+```
+wds.linkis.engineconn.debug.enable=true
+```
+
+This configuration item will randomly assign a debug port when engineConn starts.
+
+B. In the first line of the engineConn log, the actual assigned port is printed.
+```
+      Listening for transport dt_socket at address: 26072
+```
+
+C. Create a new remote debug in IDEA. The steps have been described in the previous section and will not be repeated here.
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Development_Documents/New_EngineConn_Development.md b/Linkis-Doc-master/en_US/Development_Documents/New_EngineConn_Development.md
new file mode 100644
index 0000000..d45eedd
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Development_Documents/New_EngineConn_Development.md
@@ -0,0 +1,77 @@
+## How To Quickly Implement A New Engine
+
+Implementing a new engine means implementing a new EngineConnPlugin (ECP), i.e. an engine plugin. The specific steps are as follows:
+
+1. Create a new maven module and introduce the maven dependency of ECP:
+```
+<dependency>
+    <groupId>com.webank.wedatasphere.linkis</groupId>
+    <artifactId>linkis-engineconn-plugin-core</artifactId>
+    <version>${linkis.version}</version>
+</dependency>
+```
+2. The main interfaces to implement for an ECP:
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;a) EngineConnPlugin: when an EngineConn starts, it first finds the corresponding EngineConnPlugin class and uses it as the entry point to obtain the implementations of the other core interfaces. This is the main interface that must be implemented.
+    
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;b)EngineConnFactory, which implements the logic of how to start an engine connector and how to start an engine executor, is an interface that must be implemented.
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;b.a Implement the "createEngineConn" method: return an "EngineConn" object, where "getEngine" returns an object that encapsulates the connection information with the underlying engine, and also contains Engine type information.
+    
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;b.b For engines that only support a single computing scenario, inherit "SingleExecutorEngineConnFactory" class and implement "createExecutor" method which returns the corresponding Executor.
+    
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;b.c For engines that support multiple computing scenarios, you need to inherit "MultiExecutorEngineConnFactory" and implement an ExecutorFactory for each computing type. "EngineConnPlugin" will obtain all ExecutorFactory through reflection and return the corresponding Executor according to the actual situation.
+    
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;c)EngineConnResourceFactory, it is used to limit the resources required to start an engine. Before the engine starts, it will use this as the basis to apply for resources from the "Linkis Manager". Not required, "GenericEngineResourceFactory" can be used by default.
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;d)EngineLaunchBuilder, it is used to encapsulate the necessary information that "EngineConnManager" can parse into the startup command. Not necessary, you can directly inherit "JavaProcessEngineConnLaunchBuilder".
+
+3.Implement Executor. As a real computing scene executor, Executor is the actual computing logic execution unit. It also abstracts various specific capabilities of the engine and provides various services such as locking, accessing status and obtaining logs. According to actual needs, Linkis provides the following derived Executor base classes by default. The class names and main functions are as follows:
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;a) SensibleExecutor: 
+       
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; i. Executor has multiple states, allowing Executor to switch states.
+         
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;ii. After the Executor switches the state, operations such as notifications are allowed. 
+         
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;b) YarnExecutor: refers to the Yarn type engine, which can obtain the "applicationId", "applicationURL" and queue.
+       
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;c) ResourceExecutor: refers to the engine's ability to dynamically change resources. It cooperates with the "requestExpectedResource" method to apply to RM for new resources each time it wants to change resources, and the "resourceUpdate" method is used to request new resources from RM each time the actual resource used by the engine changes.
+       
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;d) AccessibleExecutor: is a very important Executor base class. If the user's Executor inherits the base class, it means that the Engine can be accessed. Here we need to distinguish between "SensibleExecutor"'s "state" method and "AccessibleExecutor"'s "getEngineStatus" method. "state" method is used to get the engine status, and "getEngineStatus" is used to get the basic indicator metric data such as engine status, load and concurrency.
+       
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;e) At the same time, if AccessibleExecutor is inherited, it will trigger the Engine process to instantiate multiple "EngineReceiver" methods. "EngineReceiver" is used to process RPC requests from Entrance, EM and "LinkisMaster", marking the engine an accessible engine. If users have special RPC requirements, they can communicate with "AccessibleExecutor" by implementing the "RPCService" interface. 
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;f) ExecutableExecutor: it is a resident Executor base class. The resident Executor includes: Streaming applications in the production center, steps specified to run in independent mode after submission to "Schedulis", business applications of business users, etc.
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;g) StreamingExecutor: inherited from "ExecutableExecutor", it needs the ability to diagnose, do checkpoints, collect job information and monitor alarms.
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;h) ComputationExecutor: it is a commonly used interactive engine Executor which handles interactive execution tasks and has interactive capabilities such as status query and task killing.
+
+             
+## Actual Case         
+The following takes the Hive engine as a case to illustrate the implementation of each interface. The figure below shows all the core classes needed to implement a Hive engine.
+
+Hive engine is an interactive engine, so when implementing Executor, it inherits "ComputationExecutor" and introduces the following maven dependencies: 
+
+``` 
+<dependency>
+    <groupId>com.webank.wedatasphere.linkis</groupId>
+    <artifactId>linkis-computation-engineconn</artifactId>
+    <version>${linkis.version}</version>
+</dependency>
+```
+             
+As a subclass of "ComputationExecutor", "HiveEngineConnExecutor" implements the "executeLine" method. This method receives a line of execution statements. After calling the Hive interface for execution, it returns different "ExecuteResponse" to indicate success or failure. At the same time, in this method, through the interface provided in the "engineExecutorContext", the result set, log and progress transmission are realized. 
+
+The Hive engine only needs an Executor that executes HQL, so it is a single-executor engine. Therefore, when defining "HiveEngineConnFactory", it inherits "SingleExecutorEngineConnFactory" and implements the following two interfaces:
+a) createEngineConn: creates an object that contains "UserGroupInformation", "SessionState" and "HiveConf" as the encapsulation of the connection information with the underlying engine, sets it into the EngineConn object and returns it.
+b) createExecutor: creates a "HiveEngineConnExecutor" executor object based on the current engine connection information.
+
+The Hive engine is an ordinary Java process, so when implementing "EngineConnLaunchBuilder", it directly inherits "JavaProcessEngineConnLaunchBuilder". Parameters such as memory size, Java options and classpath can be adjusted through configuration; please refer to the "EnvConfiguration" class for details.
+
+The Hive engine uses "LoadInstanceResource" resources, so there is no need to implement "EngineResourceFactory"; the default "GenericEngineResourceFactory" is used directly, and the amount of resources is adjusted through configuration. Refer to the "EngineConnPluginConf" class for details.
+
+Implement "HiveEngineConnPlugin" and provide methods for creating the above implementation classes.
+
+
diff --git a/Linkis-Doc-master/en_US/Engine_Usage_Documentations/Hive_User_Manual.md b/Linkis-Doc-master/en_US/Engine_Usage_Documentations/Hive_User_Manual.md
new file mode 100644
index 0000000..8262706
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Engine_Usage_Documentations/Hive_User_Manual.md
@@ -0,0 +1,81 @@
+# Hive engine usage documentation
+
+This article mainly introduces the configuration, deployment and use of Hive engine in Linkis1.0.
+
+## 1. Environment configuration before Hive engine use
+
+If you want to use the hive engine on your server, you need to ensure that the following environment variables have been set correctly and that the user who started the engine has these environment variables.
+
+It is strongly recommended that you check these environment variables of the executing user before executing hive tasks.
+
+| Environment variable name | Environment variable content | Remarks |
+|-----------------|----------------|------|
+| JAVA_HOME | JDK installation path | Required |
+| HADOOP_HOME | Hadoop installation path | Required |
+| HADOOP_CONF_DIR | Hadoop configuration path | Required |
+| HIVE_CONF_DIR | Hive configuration path | Required |
+
+Table 1-1 Environmental configuration list
+
+## 2. Hive engine configuration and deployment
+
+### 2.1 Hive version selection and compilation
+
+Hive 1.x and Hive 2.x are supported, and Hive on MapReduce is supported by default. If you want to change to Hive on Tez, you need to make some changes in accordance with this PR:
+
+<https://github.com/WeBankFinTech/Linkis/pull/541>
+
+The hive version supported by default is 1.2.1. If you want to modify the hive version, for example to 2.3.3, you can find the linkis-engineplugin-hive module, change the \<hive.version\> tag to 2.3.3, and then compile this module separately.
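+
+For example, a hedged sketch of the version change and the rebuild step (where exactly the property lives depends on your source layout, and the build invocation below is just typical maven usage):
+
+```
+<!-- in the pom of the linkis-engineplugin-hive module -->
+<hive.version>2.3.3</hive.version>
+```
+
+```
+# rebuild only this module (run from the linkis-engineplugin-hive module directory)
+mvn clean install
+```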
+
+### 2.2 hive engineConn deployment and loading
+
+If your hive engine plug-in has already been compiled, you need to put the new plug-in in the specified location before it can be loaded; you can refer to the following article for details (a hedged deployment sketch is also given after the link)
+
+https://github.com/WeBankFinTech/Linkis/wiki/EngineConnPlugin%E5%BC%95%E6%93%8E%E6%8F%92%E4%BB%B6%E5%AE%89%E8%A3%85%E6%96%87%E6%A1%A3
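+
+As a rough sketch of the deployment step (the paths and restart command below are assumptions based on a typical Linkis 1.0 layout; confirm them against the document linked above):
+
+```
+# copy the newly compiled hive engine plugin into the engineplugin loading directory (path assumed)
+cp -r linkis-engineplugin-hive/target/out/hive ${LINKIS_HOME}/lib/linkis-engineconn-plugins/
+# restart the engineplugin service so the new plugin is loaded (script and service name may differ in your deployment)
+sh ${LINKIS_HOME}/sbin/linkis-daemon.sh restart cg-engineplugin
+```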
+
+### 2.3 Hive engine tags
+
+Engine management in Linkis1.0 is done through tags, so we need to insert the corresponding tag data into our database; the way of inserting is shown below.
+
+https://github.com/WeBankFinTech/Linkis/wiki/EngineConnPlugin%E5%BC%95%E6%93%8E%E6%8F%92%E4%BB%B6%E5%AE%89%E8%A3%85%E6%96%87%E6%A1%A3#22-%E7%AE%A1%E7%90%86%E5%8F%B0configuration%E9%85%8D%E7%BD%AE%E4%BF%AE%E6%94%B9%E5%8F%AF%E9%80%89
+
+## 3. Use of hive engine
+
+### Preparation for operation, queue setting
+
+Hive's MapReduce task requires yarn resources, so you need to set up the queue at the beginning
+
+![](../Images/EngineUsage/queue-set.png)
+
+Figure 3-1 Queue settings
+
+### 3.1 How to use Scriptis
+
+The use of Scriptis is the simplest. You can directly enter Scriptis, right-click the directory and create a new hive script and write hivesql code.
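+
+For example, a minimal hive script could look like the following (the database and table names are only placeholders):
+
+```
+show databases;
+select * from demo_db.demo_table limit 10;
+```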
+
+The hive engine is implemented by instantiating a hive Driver instance; the driver then submits the task, obtains the result set and displays it.
+
+![](../Images/EngineUsage/hive-run.png)
+
+Figure 3-2 Screenshot of the execution effect of hivesql
+
+### 3.2 How to use workflow
+
+DSS workflow also has a hive node, you can drag in the workflow node, then double-click to enter and edit the code, and then execute it in the form of a workflow.
+
+![](../Images/EngineUsage/workflow.png)
+
+Figure 3-5 The node where the workflow executes hive
+
+### 3.3 How to use Linkis Client
+
+Linkis also provides a client method to call hive tasks, namely through the SDK provided by LinkisClient. We provide two ways to call it, java and scala; for the specific usage, please refer to <https://github.com/WeBankFinTech/Linkis/wiki/Linkis1.0%E7%94%A8%E6%88%B7%E4%BD%BF%E7%94%A8%E6%96%87%E6%A1%A3>.
+
+## 4. Hive engine user settings
+
+In addition to the above engine configuration, users can also make custom settings, including the memory size of the hive Driver process, etc.
+
+![](../Images/EngineUsage/hive-config.png)
+
+Figure 4-1 User-defined configuration management console of hive
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Engine_Usage_Documentations/JDBC_User_Manual.md b/Linkis-Doc-master/en_US/Engine_Usage_Documentations/JDBC_User_Manual.md
new file mode 100644
index 0000000..35f3d7b
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Engine_Usage_Documentations/JDBC_User_Manual.md
@@ -0,0 +1,53 @@
+# JDBC engine usage documentation
+
+This article mainly introduces the configuration, deployment and use of JDBC engine in Linkis1.0.
+
+## 1. Environment configuration before using the JDBC engine
+
+If you want to use the JDBC engine on your server, you need to prepare the JDBC connection information, such as the connection address, user name and password of the MySQL database, etc.
+
+## 2. JDBC engine configuration and deployment
+
+### 2.1 JDBC version selection and compilation
+
+The JDBC engine does not need to be compiled by the user, and the compiled JDBC engine plug-in package can be used directly. Drivers that have been provided include MySQL, PostgreSQL, etc.
+
+### 2.2 JDBC engineConn deployment and loading
+
+Here you can use the default loading method to use it normally, just install it according to the standard version.
+
+### 2.3 JDBC engine tags
+
+Here you can use the default dml.sql to insert the required tag data, and then it can be used normally.
+
+## 3. The use of JDBC engine
+
+### Ready to operate
+
+You need to configure the JDBC connection information, including the connection address, user name and password.
+
+![](../Images/EngineUsage/jdbc-conf.png)
+
+Figure 3-1 JDBC configuration information
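+
+As a hedged sketch, the connection information configured here usually maps to key-value pairs like the following (the key names are typical for the Linkis JDBC engine and may differ in your version, so verify them in your management console):
+
+```
+wds.linkis.jdbc.connect.url=jdbc:mysql://127.0.0.1:3306/test_db
+wds.linkis.jdbc.driver=com.mysql.jdbc.Driver
+wds.linkis.jdbc.username=test_user
+wds.linkis.jdbc.password=******
+```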
+
+### 3.1 How to use Scriptis
+
+The way to use Scriptis is the simplest. You can go directly to Scriptis, right-click the directory and create a new JDBC script, write JDBC code and click Execute.
+
+The execution principle of JDBC is to load the JDBC Driver and submit sql to the SQL server for execution and obtain the result set and return.
+
+![](../Images/EngineUsage/jdbc-run.png)
+
+Figure 3-2 Screenshot of the execution effect of JDBC
+
+### 3.2 How to use workflow
+
+DSS workflow also has a JDBC node, you can drag into the workflow node, then double-click to enter and edit the code, and then execute it in the form of a workflow.
+
+### 3.3 How to use Linkis Client
+
+Linkis also provides a client way to call JDBC tasks, namely through the SDK provided by LinkisClient. We provide two ways to call it, java and scala; for the specific usage, please refer to <https://github.com/WeBankFinTech/Linkis/wiki/Linkis1.0%E7%94%A8%E6%88%B7%E4%BD%BF%E7%94%A8%E6%96%87%E6%A1%A3>.
+
+## 4. JDBC engine user settings
+
+JDBC user settings are mainly JDBC connection information, but it is recommended that users encrypt and manage this password and other information.
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Engine_Usage_Documentations/Python_User_Manual.md b/Linkis-Doc-master/en_US/Engine_Usage_Documentations/Python_User_Manual.md
new file mode 100644
index 0000000..64724e9
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Engine_Usage_Documentations/Python_User_Manual.md
@@ -0,0 +1,61 @@
+# Python engine usage documentation
+
+This article mainly introduces the configuration, deployment and use of the Python engine in Linkis1.0.
+
+## 1. Environment configuration before using Python engine
+
+If you want to use the python engine on your server, you need to ensure that the python execution directory and execution permissions are in the user's PATH.
+
+| Environment variable name | Environment variable content | Remarks |
+|------------|-----------------|--------------------------------|
+| python | python execution environment | Anaconda's python executor is recommended |
+
+Table 1-1 Environmental configuration list
+
+## 2. Python engine configuration and deployment
+
+### 2.1 Python version selection and compilation
+
+Both python2 and python3 are supported. You can simply change the configuration to switch the Python version, without recompiling the python engine.
+
+### 2.2 python engineConn deployment and loading
+
+Here you can use the default loading method to be used normally.
+
+### 2.3 tags of python engine
+
+Here you can use the default dml.sql to insert the required tag data, and then it can be used normally.
+
+## 3. Use of Python engine
+
+### Ready to operate
+
+Before submitting python on linkis, you only need to make sure that there is python path in your user's PATH.
+
+### 3.1 How to use Scriptis
+
+The way to use Scriptis is the simplest. You can directly enter Scriptis, right-click the directory and create a new python script, write python code and click Execute.
+
+The execution logic of python is to start a python Gateway through Py4j, and then the Python engine submits the code to the python executor for execution.
+
+![](../Images/EngineUsage/python-run.png)
+
+Figure 3-1 Screenshot of the execution effect of python
+
+### 3.2 How to use workflow
+
+The DSS workflow also has a python node, you can drag into the workflow node, then double-click to enter and edit the code, and then execute it in the form of a workflow.
+
+### 3.3 How to use Linkis Client
+
+Linkis also provides a client method to call python tasks, namely through the SDK provided by LinkisClient. We provide two ways to call it, java and scala; for the specific usage, please refer to <https://github.com/WeBankFinTech/Linkis/wiki/Linkis1.0%E7%94%A8%E6%88%B7%E4%BD%BF%E7%94%A8%E6%96%87%E6%A1%A3>.
+
+## 4. Python engine user settings
+
+In addition to the above engine configuration, users can also make custom settings, such as the version of python and some modules that python needs to load.
+
+![](../Images/EngineUsage/python-config.png)
+
+Figure 4-1 User-defined configuration management console of python
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Engine_Usage_Documentations/README.md b/Linkis-Doc-master/en_US/Engine_Usage_Documentations/README.md
new file mode 100644
index 0000000..cb9e5ef
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Engine_Usage_Documentations/README.md
@@ -0,0 +1,25 @@
+## 1 Overview
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Linkis, as a powerful computing middleware, can easily interface with different computing engines. By shielding the usage details of different computing engines, it provides a The unified use interface greatly reduces the operation and maintenance cost of deploying and applying Linkis's big data platform. At present, Linkis has docked several mainstream computing engines, which basically cover the data requirements in production, in order t [...]
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;The engine is a component that provides users with data processing and analysis capabilities. Currently, it has been connected to Linkis's engine, including mainstream big data computing engines Spark, Hive, Presto, etc. , There are also engines with the ability to process data in scripts such as python and Shell. DataSphereStudio is a one-stop data operation platform docked with Linkis. Users can conveniently use the engine supported by Li [...]
+
+| Engine | Whether to support Scriptis | Whether to support workflow |
+| ---- | ---- | ---- |
+| Spark | Support | Support |
+| Hive | Support | Support |
+| Presto | Support | Support |
+| ElasticSearch | Support | Support |
+| Python | Support | Support |
+| Shell | Support | Support |
+| JDBC | Support | Support |
+| MySQL | Support | Support |
+
+## 2. Document structure
+You can refer to the following documents for the related documents of the engines that have been accessed.
+- [Spark Engine Usage Document](./../Engine_Usage_Documentations/Spark_User_Manual.md)
+- [Hive Engine Usage Document](./../Engine_Usage_Documentations/Hive_User_Manual.md)
+- [Presto Engine Usage Document](./../Engine_Usage_Documentations/Presto_User_Manual.md)
+- [ElasticSearch Engine Usage Document](./../Engine_Usage_Documentations/ElasticSearch_User_Manual.md)
+- [Python engine usage documentation](./../Engine_Usage_Documentations/Python_User_Manual.md)
+- [Shell Engine Usage Document](./../Engine_Usage_Documentations/Shell_User_Manual.md)
+- [JDBC Engine Usage Document](./../Engine_Usage_Documentations/JDBC_User_Manual.md)
+- [MLSQL Engine Usage Document](./../Engine_Usage_Documentations/MLSQL_User_Manual.md)
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Engine_Usage_Documentations/Shell_User_Manual.md b/Linkis-Doc-master/en_US/Engine_Usage_Documentations/Shell_User_Manual.md
new file mode 100644
index 0000000..292d2c4
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Engine_Usage_Documentations/Shell_User_Manual.md
@@ -0,0 +1,55 @@
+# Shell engine usage document
+
+This article mainly introduces the configuration, deployment and use of Shell engine in Linkis1.0
+## 1. The environment configuration before using the Shell engine
+
+If you want to use the shell engine on your server, you need to ensure that the user's PATH has the bash execution directory and execution permissions.
+
+| Environment variable name | Environment variable content | Remarks             |
+|---------------------------|------------------------------|---------------------|
+| sh execution environment  | bash environment variables    | bash is recommended |
+
+Table 1-1 Environmental configuration list
+
+## 2. Shell engine configuration and deployment
+
+### 2.1 Shell version selection and compilation
+
+The shell engine does not need to be compiled by the user, and the compiled shell engine plug-in package can be used directly.
+### 2.2 shell engineConn deployment and loading
+
+Here you can use the default loading method to be used normally.
+
+### 2.3 Labels of the shell engine
+
+Here you can use the default dml.sql to insert the required tag data, and then it can be used normally.
+
+## 3. Use of Shell Engine
+
+### Ready to operate
+
+Before submitting the shell on linkis, you only need to ensure that there is the path of the shell in your user's $PATH.
+
+### 3.1 How to use Scriptis
+
+The use of Scriptis is the simplest. You can directly enter Scriptis, right-click the directory and create a new shell script, write shell code and click Execute.
+
+The execution principle of the shell is that the shell engine starts a system process to execute through the ProcessBuilder that comes with java, and redirects the output of the process to the engine and writes it to the log.
+
+![](../Images/EngineUsage/shell-run.png)
+
+Figure 3-1 Screenshot of shell execution effect
+
+### 3.2 How to use workflow
+
+The DSS workflow also has a shell node. You can drag in the workflow node, then double-click to enter and edit the code, and then execute it in the form of a workflow.
+
+One point to note for shell execution: if the script contains multiple lines, the success of the workflow node is determined by the last command. For example, even if the first two lines fail, as long as the return value of the last line is 0, the node is considered successful.
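+
+The following sketch illustrates this behaviour; the file paths are only placeholders:
+
+```
+cat /path/that/does/not/exist      # fails with a non-zero exit code
+grep "foo" /another/missing/file   # fails as well
+echo "done"                        # exit code 0, so the node is marked as successful
+```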
+
+### 3.3 How to use Linkis Client
+
+Linkis also provides a client method to call shell tasks, namely through the SDK provided by LinkisClient. We provide two ways to call it, java and scala; for the specific usage, please refer to <https://github.com/WeBankFinTech/Linkis/wiki/Linkis1.0%E7%94%A8%E6%88%B7%E4%BD%BF%E7%94%A8%E6%96%87%E6%A1%A3>.
+
+## 4. Shell engine user settings
+
+The shell engine can generally set the maximum memory of the engine JVM.
diff --git a/Linkis-Doc-master/en_US/Engine_Usage_Documentations/Spark_User_Manual.md b/Linkis-Doc-master/en_US/Engine_Usage_Documentations/Spark_User_Manual.md
new file mode 100644
index 0000000..9932184
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Engine_Usage_Documentations/Spark_User_Manual.md
@@ -0,0 +1,91 @@
+# Spark engine usage documentation
+
+This article mainly introduces the configuration, deployment and use of spark engine in Linkis1.0.
+
+## 1. Environment configuration before using Spark engine
+
+If you want to use the spark engine on your server, you need to ensure that the following environment variables have been set correctly and that the user who started the engine has these environment variables.
+
+It is strongly recommended that you check these environment variables of the executing user before executing spark tasks.
+
+| Environment variable name | Environment variable content | Remarks |
+|---------------------------|------------------------------|------|
+| JAVA_HOME | JDK installation path | Required |
+| HADOOP_HOME | Hadoop installation path | Required |
+| HADOOP_CONF_DIR | Hadoop configuration path | Required |
+| HIVE\_CONF_DIR | Hive configuration path | Required |
+| SPARK_HOME | Spark installation path | Required |
+| SPARK_CONF_DIR | Spark configuration path | Required |
+| python | python | Anaconda's python is recommended as the default python |
+
+Table 1-1 Environmental configuration list
+
+## 2. Configuration and deployment of Spark engine
+
+### 2.1 Selection and compilation of spark version
+
+In theory, Linkis1.0 supports all versions of spark2.x and above. Spark 2.4.3 is the default supported version. If you want to use your spark version, such as spark2.1.0, you only need to modify the version of the plug-in spark and then compile it. Specifically, you can find the linkis-engineplugin-spark module, change the \<spark.version\> tag to 2.1.0, and then compile this module separately.
+
+### 2.2 spark engineConn deployment and loading
+
+If your spark engine plug-in has already been compiled, you need to put the new plug-in in the specified location before it can be loaded; you can refer to the following article for details
+
+https://github.com/WeBankFinTech/Linkis/wiki/EngineConnPlugin%E5%BC%95%E6%93%8E%E6%8F%92%E4%BB%B6%E5%AE%89%E8%A3%85%E6%96%87%E6%A1%A3
+
+### 2.3 tags of spark engine
+
+Engine management in Linkis1.0 is done through tags, so we need to insert the corresponding tag data into our database; the way of inserting is shown below.
+
+https://github.com/WeBankFinTech/Linkis/wiki/EngineConnPlugin%E5%BC%95%E6%93%8E%E6%8F%92%E4%BB%B6%E5%AE%89%E8%A3%85%E6%96%87%E6%A1%A3\#22-%E7%AE%A1%E7%90%86%E5%8F%B0configuration%E9%85%8D%E7%BD%AE%E4%BF%AE%E6%94%B9%E5%8F%AF%E9%80%89
+
+## 3. Use of spark engine
+
+### Preparation for operation, queue setting
+
+Because spark execution requires queue resources on yarn, users must set up a queue they are allowed to use before executing tasks.
+
+![](../Images/EngineUsage/queue-set.png)
+
+Figure 3-1 Queue settings
+
+### 3.1 How to use Scriptis
+
+The use of Scriptis is the simplest. You can directly enter Scriptis and create a new sql, scala or pyspark script for execution.
+
+The sql method is the simplest. You can create a new sql script, then write and execute it, and the progress will be displayed during execution. If the user does not have a spark engine yet, executing the sql will first start a spark session (this may take some time); after the SparkSession is initialized, the sql starts to execute.
+
+![](../Images/EngineUsage/sparksql-run.png)
+
+Figure 3-2 Screenshot of the execution effect of sparksql
+
+For spark-scala tasks, we have initialized sqlContext and other variables, and users can directly use this sqlContext to execute sql.
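+
+For example, in a spark-scala script you can query directly with the pre-initialized sqlContext (the database and table names below are only placeholders):
+
+```
+// sqlContext is already created by the spark engine in a spark-scala script
+val df = sqlContext.sql("select * from demo_db.demo_table limit 10")
+df.show()
+```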
+
+![](../Images/EngineUsage/scala-run.png)
+
+Figure 3-3 Execution effect diagram of spark-scala
+
+Similarly, in the way of pyspark, we have also initialized the SparkSession, and users can directly use spark.sql to execute SQL.
+
+![](../Images/EngineUsage/pyspakr-run.png)
+Figure 3-4 pyspark execution mode
+
+### 3.2 How to use workflow
+
+DSS workflow also has three spark nodes. You can drag in workflow nodes, such as sql, scala or pyspark nodes, and then double-click to enter and edit the code, and then execute in the form of workflow.
+
+![](../Images/EngineUsage/workflow.png)
+
+Figure 3-5 The node where the workflow executes spark
+
+### 3.3 How to use Linkis Client
+
+Linkis also provides a client method to call spark tasks, namely through the SDK provided by LinkisClient. We provide two ways to call it, java and scala; for the specific usage, please refer to <https://github.com/WeBankFinTech/Linkis/wiki/Linkis1.0%E7%94%A8%E6%88%B7%E4%BD%BF%E7%94%A8%E6%96%87%E6%A1%A3>.
+
+## 4. Spark engine user settings
+
+In addition to the above engine configuration, users can also make custom settings, such as the number of spark session executors and the memory of the executors. These parameters are for users to set their own spark parameters more freely, and other spark parameters can also be modified, such as the python version of pyspark.
+
+![](../Images/EngineUsage/spark-conf.png)
+
+Figure 4-1 Spark user-defined configuration management console
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Add_an_EngineConn/add_an_EngineConn_flow_chart.png b/Linkis-Doc-master/en_US/Images/Architecture/Add_an_EngineConn/add_an_EngineConn_flow_chart.png
new file mode 100644
index 0000000..2e71b42
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Add_an_EngineConn/add_an_EngineConn_flow_chart.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/EngineConn/engineconn-01.png b/Linkis-Doc-master/en_US/Images/Architecture/EngineConn/engineconn-01.png
new file mode 100644
index 0000000..d95da89
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/EngineConn/engineconn-01.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Gateway/gateway_server_dispatcher.png b/Linkis-Doc-master/en_US/Images/Architecture/Gateway/gateway_server_dispatcher.png
new file mode 100644
index 0000000..9cdc918
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Gateway/gateway_server_dispatcher.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Gateway/gateway_server_global.png b/Linkis-Doc-master/en_US/Images/Architecture/Gateway/gateway_server_global.png
new file mode 100644
index 0000000..584574e
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Gateway/gateway_server_global.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Gateway/gatway_websocket.png b/Linkis-Doc-master/en_US/Images/Architecture/Gateway/gatway_websocket.png
new file mode 100644
index 0000000..fcac318
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Gateway/gatway_websocket.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Job_submission_preparation_and_execution_process/execution.png b/Linkis-Doc-master/en_US/Images/Architecture/Job_submission_preparation_and_execution_process/execution.png
new file mode 100644
index 0000000..1abc43b
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Job_submission_preparation_and_execution_process/execution.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Job_submission_preparation_and_execution_process/orchestrate.png b/Linkis-Doc-master/en_US/Images/Architecture/Job_submission_preparation_and_execution_process/orchestrate.png
new file mode 100644
index 0000000..9de0a5d
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Job_submission_preparation_and_execution_process/orchestrate.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Job_submission_preparation_and_execution_process/overall.png b/Linkis-Doc-master/en_US/Images/Architecture/Job_submission_preparation_and_execution_process/overall.png
new file mode 100644
index 0000000..68b5e19
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Job_submission_preparation_and_execution_process/overall.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Job_submission_preparation_and_execution_process/physical_tree.png b/Linkis-Doc-master/en_US/Images/Architecture/Job_submission_preparation_and_execution_process/physical_tree.png
new file mode 100644
index 0000000..7998704
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Job_submission_preparation_and_execution_process/physical_tree.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Job_submission_preparation_and_execution_process/result_acquisition.png b/Linkis-Doc-master/en_US/Images/Architecture/Job_submission_preparation_and_execution_process/result_acquisition.png
new file mode 100644
index 0000000..c2dd9f3
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Job_submission_preparation_and_execution_process/result_acquisition.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Job_submission_preparation_and_execution_process/submission.png b/Linkis-Doc-master/en_US/Images/Architecture/Job_submission_preparation_and_execution_process/submission.png
new file mode 100644
index 0000000..f6bd9a9
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Job_submission_preparation_and_execution_process/submission.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/LabelManager/label_manager_builder.png b/Linkis-Doc-master/en_US/Images/Architecture/LabelManager/label_manager_builder.png
new file mode 100644
index 0000000..4896981
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/LabelManager/label_manager_builder.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/LabelManager/label_manager_global.png b/Linkis-Doc-master/en_US/Images/Architecture/LabelManager/label_manager_global.png
new file mode 100644
index 0000000..ca4151a
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/LabelManager/label_manager_global.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/LabelManager/label_manager_scorer.png b/Linkis-Doc-master/en_US/Images/Architecture/LabelManager/label_manager_scorer.png
new file mode 100644
index 0000000..7213b0b
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/LabelManager/label_manager_scorer.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Linkis0.X-NewEngine-architecture.png b/Linkis-Doc-master/en_US/Images/Architecture/Linkis0.X-NewEngine-architecture.png
new file mode 100644
index 0000000..57c83b3
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Linkis0.X-NewEngine-architecture.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Linkis0.X-services-list.png b/Linkis-Doc-master/en_US/Images/Architecture/Linkis0.X-services-list.png
new file mode 100644
index 0000000..c669abf
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Linkis0.X-services-list.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Linkis1.0-EngineConn-architecture.png b/Linkis-Doc-master/en_US/Images/Architecture/Linkis1.0-EngineConn-architecture.png
new file mode 100644
index 0000000..d95da89
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Linkis1.0-EngineConn-architecture.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Linkis1.0-NewEngine-architecture.png b/Linkis-Doc-master/en_US/Images/Architecture/Linkis1.0-NewEngine-architecture.png
new file mode 100644
index 0000000..b1d60bf
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Linkis1.0-NewEngine-architecture.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Linkis1.0-architecture.png b/Linkis-Doc-master/en_US/Images/Architecture/Linkis1.0-architecture.png
new file mode 100644
index 0000000..825672b
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Linkis1.0-architecture.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Linkis1.0-newEngine-initialization.png b/Linkis-Doc-master/en_US/Images/Architecture/Linkis1.0-newEngine-initialization.png
new file mode 100644
index 0000000..003b38e
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Linkis1.0-newEngine-initialization.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Linkis1.0-services-list.png b/Linkis-Doc-master/en_US/Images/Architecture/Linkis1.0-services-list.png
new file mode 100644
index 0000000..f768545
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Linkis1.0-services-list.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/PublicEnhencementArchitecture.png b/Linkis-Doc-master/en_US/Images/Architecture/PublicEnhencementArchitecture.png
new file mode 100644
index 0000000..bcf72a5
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/PublicEnhencementArchitecture.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-01.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-01.png
new file mode 100644
index 0000000..f61c49a
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-01.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-02.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-02.png
new file mode 100644
index 0000000..a2e1022
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-02.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-01.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-01.png
new file mode 100644
index 0000000..5f4272f
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-01.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-02.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-02.png
new file mode 100644
index 0000000..9bb177a
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-02.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-03.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-03.png
new file mode 100644
index 0000000..00d1f4a
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-03.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-04.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-04.png
new file mode 100644
index 0000000..439c8e2
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-04.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-05.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-05.png
new file mode 100644
index 0000000..081d514
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-05.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-01.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-01.png
new file mode 100644
index 0000000..e343579
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-01.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-02.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-02.png
new file mode 100644
index 0000000..012eb65
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-02.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-03.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-03.png
new file mode 100644
index 0000000..c3a43b9
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-03.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-01.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-01.png
new file mode 100644
index 0000000..719599a
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-01.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-02.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-02.png
new file mode 100644
index 0000000..2277a70
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-02.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-03.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-03.png
new file mode 100644
index 0000000..df58d96
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-03.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-04.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-04.png
new file mode 100644
index 0000000..1e13445
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-04.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-01.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-01.png
new file mode 100644
index 0000000..7e410fb
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-01.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-02.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-02.png
new file mode 100644
index 0000000..097b7f1
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-02.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-03.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-03.png
new file mode 100644
index 0000000..7a4d462
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-03.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-persistence-01.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-persistence-01.png
new file mode 100644
index 0000000..fdd6623
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-persistence-01.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-01.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-01.png
new file mode 100644
index 0000000..b366462
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-01.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-02.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-02.png
new file mode 100644
index 0000000..2a1e403
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-02.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-03.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-03.png
new file mode 100644
index 0000000..32336eb
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-03.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-04.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-04.png
new file mode 100644
index 0000000..fdb60fc
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-04.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-05.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-05.png
new file mode 100644
index 0000000..45dcc43
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-05.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-06.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-06.png
new file mode 100644
index 0000000..2175704
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-06.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-07.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-07.png
new file mode 100644
index 0000000..9d357af
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-07.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-01.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-01.png
new file mode 100644
index 0000000..b08efd3
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-01.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-02.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-02.png
new file mode 100644
index 0000000..13ca37e
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-02.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-03.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-03.png
new file mode 100644
index 0000000..36a4d96
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-03.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-04.png b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-04.png
new file mode 100644
index 0000000..0a5ae1d
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-04.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/bml-02.png b/Linkis-Doc-master/en_US/Images/Architecture/bml-02.png
new file mode 100644
index 0000000..fed79f7
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/bml-02.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/linkis-engineConnPlugin-01.png b/Linkis-Doc-master/en_US/Images/Architecture/linkis-engineConnPlugin-01.png
new file mode 100644
index 0000000..2d2d134
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/linkis-engineConnPlugin-01.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/linkis-intro-01.png b/Linkis-Doc-master/en_US/Images/Architecture/linkis-intro-01.png
new file mode 100644
index 0000000..60b575d
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/linkis-intro-01.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/linkis-intro-02.png b/Linkis-Doc-master/en_US/Images/Architecture/linkis-intro-02.png
new file mode 100644
index 0000000..a31e681
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/linkis-intro-02.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/linkis-microservice-gov-01.png b/Linkis-Doc-master/en_US/Images/Architecture/linkis-microservice-gov-01.png
new file mode 100644
index 0000000..ac46424
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/linkis-microservice-gov-01.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/linkis-microservice-gov-03.png b/Linkis-Doc-master/en_US/Images/Architecture/linkis-microservice-gov-03.png
new file mode 100644
index 0000000..b53c8e1
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/linkis-microservice-gov-03.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Architecture/linkis-publicService-01.png b/Linkis-Doc-master/en_US/Images/Architecture/linkis-publicService-01.png
new file mode 100644
index 0000000..d503573
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Architecture/linkis-publicService-01.png differ
diff --git a/Linkis-Doc-master/en_US/Images/EngineUsage/hive-config.png b/Linkis-Doc-master/en_US/Images/EngineUsage/hive-config.png
new file mode 100644
index 0000000..9b3df01
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/EngineUsage/hive-config.png differ
diff --git a/Linkis-Doc-master/en_US/Images/EngineUsage/hive-run.png b/Linkis-Doc-master/en_US/Images/EngineUsage/hive-run.png
new file mode 100644
index 0000000..287b1ab
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/EngineUsage/hive-run.png differ
diff --git a/Linkis-Doc-master/en_US/Images/EngineUsage/jdbc-conf.png b/Linkis-Doc-master/en_US/Images/EngineUsage/jdbc-conf.png
new file mode 100644
index 0000000..39397d3
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/EngineUsage/jdbc-conf.png differ
diff --git a/Linkis-Doc-master/en_US/Images/EngineUsage/jdbc-run.png b/Linkis-Doc-master/en_US/Images/EngineUsage/jdbc-run.png
new file mode 100644
index 0000000..fe51598
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/EngineUsage/jdbc-run.png differ
diff --git a/Linkis-Doc-master/en_US/Images/EngineUsage/pyspakr-run.png b/Linkis-Doc-master/en_US/Images/EngineUsage/pyspakr-run.png
new file mode 100644
index 0000000..c80c85b
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/EngineUsage/pyspakr-run.png differ
diff --git a/Linkis-Doc-master/en_US/Images/EngineUsage/python-config.png b/Linkis-Doc-master/en_US/Images/EngineUsage/python-config.png
new file mode 100644
index 0000000..2bf1791
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/EngineUsage/python-config.png differ
diff --git a/Linkis-Doc-master/en_US/Images/EngineUsage/python-run.png b/Linkis-Doc-master/en_US/Images/EngineUsage/python-run.png
new file mode 100644
index 0000000..65467af
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/EngineUsage/python-run.png differ
diff --git a/Linkis-Doc-master/en_US/Images/EngineUsage/queue-set.png b/Linkis-Doc-master/en_US/Images/EngineUsage/queue-set.png
new file mode 100644
index 0000000..735a670
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/EngineUsage/queue-set.png differ
diff --git a/Linkis-Doc-master/en_US/Images/EngineUsage/scala-run.png b/Linkis-Doc-master/en_US/Images/EngineUsage/scala-run.png
new file mode 100644
index 0000000..7c01aad
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/EngineUsage/scala-run.png differ
diff --git a/Linkis-Doc-master/en_US/Images/EngineUsage/shell-run.png b/Linkis-Doc-master/en_US/Images/EngineUsage/shell-run.png
new file mode 100644
index 0000000..734bdb2
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/EngineUsage/shell-run.png differ
diff --git a/Linkis-Doc-master/en_US/Images/EngineUsage/spark-conf.png b/Linkis-Doc-master/en_US/Images/EngineUsage/spark-conf.png
new file mode 100644
index 0000000..353dbd6
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/EngineUsage/spark-conf.png differ
diff --git a/Linkis-Doc-master/en_US/Images/EngineUsage/sparksql-run.png b/Linkis-Doc-master/en_US/Images/EngineUsage/sparksql-run.png
new file mode 100644
index 0000000..f0b1d1b
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/EngineUsage/sparksql-run.png differ
diff --git a/Linkis-Doc-master/en_US/Images/EngineUsage/workflow.png b/Linkis-Doc-master/en_US/Images/EngineUsage/workflow.png
new file mode 100644
index 0000000..3a5919f
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/EngineUsage/workflow.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Linkis_1.0_architecture.png b/Linkis-Doc-master/en_US/Images/Linkis_1.0_architecture.png
new file mode 100644
index 0000000..9b6cc90
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Linkis_1.0_architecture.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/Q&A.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/Q&A.png
new file mode 100644
index 0000000..121d7f3
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/Q&A.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/code-fix-01.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/code-fix-01.png
new file mode 100644
index 0000000..27bdddb
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/code-fix-01.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/db-config-01.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/db-config-01.png
new file mode 100644
index 0000000..fa1f1c8
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/db-config-01.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/db-config-02.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/db-config-02.png
new file mode 100644
index 0000000..c2f8443
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/db-config-02.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-01.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-01.png
new file mode 100644
index 0000000..9834b3d
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-01.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-02.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-02.png
new file mode 100644
index 0000000..c7621b5
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-02.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-03.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-03.png
new file mode 100644
index 0000000..16788c3
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-03.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-04.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-04.png
new file mode 100644
index 0000000..cb944ee
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-04.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-05.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-05.png
new file mode 100644
index 0000000..2c5972c
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-05.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-06.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-06.png
new file mode 100644
index 0000000..a64cec6
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-06.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-07.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-07.png
new file mode 100644
index 0000000..935d5bc
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-07.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-08.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-08.png
new file mode 100644
index 0000000..d2a3328
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/debug-08.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/hive-config-01.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/hive-config-01.png
new file mode 100644
index 0000000..6bd0edb
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/hive-config-01.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-01.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-01.png
new file mode 100644
index 0000000..01090d1
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-01.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-02.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-02.png
new file mode 100644
index 0000000..0f68f12
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-02.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-03.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-03.png
new file mode 100644
index 0000000..8fb4464
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-03.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-04.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-04.png
new file mode 100644
index 0000000..5635a20
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-04.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-05.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-05.png
new file mode 100644
index 0000000..c341a9d
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-05.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-06.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-06.png
new file mode 100644
index 0000000..b0624ef
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-06.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-07.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-07.png
new file mode 100644
index 0000000..402f0c9
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-07.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-08.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-08.png
new file mode 100644
index 0000000..27c1824
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-08.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-09.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-09.png
new file mode 100644
index 0000000..5b27b4b
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-09.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-10.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-10.png
new file mode 100644
index 0000000..7c361e7
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/linkis-exception-10.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/page-show-01.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/page-show-01.png
new file mode 100644
index 0000000..d953cb6
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/page-show-01.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/page-show-02.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/page-show-02.png
new file mode 100644
index 0000000..af273bb
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/page-show-02.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/page-show-03.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/page-show-03.png
new file mode 100644
index 0000000..c36bb30
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/page-show-03.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/searching_keywords.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/searching_keywords.png
new file mode 100644
index 0000000..cada716
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/searching_keywords.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/shell-error-01.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/shell-error-01.png
new file mode 100644
index 0000000..910150e
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/shell-error-01.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/shell-error-02.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/shell-error-02.png
new file mode 100644
index 0000000..71d5e7e
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/shell-error-02.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/shell-error-03.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/shell-error-03.png
new file mode 100644
index 0000000..4bb9cfe
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/shell-error-03.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/shell-error-04.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/shell-error-04.png
new file mode 100644
index 0000000..c2df857
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/shell-error-04.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/shell-error-05.png b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/shell-error-05.png
new file mode 100644
index 0000000..3635584
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tuning_and_Troubleshooting/shell-error-05.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-01.png b/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-01.png
new file mode 100644
index 0000000..9834b3d
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-01.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-02.png b/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-02.png
new file mode 100644
index 0000000..c7621b5
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-02.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-03.png b/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-03.png
new file mode 100644
index 0000000..16788c3
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-03.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-04.png b/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-04.png
new file mode 100644
index 0000000..cb944ee
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-04.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-05.png b/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-05.png
new file mode 100644
index 0000000..2c5972c
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-05.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-06.png b/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-06.png
new file mode 100644
index 0000000..a64cec6
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-06.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-07.png b/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-07.png
new file mode 100644
index 0000000..935d5bc
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-07.png differ
diff --git a/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-08.png b/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-08.png
new file mode 100644
index 0000000..d2a3328
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/Tunning_And_Troubleshooting/debug-08.png differ
diff --git a/Linkis-Doc-master/en_US/Images/deployment/Linkis1.0_combined_eureka.png b/Linkis-Doc-master/en_US/Images/deployment/Linkis1.0_combined_eureka.png
new file mode 100644
index 0000000..809dbee
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/deployment/Linkis1.0_combined_eureka.png differ
diff --git a/Linkis-Doc-master/en_US/Images/wedatasphere_contact_01.png b/Linkis-Doc-master/en_US/Images/wedatasphere_contact_01.png
new file mode 100644
index 0000000..5a3d80e
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/wedatasphere_contact_01.png differ
diff --git a/Linkis-Doc-master/en_US/Images/wedatasphere_stack_Linkis.png b/Linkis-Doc-master/en_US/Images/wedatasphere_stack_Linkis.png
new file mode 100644
index 0000000..36060b9
Binary files /dev/null and b/Linkis-Doc-master/en_US/Images/wedatasphere_stack_Linkis.png differ
diff --git a/Linkis-Doc-master/en_US/Tuning_and_Troubleshooting/Configuration.md b/Linkis-Doc-master/en_US/Tuning_and_Troubleshooting/Configuration.md
new file mode 100644
index 0000000..c4652ea
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Tuning_and_Troubleshooting/Configuration.md
@@ -0,0 +1,217 @@
+# Linkis1.0 Configurations
+
+> The configuration of Linkis1.0 is simplified on the basis of Linkis0.x. A public configuration file linkis.properties is provided in the conf directory so that common configuration parameters no longer need to be configured in multiple microservices at the same time. This document lists the parameters of Linkis1.0 module by module.
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Please note: this article only lists the Linkis configuration parameters that affect operating performance or depend on the environment. Many parameters that users do not need to care about have been omitted; interested users can browse the source code.
+
+### 1. General configuration
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;The general configuration can be set in the global linkis.properties; once set there, it takes effect for every microservice.
+
+#### 1.1 Global configurations
+
+| Parameter name | Default value | Description |
+| ------------------------- | ------- | ------------------------------------------------------------|
+| wds.linkis.encoding | utf-8 | Linkis default encoding format |
+| wds.linkis.date.pattern | yyyy-MM-dd'T'HH:mm:ssZ | Default date format |
+| wds.linkis.test.mode | false | Whether to enable debugging mode, if set to true, all microservices support password-free login, and all EngineConn open remote debugging ports |
+| wds.linkis.test.user | None | When wds.linkis.test.mode=true, the default login user for password-free login |
+| wds.linkis.home | /appcom/Install/LinkisInstall | Linkis installation directory, if it does not exist, it will automatically get the value of LINKIS_HOME |
+| wds.linkis.httpclient.default.connect.timeOut | 50000 | Linkis HttpClient default connection timeout |
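+
+As a minimal illustrative sketch (the file location follows the wds.linkis.home value above and may differ in your deployment), these global parameters are set once in the shared conf/linkis.properties and picked up by every microservice:
+
+```bash
+# Illustrative only: append a few of the global parameters from the table above
+# to the shared configuration file read by all microservices.
+cat >> /appcom/Install/LinkisInstall/conf/linkis.properties <<'EOF'
+wds.linkis.encoding=utf-8
+wds.linkis.test.mode=false
+wds.linkis.httpclient.default.connect.timeOut=50000
+EOF
+```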
+
+#### 1.2 LDAP configurations
+
+| Parameter name | Default value | Description |
+| ------------------------- | ------- | ------------------------------------------------------------|
+| wds.linkis.ldap.proxy.url | None | LDAP URL address |
+| wds.linkis.ldap.proxy.baseDN | None | LDAP baseDN address |
+| wds.linkis.ldap.proxy.userNameFormat | None | |
+
+#### 1.3 Hadoop configuration parameters
+
+| Parameter name | Default value | Description |
+| ------------------------- | ------- | ------------------------------------------------------------|
+| wds.linkis.hadoop.root.user | hadoop | HDFS super user |
+| wds.linkis.filesystem.hdfs.root.path | None | User's HDFS default root path |
+| wds.linkis.keytab.enable | false | Whether to enable kerberos |
+| wds.linkis.keytab.file | /appcom/keytab | Kerberos keytab path, effective only when wds.linkis.keytab.enable=true |
+| wds.linkis.keytab.host.enabled | false | |
+| wds.linkis.keytab.host | 127.0.0.1 | |
+| hadoop.config.dir | None | If not configured, it will be read from the environment variable HADOOP_CONF_DIR |
+| wds.linkis.hadoop.external.conf.dir.prefix | /appcom/config/external-conf/hadoop | hadoop additional configuration |
+
+#### 1.4 Linkis RPC configuration parameters
+
+| Parameter name | Default value | Description |
+| ------------------------- | ------- | ------------------------------------------------------------|
+| wds.linkis.rpc.broadcast.thread.num | 10 | Linkis RPC broadcast thread number (**Recommended default value**) |
+| wds.linkis.ms.rpc.sync.timeout | 60000 | Linkis RPC Receiver's default processing timeout time |
+| wds.linkis.rpc.eureka.client.refresh.interval | 1s | Refresh interval of Eureka client's microservice list (**Recommended default value**) |
+| wds.linkis.rpc.eureka.client.refresh.wait.time.max | 1m | Refresh maximum waiting time (**recommended default value**) |
+| wds.linkis.rpc.receiver.asyn.consumer.thread.max | 10 | Maximum number of Receiver Consumer threads (**If there are many online users, it is recommended to increase this parameter appropriately**) |
+| wds.linkis.rpc.receiver.asyn.consumer.freeTime.max | 2m | Receiver Consumer maximum idle time |
+| wds.linkis.rpc.receiver.asyn.queue.size.max | 1000 | The maximum number of buffers in the receiver consumption queue (**If there are many online users, it is recommended to increase this parameter appropriately**) |
+| wds.linkis.rpc.sender.asyn.consumer.thread.max | 5 | Maximum number of Sender Consumer threads |
+| wds.linkis.rpc.sender.asyn.consumer.freeTime.max | 2m | Sender Consumer Maximum Free Time |
+| wds.linkis.rpc.sender.asyn.queue.size.max | 300 | Sender consumption queue maximum buffer number |
+
+### 2. Computation Governance configuration parameters
+
+#### 2.1 Entrance configuration parameters
+
+| Parameter name | Default value | Description |
+| ------------------------- | ------- | ------------------------------------------------------------|
+| wds.linkis.spark.engine.version | 2.4.3 | The default Spark version used when the user submits a script without specifying a version |
+| wds.linkis.hive.engine.version | 1.2.1 | The default Hive version used when the user submits a script without a specified version |
+| wds.linkis.python.engine.version | python2 | The default Python version used when the user submits a script without specifying a version |
+| wds.linkis.jdbc.engine.version | 4 | The default JDBC version used when the user submits the script without specifying the version |
+| wds.linkis.shell.engine.version | 1 | The default shell version used when the user submits a script without specifying a version |
+| wds.linkis.appconn.engine.version | v1 | The default AppConn version used when the user submits a script without a specified version |
+| wds.linkis.entrance.scheduler.maxParallelismUsers | 1000 | Maximum number of concurrent users supported by Entrance |
+| wds.linkis.entrance.job.persist.wait.max | 5m | Maximum time for Entrance to wait for JobHistory to persist a Job |
+| wds.linkis.entrance.config.log.path | None | If not configured, the value of wds.linkis.filesystem.hdfs.root.path is used by default |
+| wds.linkis.default.requestApplication.name | IDE | The default submission system when the submission system is not specified |
+| wds.linkis.default.runType | sql | The default script type when the script type is not specified |
+| wds.linkis.warn.log.exclude | org.apache,hive.ql,hive.metastore,com.netflix,com.webank.wedatasphere | Real-time WARN-level logs that are not output to the client by default |
+| wds.linkis.log.exclude | org.apache, hive.ql, hive.metastore, com.netflix, com.webank.wedatasphere, com.webank | Real-time INFO-level logs that are not output to the client by default |
+| wds.linkis.instance | 3 | User's default number of concurrent jobs per engine |
+| wds.linkis.max.ask.executor.time | 5m | Apply to LinkisManager for the maximum time available for EngineConn |
+| wds.linkis.hive.special.log.include | org.apache.hadoop.hive.ql.exec.Task | When pushing Hive logs to the client, which logs are not filtered by default |
+| wds.linkis.spark.special.log.include | com.webank.wedatasphere.linkis.engine.spark.utils.JobProgressUtil | When pushing Spark logs to the client, which logs are not filtered by default |
+| wds.linkis.entrance.shell.danger.check.enabled | false | Whether to check and block dangerous shell syntax |
+| wds.linkis.shell.danger.usage | rm,sh,find,kill,python,for,source,hdfs,hadoop,spark-sql,spark-submit,pyspark,spark-shell,hive,yarn | Default dangerous shell syntax |
+| wds.linkis.shell.white.usage | cd,ls | Shell whitelist syntax |
+| wds.linkis.sql.default.limit | 5000 | SQL default maximum return result set rows |
+
+
+#### 2.2 EngineConn configuration parameters
+
+| Parameter name | Default value | Description |
+| ------------------------- | ------- | ------------------------------------------------------------|
+| wds.linkis.engineconn.resultSet.default.store.path | hdfs:///tmp | Job result set default storage path |
+| wds.linkis.engine.resultSet.cache.max | 0k | If the result set is smaller than this size, EngineConn returns it to Entrance directly without writing it to disk |
+| wds.linkis.engine.default.limit | 5000 | |
+| wds.linkis.engine.lock.expire.time | 120000 | The maximum idle time of the engine lock, i.e. how long after Entrance applies for the lock without submitting code to EngineConn before the lock is released |
+| wds.linkis.engineconn.ignore.words | org.apache.spark.deploy.yarn.Client | Logs that are ignored by default when the Engine pushes logs to the Entrance side |
+| wds.linkis.engineconn.pass.words | org.apache.hadoop.hive.ql.exec.Task | The log that must be pushed by default when the Engine pushes logs to the Entrance side |
+| wds.linkis.engineconn.heartbeat.time | 3m | Default heartbeat interval from EngineConn to LinkisManager |
+| wds.linkis.engineconn.max.free.time | 1h | EngineConn's maximum free time |
+
+
+#### 2.3 EngineConnManager configuration parameters
+
+| Parameter name | Default value | Description |
+| ------------------------- | ------- | ------------------------------------------------------------|
+| wds.linkis.ecm.memory.max | 80g | Maximum total memory that ECM can use to start EngineConns |
+| wds.linkis.ecm.cores.max | 50 | Maximum total number of CPU cores that ECM can use to start EngineConns |
+| wds.linkis.ecm.engineconn.instances.max | 50 | The maximum number of EngineConn that can be started, it is generally recommended to set the same as wds.linkis.ecm.cores.max |
+| wds.linkis.ecm.protected.memory | 4g | ECM protected memory, that is, the memory used by ECM to start EngineConn cannot exceed wds.linkis.ecm.memory.max-wds.linkis.ecm.protected.memory |
+| wds.linkis.ecm.protected.cores.max | 2 | The number of protected CPUs of ECM, the meaning is the same as wds.linkis.ecm.protected.memory |
+| wds.linkis.ecm.protected.engine.instances | 2 | Number of protected instances of ECM |
+| wds.linkis.engineconn.wait.callback.pid | 3s | Waiting time for EngineConn to return pid |
+
+#### 2.4 LinkisManager configuration parameters
+
+| Parameter name | Default value | Description |
+| ------------------------- | ------- | ------------------------------------------------------------|
+| wds.linkis.manager.am.engine.start.max.time | 10m | The maximum time for LinkisManager to start a new EngineConn |
+| wds.linkis.manager.am.engine.reuse.max.time | 5m | The maximum selection time when LinkisManager reuses an existing EngineConn |
+| wds.linkis.manager.am.engine.reuse.count.limit | 10 | The maximum number of polling attempts when LinkisManager reuses an existing EngineConn |
+| wds.linkis.multi.user.engine.types | jdbc,es,presto | Engine types for which the submitting user is not used as a reuse rule when LinkisManager reuses an existing EngineConn |
+| wds.linkis.rm.instance | 10 | The default maximum number of instances per user per engine |
+| wds.linkis.rm.yarnqueue.cores.max | 150 | Maximum number of cores per user in each engine usage queue |
+| wds.linkis.rm.yarnqueue.memory.max | 450g | The maximum amount of memory per user in each engine's use queue |
+| wds.linkis.rm.yarnqueue.instance.max | 30 | The maximum number of applications launched by each user in the queue of each engine |
+
+### 3. Engine configuration parameters
+
+#### 3.1 JDBC engine configuration parameters
+
+| Parameter name | Default value | Description |
+| ------------------------- | ------- | ------------------------------------------------------------|
+| wds.linkis.jdbc.default.limit | 5000 | The default maximum return result set rows |
+| wds.linkis.jdbc.support.dbs | mysql=>com.mysql.jdbc.Driver,postgresql=>org.postgresql.Driver,oracle=>oracle.jdbc.driver.OracleDriver,hive2=>org.apache.hive.jdbc.HiveDriver,presto=>com.facebook.presto.jdbc.PrestoDriver | Drivers supported by the JDBC engine |
+| wds.linkis.engineconn.jdbc.concurrent.limit | 100 | Maximum number of concurrent SQL executions |
+
+
+#### 3.2 Python engine configuration parameters
+
+| Parameter name | Default value | Description |
+| ------------------------- | ------- | ------------------------------------------------------------|
+| pythonVersion | /appcom/Install/anaconda3/bin/python | Python command path |
+| python.path | None | Specify an additional path for Python, which only accepts shared storage paths |
+
+#### 3.3 Spark engine configuration parameters
+
+| Parameter name | Default value | Description |
+| ------------------------- | ------- | ------------------------------------------------------------|
+| wds.linkis.engine.spark.language-repl.init.time | 30s | Maximum initialization time for Scala and Python command interpreters |
+| PYSPARK_DRIVER_PYTHON | python | Python command path |
+| wds.linkis.server.spark-submit | spark-submit | spark-submit command path |
+
+### 4. PublicEnhancements configuration parameters
+
+#### 4.1 BML configuration parameters
+
+| Parameter name | Default value | Description |
+| ------------------------- | ------- | ------------------------------------------------------------|
+| wds.linkis.bml.dws.version | v1 | Version number requested by Linkis Restful |
+| wds.linkis.bml.auth.token.key | Validation-Code | Password-free token-key for BML request |
+| wds.linkis.bml.auth.token.value | BML-AUTH | Password-free token-value requested by BML |
+| wds.linkis.bml.hdfs.prefix | /tmp/linkis | The prefix file path of the BML file stored on hdfs |
+
+#### 4.2 Metadata configuration parameters
+
+| Parameter name | Default value | Description |
+| ------------------------- | ------- | ------------------------------------------------------------|
+| hadoop.config.dir | /appcom/config/hadoop-config | If it does not exist, the value of the environment variable HADOOP_CONF_DIR is used by default |
+| hive.config.dir | /appcom/config/hive-config | If it does not exist, the value of the environment variable HIVE_CONF_DIR is used by default |
+| hive.meta.url | None | The URL of the HiveMetaStore database. If hive.config.dir is not configured, this value must be configured |
+| hive.meta.user | None | User of the HiveMetaStore database |
+| hive.meta.password | None | Password of the HiveMetaStore database |
+
+
+#### 4.3 JobHistory configuration parameters
+
+| Parameter name | Default value | Description |
+| ------------------------- | ------- | ------------------------------------------------------------|
+| wds.linkis.jobhistory.admin | None | The default Admin account is used to specify which users can view the execution history of everyone |
+
+
+#### 4.4 FileSystem configuration parameters
+
+| Parameter name | Default value | Description |
+| ------------------------- | ------- | ------------------------------------------------------------|
+| wds.linkis.filesystem.root.path | file:///tmp/linkis/ | User's Linux local root directory |
+| wds.linkis.filesystem.hdfs.root.path | hdfs:///tmp/ | User's HDFS root directory |
+| wds.linkis.workspace.filesystem.hdfsuserrootpath.suffix | /linkis/ | The first-level prefix after the user's HDFS root directory. The user's actual root directory is: ${hdfs.root.path}\${user}\${hdfsuserrootpath.suffix} |
+| wds.linkis.workspace.resultset.download.is.limit | true | Whether to limit the number of rows when the client downloads a result set |
+| wds.linkis.workspace.resultset.download.maxsize.csv | 5000 | Maximum number of rows when the result set is downloaded as a CSV file |
+| wds.linkis.workspace.resultset.download.maxsize.excel | 5000 | Maximum number of rows when the result set is downloaded as an Excel file |
+| wds.linkis.workspace.filesystem.get.timeout | 2000L | The maximum timeout for requests to the underlying filesystem. (**If the performance of your HDFS or Linux machine is low, it is recommended to increase this value appropriately**) |
+
+#### 4.5 UDF configuration parameters
+
+| Parameter name | Default value | Description |
+| ------------------------- | ------- | ------------------------------------------------------------|
+| wds.linkis.udf.share.path | /mnt/bdap/udf | The storage path of the shared UDF, it is recommended to set it to the HDFS path |
+
+### 5. MicroService configuration parameters
+
+#### 5.1 Gateway configuration parameters
+
+| Parameter name | Default value | Description |
+| ------------------------- | ------- | ------------------------------------------------------------|
+| wds.linkis.gateway.conf.enable.proxy.user | false | Whether to enable proxy user mode, if enabled, the login user’s request will be proxied to the proxy user for execution |
+| wds.linkis.gateway.conf.proxy.user.config | proxy.properties | Storage file of proxy rules |
+| wds.linkis.gateway.conf.proxy.user.scan.interval | 600000 | Proxy file refresh interval |
+| wds.linkis.gateway.conf.enable.token.auth | false | Whether to enable the Token login mode, if enabled, allow access to Linkis in the form of tokens |
+| wds.linkis.gateway.conf.token.auth.config | token.properties | Token rule storage file |
+| wds.linkis.gateway.conf.token.auth.scan.interval | 600000 | Token file refresh interval |
+| wds.linkis.gateway.conf.url.pass.auth | /dws/ | URL prefixes that are passed through by default without login verification |
+| wds.linkis.gateway.conf.enable.sso | false | Whether to enable SSO user login mode |
+| wds.linkis.gateway.conf.sso.interceptor | None | If the SSO login mode is enabled, the user needs to implement SSOInterceptor to jump to the SSO login page |
+| wds.linkis.admin.user | hadoop | Administrator user list |
+| wds.linkis.login_encrypt.enable | false | Whether to enable RSA encryption of the password when the user logs in |
+| wds.linkis.enable.gateway.auth | false | Whether to enable the Gateway IP whitelist mechanism |
+| wds.linkis.gateway.auth.file | auth.txt | IP whitelist storage file |
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Tuning_and_Troubleshooting/Q&A.md b/Linkis-Doc-master/en_US/Tuning_and_Troubleshooting/Q&A.md
new file mode 100644
index 0000000..c78f440
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Tuning_and_Troubleshooting/Q&A.md
@@ -0,0 +1,255 @@
+#### Q1. Linkis startup error: NoSuchMethodError: getSessionManager()Lorg/eclipse/jetty/server/SessionManager
+
+Specific stack:
+```
+Failed startup of context osbwejJettyEmbeddedWebAppContext@6c6919ff{application,/,[file:///tmp/jetty-docbase.9102.6375358926927953589/],UNAVAILABLE} java.lang.NoSuchMethodError: org.eclipse.jetty.server.session.SessionHandler.getSessionManager ()Lorg/eclipse/jetty/server/SessionManager;
+at org.eclipse.jetty.servlet.ServletContextHandler\$Context.getSessionCookieConfig(ServletContextHandler.java:1415) ~[jetty-servlet-9.3.20.v20170531.jar:9.3.20.v20170531]
+```
+
+Solution: jetty-servlet and jetty-security versions need to be upgraded from 9.3.20 to 9.4.20;
+
+#### Q2. When starting the microservice linkis-ps-cs, it reports DebuggingClassWriter overrides final method visit
+
+Specific exception stack:
+
+![linkis-exception-01.png](../Images/Tuning_and_Troubleshooting/linkis-exception-01.png)
+
+Solution: jar package conflict, delete asm-5.0.4.jar;
+
+#### Q3. When starting the microservice linkis-ps-datasource, JdbcUtils.getDriverClassName NPE
+
+Specific exception stack:
+
+![linkis-exception-02.png](../Images/Tuning_and_Troubleshooting/linkis-exception-02.png)
+
+
+Solution: This is caused by a linkis-datasource configuration problem. Modify the three parameters starting with hive.meta in linkis.properties:
+
+![hive-config-01.png](../Images/Tuning_and_Troubleshooting/hive-config-01.png)
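+
+A minimal sketch of those three parameters (host, database name and credentials below are placeholders, not values from this document):
+
+```bash
+# Illustrative only: the hive.meta.* entries read by linkis-ps-datasource in linkis.properties.
+cat >> linkis.properties <<'EOF'
+hive.meta.url=jdbc:mysql://127.0.0.1:3306/hive_metastore?characterEncoding=UTF-8
+hive.meta.user=hive
+hive.meta.password=hive_password
+EOF
+```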
+
+
+#### Q4. When starting the microservice linkis-ps-datasource, the following exception ClassNotFoundException HttpClient is reported:
+
+Specific exception stack:
+
+![linkis-exception-03.png](../Images/Tuning_and_Troubleshooting/linkis-exception-03.png)
+
+Solution: There is a problem with linkis-metadata-dev-1.0.0.jar compiled in 1.0, and it needs to be recompiled and packaged.
+
+#### Q5. Clicking the database panel in scriptis returns no data; the phenomenon is as follows:
+
+![page-show-01.png](../Images/Tuning_and_Troubleshooting/page-show-01.png)
+
+Solution: The reason is that hive is not authorized to Hadoop users. The authorization data is as follows:
+
+![db-config-01.png](../Images/Tuning_and_Troubleshooting/db-config-01.png)
+
+#### Q6. When the shell engine is scheduled for execution, the page reports Insufficient resource, requesting available engine timeout, and engineconnmanager's linkis.out reports the following error:
+
+![linkis-exception-04.png](../Images/Tuning_and_Troubleshooting/linkis-exception-04.png)
+
+Solution: The reason is that the directory /appcom/tmp/hadoop/workDir was not created. Create it in advance as the root user, and then grant permissions to the hadoop user.
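+
+For example (a sketch, assuming the engine runs as the hadoop user):
+
+```bash
+# Create the missing working directory as root and hand it over to the hadoop user.
+mkdir -p /appcom/tmp/hadoop/workDir
+chown -R hadoop:hadoop /appcom/tmp/hadoop/workDir
+```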
+
+#### Q7. When the shell engine is scheduled for execution, the engine execution directory reports the following error /bin/java: No such file or directory:
+
+![shell-error-01.png](../Images/Tuning_and_Troubleshooting/shell-error-01.png)
+
+Solution: There is a problem with the local java environment variables, and you need to make a symbolic link to the java command.
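+
+For example (a sketch; the JDK location is an assumption, use your actual JAVA_HOME):
+
+```bash
+# Link the java binary so the hard-coded /bin/java path used by the engine resolves.
+ln -s "$JAVA_HOME/bin/java" /bin/java
+```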
+
+#### Q8. When the hive engine is scheduled, the following error is reported: EngineConnPluginNotFoundException errorCode:70063
+
+![linkis-exception-05.png](../Images/Tuning_and_Troubleshooting/linkis-exception-05.png)
+
+Solution: It is caused by not modifying the version of the corresponding engine during installation, so the engine version inserted into the db is the default version, which does not match the compiled version. Specific modification steps (see the sketch below): cd /appcom/Install/dss-linkis/linkis/lib/linkis-engineconn-plugins/, rename the v2.1.1 directory under the dist directory to v1.2.1, and rename the 2.1.1 subdirectory under the plugin directory to 1.2.1 [...]
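+
+A sketch of the renames described above (the hive plugin directory layout is assumed):
+
+```bash
+cd /appcom/Install/dss-linkis/linkis/lib/linkis-engineconn-plugins/hive
+mv dist/v2.1.1 dist/v1.2.1     # dist directory: v2.1.1 -> v1.2.1
+mv plugin/2.1.1 plugin/1.2.1   # plugin directory: 2.1.1 -> 1.2.1
+```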
+
+#### Q9. After the linkis microservice is started, the following error is reported: Load balancer does not have available server for client:
+
+![page-show-02.png](../Images/Tuning_and_Troubleshooting/page-show-02.png)
+
+Solution: This is because the linkis microservice has just started and the registration has not been completed. Wait for 1~2 minutes and try again.
+
+#### Q10. When the hive engine is scheduled for execution, the following error is reported: operation failed NullPointerException:
+
+![linkis-exception-06.png](../Images/Tuning_and_Troubleshooting/linkis-exception-06.png)
+
+
+Solution: The server lacks the environment variable; add export HIVE_CONF_DIR=/etc/hive/conf to /etc/profile.
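+
+For example:
+
+```bash
+# Append the missing variable to /etc/profile and reload it.
+echo 'export HIVE_CONF_DIR=/etc/hive/conf' >> /etc/profile
+source /etc/profile
+```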
+
+#### Q11. When the hive engine is scheduled, the engineconnmanager error log reports method did not exist: SessionHandler, as follows:
+
+![linkis-exception-07.png](../Images/Tuning_and_Troubleshooting/linkis-exception-07.png)
+
+Solution: Under the hive engine lib, the jetty jar package conflicts, replace jetty-security and jetty-server with 9.4.20;
+
+#### Q12. After the hive engine restarts, the jetty 9.4 jar packages are always replaced by 9.3
+
+Solution: When the engine instance is generated, there is a jar package cache. First delete the hive-related records from the table linkis_engine_conn_plugin_bml_resources, then delete the 1.2.1.zip under the directory /appcom/Install/dss-linkis/linkis/lib/linkis-engineconn-plugins/hive/dist, and finally restart the engineplugin service; the jar packages under lib will then be updated successfully.
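+
+A sketch of these steps (the database name, credentials, filter column and restart command are assumptions and depend on your deployment):
+
+```bash
+# Delete the cached hive plugin records from the linkis database.
+mysql -u linkis -p linkis -e \
+  "DELETE FROM linkis_engine_conn_plugin_bml_resources WHERE engine_conn_type = 'hive';"
+# Remove the cached plugin package.
+rm /appcom/Install/dss-linkis/linkis/lib/linkis-engineconn-plugins/hive/dist/1.2.1.zip
+# Then restart the engineplugin service with your usual start/stop scripts.
+```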
+
+#### Q13. When the hive engine is executed, the following error is reported: Lcom/google/common/collect/UnmodifiableIterator:
+
+```
+2021-03-16 13:32:23.304 ERROR [pool-2-thread-1] com.webank.wedatasphere.linkis.engineplugin.hive.executor.HiveEngineConnExecutor 140 run-query failed, reason: java.lang.IllegalAccessError: tried to access method com.google.common.collect.Iterators.emptyIterator() Lcom/google/common/collect/UnmodifiableIterator; from class org.apache.hadoop.hive.ql.exec.FetchOperator
+at org.apache.hadoop.hive.ql.exec.FetchOperator.<init>(FetchOperator.java:108) ~[hive-exec-2.1.1-cdh6.1.0.jar:2.1.1-cdh6.1.0]
+at org.apache.hadoop.hive.ql.exec.FetchTask.initialize(FetchTask.java:86) ~[hive-exec-2.1.1-cdh6.1.0.jar:2.1.1-cdh6.1.0]
+at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:629) ~[hive-exec-2.1.1-cdh6.1.0.jar:2.1.1-cdh6.1.0]
+at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1414) ~[hive-exec-2.1.1-cdh6.1.0.jar:2.1.1-cdh6.1.0]
+at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1543) ~[hive-exec-2.1.1-cdh6.1.0.jar:2.1.1-cdh6.1.0]
+at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1332) ~[hive-exec-2.1.1-cdh6.1.0.jar:2.1.1-cdh6.1.0]
+at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1321) ~[hive-exec-2.1.1-cdh6.1.0.jar:2.1.1-cdh6.1.0]
+at com.webank.wedatasphere.linkis.engineplugin.hive.executor.HiveEngineConnExecutor$$anon$1.run(HiveEngineConnExecutor.scala:152) [linkis-engineplugin-hive-dev-1.0.0.jar:?]
+at com.webank.wedatasphere.linkis.engineplugin.hive.executor.HiveEngineConnExecutor$$anon$1.run(HiveEngineConnExecutor.scala:126) [linkis-engineplugin-hive-dev-1.0.0.jar:?]
+```
+
+Solution: guava package conflict; delete guava-25.1-jre.jar under hive/dist/v1.2.1/lib;
+
+#### Q14. When the hive engine is executed, the error is reported as follows: TaskExecutionServiceImpl 59 error-org/apache/curator/connection/ConnectionHandlingPolicy:
+
+```
+2021-03-16 16:17:40.649 INFO [pool-2-thread-1] com.webank.wedatasphere.linkis.engineplugin.hive.executor.HiveEngineConnExecutor 42 info-com.webank.wedatasphere.linkis.engineplugin.hive. executor.HiveEngineConnExecutor@36a7c96f change status Busy => Idle.
+2021-03-16 16:17:40.661 ERROR [pool-2-thread-1] com.webank.wedatasphere.linkis.engineconn.computation.executor.service.TaskExecutionServiceImpl 59 error-org/apache/curator/connection/ConnectionHandlingPolicy java .lang.NoClassDefFoundError: org/apache/curator/connection/ConnectionHandlingPolicy at org.apache.curator.framework.CuratorFrameworkFactory.builder(CuratorFrameworkFactory.java:78) ~[curator-framework-4.0.1.jar:4.0.1]
+at org.apache.hadoop.hive.ql.lockmgr.zookeeper.CuratorFrameworkSingleton.getInstance(CuratorFrameworkSingleton.java:59) ~[hive-exec-2.1.1-cdh6.1.0.jar:2.1.1-cdh6.1.0]
+at org.apache.hadoop.hive.ql.lockmgr.zookeeper.ZooKeeperHiveLockManager.setContext(ZooKeeperHiveLockManager.java:98) ~[hive-exec-2.1.1-cdh6.1.0.jar:2.1.1-cdh6.1.0]
+at org.apache.hadoop.hive.ql.lockmgr.DummyTxnManager.getLockManager(DummyTxnManager.java:87) ~[hive-exec-2.1.1-cdh6.1.0.jar:2.1.1-cdh6.1.0]
+at org.apache.hadoop.hive.ql.lockmgr.DummyTxnManager.acquireLocks(DummyTxnManager.java:121) ~[hive-exec-2.1.1-cdh6.1.0.jar:2.1.1-cdh6.1.0]
+at org.apache.hadoop.hive.ql.Driver.acquireLocksAndOpenTxn(Driver.java:1237) ~[hive-exec-2.1.1-cdh6.1.0.jar:2.1.1-cdh6.1.0]
+at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1607) ~[hive-exec-2.1.1-cdh6.1.0.jar:2.1.1-cdh6.1.0]
+at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1332) ~[hive-exec-2.1.1-cdh6.1.0.jar:2.1.1-cdh6.1.0]
+at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1321) ~[hive-exec-2.1.1-cdh6.1.0.jar:2.1.1-cdh6.1.0]
+at com.webank.wedatasphere.linkis.engineplugin.hive.executor.HiveEngineConnExecutor$$anon$1.run(HiveEngineConnExecutor.scala:152) ~[linkis-engineplugin-hive-dev-1.0.0.jar:?]
+at com.webank.wedatasphere.linkis.engineplugin.hive.executor.HiveEngineConnExecutor$$anon$1.run(HiveEngineConnExecutor.scala:126) ~[linkis-engineplugin-hive-dev-1.0.0.jar:?]
+at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_181]
+at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_181]
+at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875) ~[hadoop-common-3.0.0-cdh6.3.2.jar:?]
+at com.webank.wedatasphere.linkis.engineplugin.hive.executor.HiveEngineConnExecutor.executeLine(HiveEngineConnExecutor.scala:126) ~[linkis-engineplugin-hive-dev-1.0.0.jar:?]
+at com.webank.wedatasphere.linkis.engineconn.computation.executor.execute.ComputationExecutor$$anonfun$execute$1$$anonfun$apply$9$$anonfun$apply$10.apply(ComputationExecutor.scala:145) ~[linkis-computation -engineconn-dev-1.0.0.jar:?]
+at com.webank.wedatasphere.linkis.engineconn.computation.executor.execute.ComputationExecutor$$anonfun$execute$1$$anonfun$apply$9$$anonfun$apply$10.apply(ComputationExecutor.scala:144) ~[linkis-computation -engineconn-dev-1.0.0.jar:?]
+at com.webank.wedatasphere.linkis.common.utils.Utils$.tryCatch(Utils.scala:48) ~[linkis-common-dev-1.0.0.jar:?]
+at com.webank.wedatasphere.linkis.engineconn.computation.executor.execute.ComputationExecutor$$anonfun$execute$1$$anonfun$apply$9.apply(ComputationExecutor.scala:146) ~[linkis-computation-engineconn-dev-1.0 .0.jar:?]
+at com.webank.wedatasphere.linkis.engineconn.computation.executor.execute.ComputationExecutor$$anonfun$execute$1$$anonfun$apply$9.apply(ComputationExecutor.scala:140) ~[linkis-computation-engineconn-dev-1.0 .0.jar:?]
+at scala.collection.immutable.Range.foreach(Range.scala:160) ~[scala-library-2.11.8.jar:?]
+at com.webank.wedatasphere.linkis.engineconn.computation.executor.execute.ComputationExecutor$$anonfun$execute$1.apply(ComputationExecutor.scala:139) ~[linkis-computation-engineconn-dev-1.0.0.jar:? ]
+at com.webank.wedatasphere.linkis.engineconn.computation.executor.execute.ComputationExecutor$$anonfun$execute$1.apply(ComputationExecutor.scala:114) ~[linkis-computation-engineconn-dev-1.0.0.jar:? ]
+at com.webank.wedatasphere.linkis.common.utils.Utils$.tryFinally(Utils.scala:62) ~[linkis-common-dev-1.0.0.jar:?]
+at com.webank.wedatasphere.linkis.engineconn.acessible.executor.entity.AccessibleExecutor.ensureIdle(AccessibleExecutor.scala:42) ~[linkis-accessible-executor-dev-1.0.0.jar:?]
+at com.webank.wedatasphere.linkis.engineconn.acessible.executor.entity.AccessibleExecutor.ensureIdle(AccessibleExecutor.scala:36) ~[linkis-accessible-executor-dev-1.0.0.jar:?]
+at com.webank.wedatasphere.linkis.engineconn.computation.executor.execute.ComputationExecutor.ensureOp(ComputationExecutor.scala:103) ~[linkis-computation-engineconn-dev-1.0.0.jar:?]
+at com.webank.wedatasphere.linkis.engineconn.computation.executor.execute.ComputationExecutor.execute(ComputationExecutor.scala:114) ~[linkis-computation-engineconn-dev-1.0.0.jar:?]
+at com.webank.wedatasphere.linkis.engineconn.computation.executor.service.TaskExecutionServiceImpl$$anon$1$$anonfun$run$1.apply$mcV$sp(TaskExecutionServiceImpl.scala:139) [linkis-computation-engineconn-dev- 1.0.0.jar:?]
+at com.webank.wedatasphere.linkis.engineconn.computation.executor.service.TaskExecutionServiceImpl$$anon$1$$anonfun$run$1.apply(TaskExecutionServiceImpl.scala:138) [linkis-computation-engineconn-dev-1.0.0. jar:?]
+at com.webank.wedatasphere.linkis.engineconn.computation.executor.service.TaskExecutionServiceImpl$$anon$1$$anonfun$run$1.apply(TaskExecutionServiceImpl.scala:138) [linkis-computation-engineconn-dev-1.0.0. jar:?]
+at com.webank.wedatasphere.linkis.common.utils.Utils$.tryCatch(Utils.scala:48) [linkis-common-dev-1.0.0.jar:?]
+at com.webank.wedatasphere.linkis.common.utils.Utils$.tryAndWarn(Utils.scala:74) [linkis-common-dev-1.0.0.jar:?]
+at com.webank.wedatasphere.linkis.engineconn.computation.executor.service.TaskExecutionServiceImpl$$anon$1.run(TaskExecutionServiceImpl.scala:138) [linkis-computation-engineconn-dev-1.0.0.jar:?]
+at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_181]
+at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_181]
+at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_181]
+at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_181]
+at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
+Caused by: java.lang.ClassNotFoundException: org.apache.curator.connection.ConnectionHandlingPolicy atjava.net.URLClassLoader.findClass(URLClassLoader.java:381) ~[?:1.8.0_181]
+at java.lang.ClassLoader.loadClass(ClassLoader.java:424) ~[?:1.8.0_181]
+at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349) ~[?:1.8.0_181]
+at java.lang.ClassLoader.loadClass(ClassLoader.java:357) ~[?:1.8.0_181]
+... 39 more
+```
+
+Solution: The reason is that the Curator version must match the ZooKeeper version: Curator 2.X supports ZooKeeper 3.4.X, so if you are currently running ZooKeeper 3.4.X you should still use Curator 2.X, for example 2.7.0. Reference link: https://blog.csdn.net/muyingmiao/article/details/100183768
+
+#### Q15. When the python engine is scheduled, the following error is reported: Python process is not alive:
+
+![linkis-exception-08.png](../Images/Tuning_and_Troubleshooting/linkis-exception-08.png)
+
+Solution: The server has the anaconda3 package manager installed. After debugging python, two problems were found: (1) the pandas and matplotlib modules are missing and need to be installed manually; (2) the new version of the python engine depends on a higher python version when it executes, so first install python3, then create a symbolic link (as shown in the figure below), and restart the engineplugin service.
+
+![shell-error-02.png](../Images/Tuning_and_Troubleshooting/shell-error-02.png)
+
+#### Q16. When the spark engine is executed, the following error NoClassDefFoundError: org/apache/hadoop/hive/ql/io/orc/OrcFile is reported:
+
+```
+2021-03-19 15:12:49.227 INFO [dag-scheduler-event-loop] org.apache.spark.scheduler.DAGScheduler 57 logInfo -ShuffleMapStage 5 (show at <console>:69) failed in 21.269 s due to Job aborted due to stage failure: Task 1 in stage 5.0 failed 4 times, most recent failure: Lost task 1.3 in stage 5.0 (TID 139, cdh03, executor 6): java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql /io/orc/OrcFile
+at org.apache.spark.sql.hive.orc.OrcFileOperator$$anonfun$getFileReader$2.apply(OrcFileOperator.scala:75)
+at org.apache.spark.sql.hive.orc.OrcFileOperator$$anonfun$getFileReader$2.apply(OrcFileOperator.scala:73)
+at scala.collection.Iterator$$anon$11.next(Iterator.scala:410)
+at scala.collection.TraversableOnce$class.collectFirst(TraversableOnce.scala:145)
+at scala.collection.AbstractIterator.collectFirst(Iterator.scala:1334)
+at org.apache.spark.sql.hive.orc.OrcFileOperator$.getFileReader(OrcFileOperator.scala:90)
+at org.apache.spark.sql.hive.orc.OrcFileOperator$$anonfun$readSchema$2.apply(OrcFileOperator.scala:99)
+at org.apache.spark.sql.hive.orc.OrcFileOperator$$anonfun$readSchema$2.apply(OrcFileOperator.scala:99)
+at scala.collection.Iterator$$anon$11.next(Iterator.scala:410)
+at scala.collection.TraversableOnce$class.collectFirst(TraversableOnce.scala:145)
+at scala.collection.AbstractIterator.collectFirst(Iterator.scala:1334)
+at org.apache.spark.sql.hive.orc.OrcFileOperator$.readSchema(OrcFileOperator.scala:99)
+at org.apache.spark.sql.hive.orc.OrcFileFormat$$anonfun$buildReader$2.apply(OrcFileFormat.scala:160)
+at org.apache.spark.sql.hive.orc.OrcFileFormat$$anonfun$buildReader$2.apply(OrcFileFormat.scala:151)
+at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:148)
+at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:132)
+at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:126)
+at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:179)
+at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:103)
+at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(UnknownSource)
+at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
+at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$11$$anon$1.hasNext(WholeStageCodegenExec.scala:624)
+at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
+at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
+at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
+at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
+at org.apache.spark.scheduler.Task.run(Task.scala:121)
+at org.apache.spark.executor.Executor$TaskRunner$$anonfun$11.apply(Executor.scala:407)
+at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1408)
+at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:413)
+at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
+at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
+at java.lang.Thread.run(Thread.java:748)
+Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.io.orc.OrcFile
+at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
+at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
+at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
+at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
+... 33 more
+
+```
+
+Solution: The cdh6.3.2 cluster's spark classpath only contains /opt/cloudera/parcels/CDH-6.3.2-1.cdh6.3.2.p0.1605554/lib/spark/jars, so hive-exec-2.1.1-cdh6.1.0.jar needs to be added there, and then spark restarted.
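+
+For example (a sketch; the source location of the jar is an assumption):
+
+```bash
+# Add the missing hive-exec jar to the spark jars directory on each node, then restart spark.
+cp hive-exec-2.1.1-cdh6.1.0.jar \
+   /opt/cloudera/parcels/CDH-6.3.2-1.cdh6.3.2.p0.1605554/lib/spark/jars/
+```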
+
+#### Q17. When the spark engine starts, it reports queue default is not exists in YARN, the specific information is as follows:
+
+![linkis-exception-09.png](../Images/Tuning_and_Troubleshooting/linkis-exception-09.png)
+
+Solution: When the 1.0 linkis-resource-manager-dev-1.0.0.jar pulls queue information, there is a compatibility problem in parsing the json. After the developers optimized it, a new package was provided. The jar package path is /appcom/Install/dss-linkis/linkis/lib/linkis-computation-governance/linkis-cg-linkismanager/.
+
+#### Q18. When the spark engine starts, an error is reported: get the Yarn queue information exception, together with an abnormal http link
+
+Solution: The yarn address configuration has been migrated to the DB configuration, so the following configuration needs to be added:
+
+![db-config-02.png](../Images/Tuning_and_Troubleshooting/db-config-02.png)
+
+#### Q19. When the spark engine is scheduled, it can be executed successfully for the first time, and if executed again, it will report Spark application sc has already stopped, please restart it. The specific errors are as follows:
+
+![page-show-03.png](../Images/Tuning_and_Troubleshooting/page-show-03.png)
+
+Solution: The background is that the linkis1.0 engine architecture has been adjusted. After a spark session is created, the session is reused in order to avoid overhead and improve execution efficiency. When spark.scala was executed for the first time, the script contained spark.stop(); this command closes the newly created session, and when the script is executed again the engine prompts that the session has been closed and asks you to restart it. Solution: first remove stop() from all scripts, [...]
+
+#### Q20. When pythonspark is scheduled for execution, the error initialize python executor failed, ClassNotFoundException org.slf4j.impl.StaticLoggerBinder is reported, as follows:
+
+![linkis-exception-10.png](../Images/Tuning_and_Troubleshooting/linkis-exception-10.png)
+
+Solution: The reason is that the spark server lacks slf4j-log4j12-1.7.25.jar; copy this jar to /opt/cloudera/parcels/CDH-6.3.2-1.cdh6.3.2.p0.1605554/lib/spark/jars.
+
+#### Q21. When pythonspark is scheduled for execution, the error initialize python executor failed, submit-version error is reported, as follows:
+
+![shell-error-03.png](../Images/Tuning_and_Troubleshooting/shell-error-03.png)
+
+Solution: The reason is that the linkis1.0 pythonSpark engine has a bug in the code that obtains the spark version. The fix is as follows:
+
+![code-fix-01.png](../Images/Tuning_and_Troubleshooting/code-fix-01.png)
+
+#### Q22. When pythonspark is scheduled to execute, it reports TypeError: an integer is required (got type bytes) (executed separately from the command to pull up the engine), the details are as follows:
+
+![shell-error-04.png](../Images/Tuning_and_Troubleshooting/shell-error-04.png)
+
+Solution: The reason is that the system spark and python versions are not compatible: python is 3.8 and spark is 2.4.0-cdh6.3.2, but spark requires python <= 3.6. Downgrade python to 3.6 and comment out the following lines in /opt/cloudera/parcels/CDH/lib/spark/python/lib/pyspark.zip/pyspark/context.py:
+
+![shell-error-05.png](../Images/Tuning_and_Troubleshooting/shell-error-05.png)
+
+#### Q23. The spark engine is 2.4.0+cdh6.3.2; the python engine previously lacked pandas and matplotlib, so the local python was upgraded to 3.8, but spark does not support python 3.8 (only versions below 3.6)
+
+Solution: reinstall the anaconda2 package manager, downgrade python to 2.7, and install the pandas and matplotlib modules; the python engine and spark engine can then be scheduled normally.
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Tuning_and_Troubleshooting/README.md b/Linkis-Doc-master/en_US/Tuning_and_Troubleshooting/README.md
new file mode 100644
index 0000000..a92dca4
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Tuning_and_Troubleshooting/README.md
@@ -0,0 +1,98 @@
+## Tuning and troubleshooting
+
+In the process of preparing a version for release, we try our best to find deployment and installation problems in advance and fix them. However, because deployment environments differ, we sometimes cannot predict every problem and its solution in advance. Thanks to the community, many of your problems will overlap with those of other users. Perhaps the installation and deployment problems you have encountered have already been discovered and [...]
+
+### Ⅰ. How to locate the exception log
+
+If an interface request reports an error, we can locate the problematic microservice based on the response of the interface. Under normal circumstances, we can **locate it according to the URL specification**: URLs of Linkis interfaces follow the format **/api/rest_j/v1/{applicationName}/.+**, so the service can be located through the applicationName (see the example after the following table). Some applications are themselves microservices; in that case the application name is the same [...]
+
+| **ApplicationName** | **Microservice** |
+| -------------------- | -------------------- |
+| cg-linkismanager | cg-linkismanager |
+| cg-engineplugin | cg-engineplugin |
+| cg-engineconnmanager | cg-engineconnmanager |
+| cg-entrance | cg-entrance |
+| ps-bml | ps-bml |
+| ps-cs | ps-cs |
+| ps-datasource | ps-datasource |
+| configuration | ps-publicservice |
+| instance-label | ps-publicservice |
+| jobhistory | ps-publicservice |
+| variable | ps-publicservice |
+| udf | ps-publicservice |
+
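+For example, take the task submission interface that appears later in this documentation, `/api/rest_j/v1/entrance/execute`: its applicationName is `entrance`, so a failure of this request points to the linkis-cg-entrance logs. A minimal, hypothetical curl sketch (the gateway address, port, and session cookie are placeholders):
+
+```bash
+# the segment after /api/rest_j/v1/ ("entrance" here) tells you which microservice log to check first
+curl -X POST "http://127.0.0.1:9001/api/rest_j/v1/entrance/execute" \
+     -H "Content-Type: application/json" \
+     -H "Cookie: <session cookie obtained from the login API>" \
+     -d '{"executionCode": "show tables", "executeApplicationName": "hive", "runType": "hql"}'
+```
+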
+### Ⅱ. Community issue column search keywords
+
+On the homepage of the GitHub community, the issue column keeps a record of problems encountered by community users and their solutions. It is well suited for quickly finding a solution after you run into a problem: just search for the error keywords in the issue filter.
+
+### Ⅲ. "Q\&A Question Summary"
+
+"Linkis 1.0 FAQ", this document contains a summary of common problems and solutions during the installation and deployment process.
+
+### Ⅳ. Locating system log
+
+Generally, errors can be divided into three stages: an error is reported when installing and executing install.sh, an error is reported when the microservice is started, and an error is reported when the engine is started.
+
+1. **An error occurred when executing install.sh**, usually in one of the following situations:
+
+   1. Missing environment variables: For example, the Java/Python/Hadoop/Hive/Spark environments need to be configured for the standard version, and the corresponding verification is performed when the installation script runs. If you encounter this kind of problem, there will be a clear prompt about the missing environment variable, such as the exception `-bash: spark-submit: command not found`.
+
+   2. The system version does not match: Linkis currently supports most versions of Linux.
+      Compatibility varies across OS versions, and some system versions may have command incompatibilities. For example, the poor compatibility of yum on Ubuntu may cause yum-related errors during installation and deployment. In addition, it is recommended not to deploy Linkis on Windows, as currently no script is fully compatible with .bat commands.
+
+   3. Missing configuration items: There are two configuration files that need to be modified in Linkis 1.0: linkis-env.sh and db.sh.
+   
+      The former contains the environment parameters that Linkis needs to load at runtime, and the latter contains the database information that Linkis uses to store its own tables. Under normal circumstances, if a configuration item is missing, the error message will show an exception related to the corresponding key. For example, when db.sh does not fill in the relevant database configuration, the error unknown mysql server host '-P' will appear, which is caused by the missing host.
+
+2. **An error is reported when starting a microservice**
+
+    Linkis puts the log files of all microservices into the logs directory. The log directory levels are as follows:
+
+    ````
+    ├── linkis-computation-governance
+    │   ├── linkis-cg-engineconnmanager
+    │   ├── linkis-cg-engineplugin
+    │   ├── linkis-cg-entrance
+    │   └── linkis-cg-linkismanager
+    ├── linkis-public-enhancements
+    │   ├── linkis-ps-bml
+    │   ├── linkis-ps-cs
+    │   ├── linkis-ps-datasource
+    │   └── linkis-ps-publicservice
+    └── linkis-spring-cloud-services
+        ├── linkis-mg-eureka
+        └── linkis-mg-gateway
+    ````
+
+    The directory covers the three microservice groups: computation governance, public enhancement, and microservice governance. Each microservice produces three logs, linkis-gc.log, linkis.log, and linkis.out, corresponding to the service's GC log, service log, and System.out log respectively.
+    
+    Under normal circumstances, when an error occurs while starting a microservice, you can cd into the corresponding service directory under logs and check the related log to troubleshoot the problem (a log-navigation sketch is given after the list below). The most frequent problems fall into three categories:
+
+    1. **Port occupation**: Since the default ports of Linkis microservices are mostly concentrated around 9000, you need to check whether the port of each microservice is occupied by another process before starting. If it is occupied, change the corresponding microservice port in the conf/linkis-env.sh file.
+    
+    2. **Necessary configuration parameters are missing**: Some microservices must load certain user-defined parameters before they can start normally. For example, the linkis-cg-engineplugin microservice loads the wds.linkis.engineconn.\* related configuration from conf/linkis.properties when it starts; if the user changes the Linkis path after installation and the configuration is not updated accordingly, an error will be reported when the linkis- [...]
+    
+    3. **System environment is not compatible**: When deploying and installing, users are advised to follow the recommended system and application versions in the official documents and to install the necessary system tools, such as expect, yum, etc. If an application version is not compatible, it may cause application-related errors. For example, SQL statement incompatibilities in MySQL 5.7 may cause errors in the linkis.ddl and linkis. [...]
+    
+3. **An error is reported while a microservice is running**
+
+    Errors during microservice execution are more complicated, and the situations encountered differ depending on the environment, but the troubleshooting method is basically the same. Starting from the log directory of the corresponding microservice, they can roughly be divided into three situations:
+    
+    1. **Manually installed and deployed microservices report errors**: The logs of these microservices are unified under the logs/ directory. After locating the microservice, enter the corresponding directory to view its logs.
+    
+    2. **Engine startup failure** (insufficient resources, engine request failure): When this type of error occurs, it is not necessarily caused by insufficient resources, because the front end can only grab logs after the Spring project has started, so errors that occur before the engine starts cannot be captured well. Three kinds of high-frequency problems were found during actual use by internal test users:
+    
+        a. **The engine cannot be created because there is no engine directory permission**: The log will be printed to the linkis.out file under the cg-engineconnmanager microservice. You need to enter the file to view the specific reason.
+        
+        b. **There is a dependency conflict in the engine lib package, or the service cannot start normally because of insufficient memory**: Since the engine directory has already been created, the log will be printed to the stdout file under the engine directory; the engine path can be found as described in item c below.
+        
+        c. **Errors reported during engine execution**: Each started engine is a microservice that is dynamically loaded and started at runtime. If an error occurs when the engine starts, you need to find the engine's log in the directory of the user who started it. The root path is the **ENGINECONN_ROOT_PATH** filled in **linkis-env** before installation. If you need to modify the path after installation, you need to modify wds.linkis.engineconn.roo [...]
+        
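+For reference, a minimal shell sketch for navigating these logs (the service name and keywords are only examples; replace them with the microservice you located above):
+
+```bash
+# enter the log directory of the suspected microservice, e.g. the entrance service
+cd logs/linkis-computation-governance/linkis-cg-entrance
+# inspect the tail of the service log while reproducing the problem
+tail -n 200 linkis.log
+# or search recent errors across the service log and the System.out log
+grep -iE "error|exception" linkis.log linkis.out | tail -n 50
+```
+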
+### Ⅴ. Community user group consultation and communication
+
+For problems during installation and deployment that cannot be located and resolved by the process above, you can send the error messages to our community group. To make it easier for community members and developers to help and to improve efficiency, when asking a question it is recommended to describe the problem phenomenon, attach the related log information, and mention what you have already checked. If you think it may be an environmenta [...]
+
+### Ⅵ. Locating the source code by remote debugging
+
+Under normal circumstances, remote debugging of the source code is the most effective way to locate problems, but compared with reviewing documents it requires a certain understanding of the source code structure. It is recommended that you read the [Linkis source code level detailed structure](https://github.com/WeBankFinTech/Linkis/wiki/Linkis%E6%BA%90%E7%A0%81%E5%B1%82%E7%BA%A7%E7%BB%93%E6%9E%84%E8%AF%A6%E8%A7%A3) in the Linkis WIKI before remote debugging. After having a certain degree [...]
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/Tuning_and_Troubleshooting/Tuning.md b/Linkis-Doc-master/en_US/Tuning_and_Troubleshooting/Tuning.md
new file mode 100644
index 0000000..2b6b256
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Tuning_and_Troubleshooting/Tuning.md
@@ -0,0 +1,61 @@
+>Linkis 0.x runs stably in WeBank's production environment and supports various businesses. Linkis 1.0 is an optimized version of 0.x, and the related tuning logic has not changed, so this document introduces several Linkis deployment and tuning suggestions. Due to limited space, it cannot cover all optimization scenarios, and the related tuning guides will be supplemented and updated over time. Of course, we also hope that community users will provide suggestions for Linkis [...]
+
+## 1. Overview
+
+This document introduces several tuning methods based on production experience, namely the selection of the JVM heap size during production deployment, the setting of concurrency for task submission, and the resource application parameters for task running. The parameter values described in this document are not recommended defaults; users need to choose the parameters according to their actual production environment.
+
+## 2. JVM heap size tuning
+
+When installing Linkis, you can find the following variables in linkis-env.sh:
+
+```shell
+SERVER_HEAP_SIZE="512M"
+```
+
+After setting this variable, it will be added to the Java startup parameters of each microservice during installation to control the JVM startup heap size. Although both the Xms and Xmx parameters need to be set when Java starts, they are usually set to the same value. In production, as the number of users increases, this parameter needs to be increased to meet the demand. Of course, a larger heap requires a larger server configuration. Also, single-machine deployment [...]
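+
+For example, a minimal sketch of raising the heap before installation (the value 4G is only an illustration, not a recommendation):
+
+```shell
+# in linkis-env.sh (the exact location depends on your installation package)
+SERVER_HEAP_SIZE="4G"
+```
+
+This value is typically applied as both the initial and maximum heap (Xms/Xmx) of each microservice, so make sure the host has enough physical memory for all services combined.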
+
+## 3. Tuning the concurrency of task submission
+
+Some Linkis task concurrency parameters will have a default value. In most scenarios, the default value can meet the demand, but sometimes it cannot, so it needs to be adjusted. This article will introduce several parameters for adjusting the concurrency of tasks to facilitate users to optimize concurrent tasks in production.
+
+Since tasks are submitted via RPC, you can configure the following parameters in the linkis-common/linkis-rpc module to increase the RPC consumer threads and queue sizes:
+
+```shell
+wds.linkis.rpc.receiver.asyn.consumer.thread.max=400
+wds.linkis.rpc.receiver.asyn.queue.size.max=5000
+wds.linkis.rpc.sender.asyn.consumer.thread.max=100
+wds.linkis.rpc.sender.asyn.queue.size.max=2000
+```
+
+In the Linkis source code, we set a default value for the number of concurrent tasks, which meets the need in most scenarios. However, a large number of concurrent tasks may be submitted in some scenarios, for example when Qualitis (another WeBank open-source project) is used for mass data verification. In the current version, initCapacity and maxCapacity have not yet been made configurable, so users need to modify the source code by increasing the values of these two parameter [...]
+
+```scala
+  private val groupNameToGroups = new JMap[String, Group]
+  private val labelBuilderFactory = LabelBuilderFactoryContext.getLabelBuilderFactory
+
+  override def getOrCreateGroup(groupName: String): Group = {
+    if (!groupNameToGroups.containsKey(groupName)) synchronized {
+      val initCapacity = 100
+      val maxCapacity = 100
+      // other code ...
+    }
+  }
+```
+
+## 4. Resource settings related to task runtime
+
+When submitting a task to run on Yarn, Yarn provides a configurable interface, and Linkis, as a highly extensible framework, also allows these resource settings to be configured.
+
+The related configuration of Spark and Hive are as follows:
+
+Part of the Spark configuration lives in linkis-engineconn-plugins/engineconn-plugins; you can adjust it to change the runtime environment of tasks submitted to Yarn. Due to limited space, further details, such as the Hive and Yarn configuration, require users to refer to the source code and the parameters documentation.
+
+```shell
+"spark.driver.memory" = 2 //单位为G
+"wds.linkis.driver.cores" = 1
+"spark.executor.memory" = 4 //单位为G
+"spark.executor.cores" = 2
+"spark.executor.instances" = 3
+"wds.linkis.rm.yarnqueue" = "default"
+```
+
diff --git a/Linkis-Doc-master/en_US/Upgrade_Documents/Linkis_Upgrade_from_0.x_to_1.0_guide.md b/Linkis-Doc-master/en_US/Upgrade_Documents/Linkis_Upgrade_from_0.x_to_1.0_guide.md
new file mode 100644
index 0000000..dc1b867
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Upgrade_Documents/Linkis_Upgrade_from_0.x_to_1.0_guide.md
@@ -0,0 +1,73 @@
+ > This article briefly introduces the precautions for upgrading Linkis from 0.X to 1.0. Linkis 1.0 has adjusted several services with major changes, so please read the following notes carefully before upgrading.
+
+## 1.Precautions
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**If you are using Linkis for the first time, you can ignore this chapter; if you are already a Linkis user, it is recommended to read the following before installing or upgrading: [Brief description of the difference between Linkis1.0 and Linkis0.X](https://github.com/WeBankFinTech/Linkis/wiki/Linkis1.0%E4%B8%8ELinkis0.X%E7%9A%84%E5%8C%BA%E5%88%AB%E7%AE%80%E8%BF%B0)**.
+
+## 2. Service upgrade installation
+
+&nbsp;&nbsp;&nbsp;&nbsp;  Because Linkis 1.0 has upgraded almost all of its services, including the service names, all services need to be reinstalled when upgrading from 0.X to 1.X.
+
+&nbsp;&nbsp;&nbsp;&nbsp;  If you need to keep 0.X data during the upgrade, you must select 1 to skip the table building statement (see the code below).
+
+&nbsp;&nbsp;&nbsp;&nbsp;  For the installation of Linkis1.0, please refer to [Quick Deployment Linkis1.0](../Deployment_Documents/Quick_Deploy_Linkis1.0.md)
+
+```
+Do you want to clear Linkis table information in the database?
+1: Do not execute table-building statements
+2: Dangerous! Clear all data and rebuild the tables
+other: exit
+
+Please input the choice: ## choice 1
+```
+## 3. Database upgrade
+
+&nbsp;&nbsp;&nbsp;&nbsp;  After the service is installed, the database structure needs to be modified, including table structure changes and new tables and data:
+
+### 3.1 Table structure modification part:
+
+&nbsp;&nbsp;&nbsp;&nbsp;  linkis_task: The submit_user and label_json fields are added to the table. The update statement is:
+
+```mysql-sql
+ALTER TABLE linkis_task ADD submit_user varchar(50) DEFAULT NULL COMMENT 'submitUser name';
+ALTER TABLE linkis_task ADD `label_json` varchar(200) DEFAULT NULL COMMENT 'label json';
+```
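+
+For reference, a hypothetical sketch of applying these statements with the mysql command-line client (the host, user, and database name are placeholders for your own Linkis database):
+
+```bash
+mysql -h <mysql_host> -u <mysql_user> -p <linkis_db> \
+  -e "ALTER TABLE linkis_task ADD submit_user varchar(50) DEFAULT NULL COMMENT 'submitUser name';
+      ALTER TABLE linkis_task ADD \`label_json\` varchar(200) DEFAULT NULL COMMENT 'label json';"
+```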
+
+### 3.2 New SQL that needs to be executed:
+
+```mysql-sql
+cd db/module
+## Add the tables that the enginePlugin service depends on:
+source linkis_ecp.sql
+## Add a table that the public service-instanceLabel service depends on
+source linkis_instance_label.sql
+## Added tables that the linkis-manager service depends on
+source linkis_manager.sql
+```
+
+### 3.3 Publicservice-Configuration table modification
+
+&nbsp;&nbsp;&nbsp;&nbsp;  In order to support the full labeling capability of Linkis 1.X, all the data tables related to the configuration module have been upgraded to labeling, which is completely different from the 0.X Configuration table. It is necessary to re-execute the table creation statement and the initialization statement.
+
+&nbsp;&nbsp;&nbsp;&nbsp;  This means that **Linkis0.X users' existing engine configuration parameters can no longer be migrated to Linkis1.0** (it is recommended that users reconfigure the engine parameters once).
+
+&nbsp;&nbsp;&nbsp;&nbsp;  The execution of the table building statement is as follows:
+
+```mysql-sql
+source linkis_configuration.sql
+```
+
+&nbsp;&nbsp;&nbsp;&nbsp;  Because Linkis 1.0 supports multiple versions of the engine, it is necessary to modify the version of the engine when executing the initialization statement, as shown below:
+
+```mysql-sql
+vim linkis_configuration_dml.sql
+## Modify the default version of the corresponding engine
+SET @SPARK_LABEL="spark-2.4.3";
+SET @HIVE_LABEL="hive-1.2.1";
+## Execute the initialization statement
+source linkis_configuration_dml.sql
+```
+
+## 4. Installation and startup of Linkis 1.0
+
+&nbsp;&nbsp;&nbsp;&nbsp;  Start Linkis 1.0  to verify whether the service has been started normally and provide external services. For details, please refer to: [Quick Deployment Linkis1.0](../Deployment_Documents/Quick_Deploy_Linkis1.0.md)
diff --git a/Linkis-Doc-master/en_US/Upgrade_Documents/README.md b/Linkis-Doc-master/en_US/Upgrade_Documents/README.md
new file mode 100644
index 0000000..37786ab
--- /dev/null
+++ b/Linkis-Doc-master/en_US/Upgrade_Documents/README.md
@@ -0,0 +1,5 @@
+The architecture of Linkis1.0 is very different from Linkis0.x, and there are some changes to the configuration of the deployment package and database tables. Before you install Linkis1.0, please read the following instructions carefully:
+
+1. If you are installing Linkis for the first time, or reinstalling Linkis, you do not need to pay attention to the Linkis Upgrade Guide.
+
+2. If you are upgrading from Linkis0.x to Linkis1.0, be sure to read the [Linkis Upgrade from 0.x to 1.0 guide](Linkis_Upgrade_from_0.x_to_1.0_guide.md) carefully.
diff --git a/Linkis-Doc-master/en_US/User_Manual/How_To_Use_Linkis.md b/Linkis-Doc-master/en_US/User_Manual/How_To_Use_Linkis.md
new file mode 100644
index 0000000..a6ee4d7
--- /dev/null
+++ b/Linkis-Doc-master/en_US/User_Manual/How_To_Use_Linkis.md
@@ -0,0 +1,29 @@
+# How to use Linkis?
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;In order to meet the needs of different usage scenarios, Linkis provides a variety of usage and access methods, which can be summarized into three categories: use via the Client, use via Scriptis, and use via DataSphere Studio. Scriptis and DataSphere Studio are the open-source data analysis platforms of WeBank's big data platform team. Since these two projects are essentially compatible with Linkis, it is [...]
+
+## 1. Client side usage
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;If you need to connect other applications on top of Linkis, you need to develop against the interfaces provided by Linkis. Linkis provides a variety of client access interfaces. For a detailed usage introduction, please refer to the following:
+-[**Restful API Usage**](./../API_Documentations/Linkis_task_submission_and_execution_RestAPI_document.md)
+-[**JDBC API Usage**](./../API_Documentations/JDBC_API_Document.md)
+-[**How to use the Java SDK**](./../User_Manual/Linkis1.0_User_Manual.md)
+
+## 2. Scriptis uses Linkis
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;If you need to use Linkis for interactive online analysis and processing, and you do not need data application tools such as workflow development, workflow scheduling, and data services, you can install [**Scriptis**](https://github.com/WeBankFinTech/Scriptis) on its own. For a detailed installation tutorial, please refer to its installation and deployment documents.
+
+### 2.1 Use Scriptis to execute scripts
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Currently Scriptis supports submitting a variety of task types to Linkis, including Spark SQL, Hive SQL, Scala, PythonSpark, etc. To meet data analysis needs, the left side of Scriptis provides views of the user's workspace information, database and table information, user-defined functions, and HDFS directories. It also supports uploading and downloading, result set export, and other functions. Scriptis is very simple to u [...]
+![Scriptis uses Linkis](../Images/EngineUsage/sparksql-run.png)
+
+### 2.2 Scriptis Management Console
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Linkis provides an interface for resource configuration and management. If you want to configure and manage task resources, you can do so on the Scriptis management console interface, including queue settings, resource configuration, the number of engine instances, etc. Through the management console, you can easily configure the resources used when submitting tasks to Linkis, making the process more convenient and faster.
+![Scriptis uses Linkis](../Images/EngineUsage/queue-set.png)
+
+## 3. DataSphere Studio uses Linkis
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[**DataSphere Studio**](https://github.com/WeBankFinTech/DataSphereStudio), DSS for short, is WeBank's open-source one-stop data analysis and processing platform, and its interactive analysis module integrates Scriptis. Using DSS for interactive analysis is the same as using Scriptis. In addition to the basic functions of Scriptis, DSS provides and integrates richer and more powerful data analysis f [...]
+![DSS Run Workflow](../Images/EngineUsage/workflow.png)
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/User_Manual/Linkis1.0_User_Manual.md b/Linkis-Doc-master/en_US/User_Manual/Linkis1.0_User_Manual.md
new file mode 100644
index 0000000..b613f88
--- /dev/null
+++ b/Linkis-Doc-master/en_US/User_Manual/Linkis1.0_User_Manual.md
@@ -0,0 +1,400 @@
+# Linkis User Manual
+
+> Linkis provides convenient interfaces for Java and Scala calls, which can be used simply by introducing the linkis-computation-client module. Since 1.0, a method of submitting with Labels has been added. The following introduces both the way that is compatible with 0.X and the new way added in 1.0.
+
+## 1. Introduce dependent modules
+```
+<dependency>
+   <groupId>com.webank.wedatasphere.linkis</groupId>
+   <artifactId>linkis-computation-client</artifactId>
+   <version>${linkis.version}</version>
+</dependency>
+Such as:
+<dependency>
+   <groupId>com.webank.wedatasphere.linkis</groupId>
+   <artifactId>linkis-computation-client</artifactId>
+   <version>1.0.0-RC1</version>
+</dependency>
+```
+
+## 2. Submission via the Execute method (compatible with 0.X)
+
+### 2.1 Java test code
+
+Create the Java test class LinkisClientTest. Refer to the comments to understand the purposes of those interfaces:
+
+```java
+package com.webank.wedatasphere.linkis.client.test;
+
+import com.webank.wedatasphere.linkis.common.utils.Utils;
+import com.webank.wedatasphere.linkis.httpclient.dws.authentication.StaticAuthenticationStrategy;
+import com.webank.wedatasphere.linkis.httpclient.dws.authentication.TokenAuthenticationStrategy;
+import com.webank.wedatasphere.linkis.httpclient.dws.config.DWSClientConfig;
+import com.webank.wedatasphere.linkis.httpclient.dws.config.DWSClientConfigBuilder;
+import com.webank.wedatasphere.linkis.ujes.client.UJESClient;
+import com.webank.wedatasphere.linkis.ujes.client.UJESClientImpl;
+import com.webank.wedatasphere.linkis.ujes.client.request.JobExecuteAction;
+import com.webank.wedatasphere.linkis.ujes.client.request.ResultSetAction;
+import com.webank.wedatasphere.linkis.ujes.client.response.JobExecuteResult;
+import com.webank.wedatasphere.linkis.ujes.client.response.JobInfoResult;
+import com.webank.wedatasphere.linkis.ujes.client.response.JobProgressResult;
+import org.apache.commons.io.IOUtils;
+
+import java.util.HashMap;
+import java.util.Map;
+import java.util.concurrent.TimeUnit;
+
+public class LinkisClientTest {
+
+    public static void main(String[] args){
+
+        String user = "hadoop";
+        String executeCode = "show databases;";
+
+        // 1. Configure DWSClientBuilder, get a DWSClientConfig through DWSClientBuilder
+        DWSClientConfig clientConfig = ((DWSClientConfigBuilder) (DWSClientConfigBuilder.newBuilder()
+                .addServerUrl("http://${ip}:${port}")  //Specify ServerUrl, the address of the linkis gateway, such as http://{ip}:{port}
+                .connectionTimeout(30000)   //connectionTimeOut Client connection timeout
+                .discoveryEnabled(false).discoveryFrequency(1, TimeUnit.MINUTES)  //Whether to enable registration discovery, if enabled, the newly launched Gateway will be automatically discovered
+                .loadbalancerEnabled(true)  // Whether to enable load balancing, if registration discovery is not enabled, load balancing is meaningless
+                .maxConnectionSize(5)   //Specify the maximum number of connections, that is, the maximum number of concurrent
+                .retryEnabled(false).readTimeout(30000)   //Execution failed, whether to allow retry
+                .setAuthenticationStrategy(new StaticAuthenticationStrategy())   //AuthenticationStrategy Linkis login authentication method
+                .setAuthTokenKey("${username}").setAuthTokenValue("${password}")))  //Authentication key, generally the user name; authentication value, generally the password corresponding to the user name
+                .setDWSVersion("v1").build();  //The version of the linkis backend protocol, the current version is v1
+
+        // 2. Obtain a UJESClient through DWSClientConfig
+        UJESClient client = new UJESClientImpl(clientConfig);
+
+        try {
+            // 3. Start code execution
+            System.out.println("user : " + user + ", code : [" + executeCode + "]");
+            Map<String, Object> startupMap = new HashMap<String, Object>();
+            startupMap.put("wds.linkis.yarnqueue", "default"); // A variety of startup parameters can be stored in startupMap, see linkis management console configuration
+            JobExecuteResult jobExecuteResult = client.execute(JobExecuteAction.builder()
+                    .setCreator("linkisClient-Test")  //creator,the system name of the client requesting linkis, used for system-level isolation
+                    .addExecuteCode(executeCode)   //ExecutionCode Requested code
+                    .setEngineType((JobExecuteAction.EngineType) JobExecuteAction.EngineType$.MODULE$.HIVE()) // The execution engine type of the linkis that you want to request, such as Spark hive, etc.
+                    .setUser(user)   //User,Requesting users; used for user-level multi-tenant isolation
+                    .setStartupParams(startupMap)
+                    .build());
+            System.out.println("execId: " + jobExecuteResult.getExecID() + ", taskId: " + jobExecuteResult.taskID());
+
+            // 4. Get the execution status of the script
+            JobInfoResult jobInfoResult = client.getJobInfo(jobExecuteResult);
+            int sleepTimeMills = 1000;
+            while(!jobInfoResult.isCompleted()) {
+                // 5. Get the execution progress of the script
+                JobProgressResult progress = client.progress(jobExecuteResult);
+                Utils.sleepQuietly(sleepTimeMills);
+                jobInfoResult = client.getJobInfo(jobExecuteResult);
+            }
+
+            // 6. Get the job information of the script
+            JobInfoResult jobInfo = client.getJobInfo(jobExecuteResult);
+            // 7. Get a list of result sets (if the user submits multiple SQL at a time, multiple result sets will be generated)
+            String resultSet = jobInfo.getResultSetList(client)[0];
+            // 8. Get a specific result set through a result set information
+            Object fileContents = client.resultSet(ResultSetAction.builder().setPath(resultSet).setUser(jobExecuteResult.getUser()).build()).getFileContent();
+            System.out.println("fileContents: " + fileContents);
+
+        } catch (Exception e) {
+            e.printStackTrace();
+            IOUtils.closeQuietly(client);
+        }
+        IOUtils.closeQuietly(client);
+    }
+}
+```
+
+Run the above code to interact with Linkis
+
+### 2.2 Scala test code
+
+```scala
+package com.webank.wedatasphere.linkis.client.test
+
+import java.util.concurrent.TimeUnit
+
+import com.webank.wedatasphere.linkis.common.utils.Utils
+import com.webank.wedatasphere.linkis.httpclient.dws.authentication.StaticAuthenticationStrategy
+import com.webank.wedatasphere.linkis.httpclient.dws.config.DWSClientConfigBuilder
+import com.webank.wedatasphere.linkis.ujes.client.UJESClient
+import com.webank.wedatasphere.linkis.ujes.client.request.JobExecuteAction.EngineType
+import com.webank.wedatasphere.linkis.ujes.client.request.{JobExecuteAction, ResultSetAction}
+import org.apache.commons.io.IOUtils
+
+object LinkisClientImplTest extends App {
+
+  var executeCode = "show databases;"
+  var user = "hadoop"
+
+  // 1. Configure DWSClientBuilder, get a DWSClientConfig through DWSClientBuilder
+  val clientConfig = DWSClientConfigBuilder.newBuilder()
+    .addServerUrl("http://${ip}:${port}") //Specify ServerUrl, the address of the Linkis server-side gateway, such as http://{ip}:{port}
+    .connectionTimeout(30000) //connectionTimeOut client connection timeout
+    .discoveryEnabled(false).discoveryFrequency(1, TimeUnit.MINUTES) //Whether to enable registration discovery, if enabled, the newly launched Gateway will be automatically discovered
+    .loadbalancerEnabled(true) // Whether to enable load balancing, if registration discovery is not enabled, load balancing is meaningless
+    .maxConnectionSize(5) //Specify the maximum number of connections, that is, the maximum number of concurrent
+    .retryEnabled(false).readTimeout(30000) //execution failed, whether to allow retry
+    .setAuthenticationStrategy(new StaticAuthenticationStrategy()) //AuthenticationStrategy Linkis authentication method
+    .setAuthTokenKey("${username}").setAuthTokenValue("${password}") //Authentication key, generally the user name; authentication value, generally the password corresponding to the user name
+    .setDWSVersion("v1").build() //Linkis backend protocol version, the current version is v1
+
+  // 2. Get a UJESClient through DWSClientConfig
+  val client = UJESClient(clientConfig)
+  
+  try {
+    // 3. Start code execution
+    println("user: "+ user + ", code: [" + executeCode + "]")
+    val startupMap = new java.util.HashMap[String, Any]()
+    startupMap.put("wds.linkis.yarnqueue", "default") //Startup parameter configuration
+    val jobExecuteResult = client.execute(JobExecuteAction.builder()
+      .setCreator("LinkisClient-Test") //creator, requesting the system name of the Linkis client, used for system-level isolation
+      .addExecuteCode(executeCode) //ExecutionCode The code to be executed
+      .setEngineType(EngineType.SPARK) // The execution engine type of Linkis that you want to request, such as Spark hive, etc.
+      .setStartupParams(startupMap)
+      .setUser(user).build()) //User, request user; used for user-level multi-tenant isolation
+    println("execId: "+ jobExecuteResult.getExecID + ", taskId:" + jobExecuteResult.taskID)
+    
+    // 4. Get the execution status of the script
+    var jobInfoResult = client.getJobInfo(jobExecuteResult)
+    val sleepTimeMills: Int = 1000
+    while (!jobInfoResult.isCompleted) {
+      // 5. Get the execution progress of the script
+      val progress = client.progress(jobExecuteResult)
+      val progressInfo = if (progress.getProgressInfo != null) progress.getProgressInfo.toList else List.empty
+      println("progress: "+ progress.getProgress + ", progressInfo:" + progressInfo)
+      Utils.sleepQuietly(sleepTimeMills)
+      jobInfoResult = client.getJobInfo(jobExecuteResult)
+    }
+    if (!jobInfoResult.isSucceed) {
+      println("Failed to execute job: "+ jobInfoResult.getMessage)
+      throw new Exception(jobInfoResult.getMessage)
+    }
+
+    // 6. Get the job information of the script
+    val jobInfo = client.getJobInfo(jobExecuteResult)
+    // 7. Get the list of result sets (if the user submits multiple SQL at a time, multiple result sets will be generated)
+    val resultSetList = jobInfoResult.getResultSetList(client)
+    println("All result set list:")
+    resultSetList.foreach(println)
+    val oneResultSet = jobInfo.getResultSetList(client).head
+    // 8. Get a specific result set through a result set information
+    val fileContents = client.resultSet(ResultSetAction.builder().setPath(oneResultSet).setUser(jobExecuteResult.getUser).build()).getFileContent
+    println("First fileContents: ")
+    println(fileContents)
+  } catch {
+    case e: Exception => {
+      e.printStackTrace()
+    }
+  }
+  IOUtils.closeQuietly(client)
+}
+```
+
+## 3. Linkis1.0 new submit interface with Label support
+
+Linkis 1.0 adds the client.submit method, which adapts to the new task execution interface of 1.0 and supports passing in Labels and other parameters.
+
+### 3.1 Java Test Class
+
+```java
+package com.webank.wedatasphere.linkis.client.test;
+
+import com.webank.wedatasphere.linkis.common.utils.Utils;
+import com.webank.wedatasphere.linkis.httpclient.dws.authentication.StaticAuthenticationStrategy;
+import com.webank.wedatasphere.linkis.httpclient.dws.config.DWSClientConfig;
+import com.webank.wedatasphere.linkis.httpclient.dws.config.DWSClientConfigBuilder;
+import com.webank.wedatasphere.linkis.manager.label.constant.LabelKeyConstant;
+import com.webank.wedatasphere.linkis.protocol.constants.TaskConstant;
+import com.webank.wedatasphere.linkis.ujes.client.UJESClient;
+import com.webank.wedatasphere.linkis.ujes.client.UJESClientImpl;
+import com.webank.wedatasphere.linkis.ujes.client.request.JobSubmitAction;
+import com.webank.wedatasphere.linkis.ujes.client.request.ResultSetAction;
+import com.webank.wedatasphere.linkis.ujes.client.response.JobExecuteResult;
+import com.webank.wedatasphere.linkis.ujes.client.response.JobInfoResult;
+import com.webank.wedatasphere.linkis.ujes.client.response.JobProgressResult;
+import org.apache.commons.io.IOUtils;
+
+import java.util.HashMap;
+import java.util.Map;
+import java.util.concurrent.TimeUnit;
+
+public class JavaClientTest {
+
+    public static void main(String[] args){
+
+        String user = "hadoop";
+        String executeCode = "show tables";
+
+        // 1. Configure ClientBuilder and get ClientConfig
+        DWSClientConfig clientConfig = ((DWSClientConfigBuilder) (DWSClientConfigBuilder.newBuilder()
+                .addServerUrl("http://${ip}:${port}") //Specify ServerUrl, the address of the linkis server-side gateway, such as http://{ip}:{port}
+                .connectionTimeout(30000) //connectionTimeOut client connection timeout
+                .discoveryEnabled(false).discoveryFrequency(1, TimeUnit.MINUTES) //Whether to enable registration discovery, if enabled, the newly launched Gateway will be automatically discovered
+                .loadbalancerEnabled(true) // Whether to enable load balancing, if registration discovery is not enabled, load balancing is meaningless
+                .maxConnectionSize(5) //Specify the maximum number of connections, that is, the maximum number of concurrent
+                .retryEnabled(false).readTimeout(30000) //execution failed, whether to allow retry
+                .setAuthenticationStrategy(new StaticAuthenticationStrategy()) //AuthenticationStrategy Linkis authentication method
+                .setAuthTokenKey("${username}").setAuthTokenValue("${password}"))) //Authentication key, generally the user name; authentication value, generally the password corresponding to the user name
+                .setDWSVersion("v1").build(); //Linkis background protocol version, the current version is v1
+
+        // 2. Get a UJESClient through DWSClientConfig
+        UJESClient client = new UJESClientImpl(clientConfig);
+
+        try {
+            // 3. Start code execution
+            System.out.println("user: "+ user + ", code: [" + executeCode + "]");
+            Map<String, Object> startupMap = new HashMap<String, Object>();
+            // A variety of startup parameters can be stored in startupMap, see linkis management console configuration
+            startupMap.put("wds.linkis.yarnqueue", "q02");
+            //Specify Label
+            Map<String, Object> labels = new HashMap<String, Object>();
+            //Add the label that this execution depends on: EngineTypeLabel/UserCreatorLabel/EngineRunTypeLabel
+            labels.put(LabelKeyConstant.ENGINE_TYPE_KEY, "hive-1.2.1");
+            labels.put(LabelKeyConstant.USER_CREATOR_TYPE_KEY, "hadoop-IDE");
+            labels.put(LabelKeyConstant.ENGINE_RUN_TYPE_KEY, "hql");
+            //Specify source
+            Map<String, Object> source = new HashMap<String, Object>();
+            source.put(TaskConstant.SCRIPTPATH, "LinkisClient-test");
+            JobExecuteResult jobExecuteResult = client.submit( JobSubmitAction.builder()
+                    .addExecuteCode(executeCode)
+                    .setStartupParams(startupMap)
+                    .setUser(user)//Job submit user
+                    .addExecuteUser(user)//The actual execution user
+                    .setLabels(labels)
+                    .setSource(source)
+                    .build()
+            );
+            System.out.println("execId: "+ jobExecuteResult.getExecID() + ", taskId:" + jobExecuteResult.taskID());
+
+            // 4. Get the execution status of the script
+            JobInfoResult jobInfoResult = client.getJobInfo(jobExecuteResult);
+            int sleepTimeMills = 1000;
+            while(!jobInfoResult.isCompleted()) {
+                // 5. Get the execution progress of the script
+                JobProgressResult progress = client.progress(jobExecuteResult);
+                Utils.sleepQuietly(sleepTimeMills);
+                jobInfoResult = client.getJobInfo(jobExecuteResult);
+            }
+
+            // 6. Get the job information of the script
+            JobInfoResult jobInfo = client.getJobInfo(jobExecuteResult);
+            // 7. Get the list of result sets (if the user submits multiple SQL at a time, multiple result sets will be generated)
+            String resultSet = jobInfo.getResultSetList(client)[0];
+            // 8. Get a specific result set through a result set information
+            Object fileContents = client.resultSet(ResultSetAction.builder().setPath(resultSet).setUser(jobExecuteResult.getUser()).build()).getFileContent();
+            System.out.println("fileContents: "+ fileContents);
+
+        } catch (Exception e) {
+            e.printStackTrace();
+            IOUtils.closeQuietly(client);
+        }
+        IOUtils.closeQuietly(client);
+    }
+}
+
+```
+
+### 3.2 Scala Test Class
+
+```scala
+package com.webank.wedatasphere.linkis.client.test
+
+import java.util
+import java.util.concurrent.TimeUnit
+
+import com.webank.wedatasphere.linkis.common.utils.Utils
+import com.webank.wedatasphere.linkis.httpclient.dws.authentication.StaticAuthenticationStrategy
+import com.webank.wedatasphere.linkis.httpclient.dws.config.DWSClientConfigBuilder
+import com.webank.wedatasphere.linkis.manager.label.constant.LabelKeyConstant
+import com.webank.wedatasphere.linkis.protocol.constants.TaskConstant
+import com.webank.wedatasphere.linkis.ujes.client.UJESClient
+import com.webank.wedatasphere.linkis.ujes.client.request.{JobSubmitAction, ResultSetAction}
+import org.apache.commons.io.IOUtils
+
+
+object ScalaClientTest {
+
+  def main(args: Array[String]): Unit = {
+    val executeCode = "show tables"
+    val user = "hadoop"
+
+    // 1. Configure DWSClientBuilder, get a DWSClientConfig through DWSClientBuilder
+    val clientConfig = DWSClientConfigBuilder.newBuilder()
+      .addServerUrl("http://${ip}:${port}") //Specify ServerUrl, the address of the Linkis server-side gateway, such as http://{ip}:{port}
+      .connectionTimeout(30000) //connectionTimeOut client connection timeout
+      .discoveryEnabled(false).discoveryFrequency(1, TimeUnit.MINUTES) //Whether to enable registration discovery, if enabled, the newly launched Gateway will be automatically discovered
+      .loadbalancerEnabled(true) // Whether to enable load balancing, if registration discovery is not enabled, load balancing is meaningless
+      .maxConnectionSize(5) //Specify the maximum number of connections, that is, the maximum number of concurrent
+      .retryEnabled(false).readTimeout(30000) //execution failed, whether to allow retry
+      .setAuthenticationStrategy(new StaticAuthenticationStrategy()) //AuthenticationStrategy Linkis authentication method
+      .setAuthTokenKey("${username}").setAuthTokenValue("${password}") //Authentication key, generally the user name; authentication value, generally the password corresponding to the user name
+      .setDWSVersion("v1").build() //Linkis backend protocol version, the current version is v1
+
+    // 2. Get a UJESClient through DWSClientConfig
+    val client = UJESClient(clientConfig)
+
+    try {
+      // 3. Start code execution
+      println("user: "+ user + ", code: [" + executeCode + "]")
+      val startupMap = new java.util.HashMap[String, Any]()
+      startupMap.put("wds.linkis.yarnqueue", "q02") //Startup parameter configuration
+      //Specify Label
+      val labels: util.Map[String, Any] = new util.HashMap[String, Any]
+      //Add the label that this execution depends on, such as engineLabel
+      labels.put(LabelKeyConstant.ENGINE_TYPE_KEY, "hive-1.2.1")
+      labels.put(LabelKeyConstant.USER_CREATOR_TYPE_KEY, "hadoop-IDE")
+      labels.put(LabelKeyConstant.ENGINE_RUN_TYPE_KEY, "hql")
+      //Specify source
+      val source: util.Map[String, Any] = new util.HashMap[String, Any]
+      source.put(TaskConstant.SCRIPTPATH, "LinkisClient-test")
+      val jobExecuteResult = client.submit(JobSubmitAction.builder
+          .addExecuteCode(executeCode)
+          .setStartupParams(startupMap)
+          .setUser(user) //Job submit user
+          .addExecuteUser(user) //The actual execution user
+          .setLabels(labels)
+          .setSource(source)
+          .build) //User, requesting user; used for user-level multi-tenant isolation
+      println("execId: "+ jobExecuteResult.getExecID + ", taskId:" + jobExecuteResult.taskID)
+
+      // 4. Get the execution status of the script
+      var jobInfoResult = client.getJobInfo(jobExecuteResult)
+      val sleepTimeMills: Int = 1000
+      while (!jobInfoResult.isCompleted) {
+        // 5. Get the execution progress of the script
+        val progress = client.progress(jobExecuteResult)
+        val progressInfo = if (progress.getProgressInfo != null) progress.getProgressInfo.toList else List.empty
+        println("progress: "+ progress.getProgress + ", progressInfo:" + progressInfo)
+        Utils.sleepQuietly(sleepTimeMills)
+        jobInfoResult = client.getJobInfo(jobExecuteResult)
+      }
+      if (!jobInfoResult.isSucceed) {
+        println("Failed to execute job: "+ jobInfoResult.getMessage)
+        throw new Exception(jobInfoResult.getMessage)
+      }
+
+      // 6. Get the job information of the script
+      val jobInfo = client.getJobInfo(jobExecuteResult)
+      // 7. Get the list of result sets (if the user submits multiple SQL at a time, multiple result sets will be generated)
+      val resultSetList = jobInfoResult.getResultSetList(client)
+      println("All result set list:")
+      resultSetList.foreach(println)
+      val oneResultSet = jobInfo.getResultSetList(client).head
+      // 8. Get a specific result set through a result set information
+      val fileContents = client.resultSet(ResultSetAction.builder().setPath(oneResultSet).setUser(jobExecuteResult.getUser).build()).getFileContent
+      println("First fileContents: ")
+      println(fileContents)
+    } catch {
+      case e: Exception => {
+        e.printStackTrace()
+      }
+    }
+    IOUtils.closeQuietly(client)
+  }
+
+}
+
+```
diff --git a/Linkis-Doc-master/en_US/User_Manual/LinkisCli_Usage_document.md b/Linkis-Doc-master/en_US/User_Manual/LinkisCli_Usage_document.md
new file mode 100644
index 0000000..0188013
--- /dev/null
+++ b/Linkis-Doc-master/en_US/User_Manual/LinkisCli_Usage_document.md
@@ -0,0 +1,191 @@
+Linkis-Cli usage documentation
+============
+
+## Introduction
+
+Linkis-Cli is a shell command line program used to submit tasks to Linkis.
+
+## Basic case
+
+You can simply submit a task to Linkis by referring to the example below
+
+The first step is to check whether the default configuration file `linkis-cli.properties` exists in the conf/ directory, and it contains the following configuration:
+
+```properties
+   wds.linkis.client.common.gatewayUrl=http://127.0.0.1:9001
+   wds.linkis.client.common.authStrategy=token
+   wds.linkis.client.common.tokenKey=Validation-Code
+   wds.linkis.client.common.tokenValue=BML-AUTH
+```
+
+The second step is to enter the linkis installation directory and enter the command:
+
+```bash
+    ./bin/linkis-client -engineType spark-2.4.3 -codeType sql -code "select count(*) from testdb.test;"  -submitUser hadoop -proxyUser hadoop 
+```
+
+In the third step, you will see the information on the console that the task has been submitted to linkis and started to execute.
+
+Linkis-cli currently only supports synchronous submission, that is, after submitting a task to linkis, it will continue to inquire about the task status and pull task logs until the task ends. If the status is successful at the end of the task, linkis-cli will also actively pull the result set and output it.
+
+
+## How to use
+
+```bash
+   ./bin/linkis-client [parameter] [cli parameter]
+```
+
+## Supported parameter list
+
+* cli parameters
+
+    | Parameter | Description | Data Type | Is Required |
+    | ----------- | -------------------------- | --------- | ----------- |
+    | --gwUrl | Manually specify the linkis gateway address | String | No |
+    | --authStg | Specify authentication policy | String | No |
+    | --authKey | Specify authentication key | String | No |
+    | --authVal | Specify authentication value | String | No |
+    | --userConf | Specify the configuration file location | String | No |
+
+* Parameters
+
+    | Parameter | Description | Data Type | Is Required |
+    | ----------- | -------------------------- | --------- | ----------- |
+    | -engType | Engine Type | String | Yes |
+    | -runType | Execution Type | String | Yes |
+    | -code | Execution code | String | No |
+    | -codePath | Local execution code file path | String | No |
+    | -smtUsr | Specify the submitting user | String | No |
+    | -pxyUsr | Specify the execution user | String | No |
+    | -creator | Specify creator | String | No |
+    | -scriptPath | Specify the script path | String | No |
+    | -outPath | Path of output result set to file | String | No |
+    | -confMap | configuration map | Map | No |
+    | -varMap | variable map for variable substitution | Map | No |
+    | -labelMap | linkis labelMap | Map | No |
+    | -sourceMap | Specify linkis sourceMap | Map | No |
+
+
+## Detailed example
+
+#### 1. Add cli parameters
+
+Cli parameters can be passed in manually; values specified this way will override the conflicting configuration items in the default configuration file
+
+```bash
+    ./bin/linkis-client -engineType spark-2.4.3 -codeType sql -code "select count(*) from testdb.test;" -submitUser hadoop -proxyUser hadoop --gwUrl http://127.0.0.1:9001 --authStg token --authKey [tokenKey] --authVal [tokenValue]
+```
+
+#### 2. Add engine initial parameters
+
+The initial parameters of the engine can be added through the `-confMap` parameter. Note that the data type of the parameter is Map. The input format of the command line is as follows:
+
+        -confMap key1=val1,key2=val2,...
+        
+For example: the following example sets startup parameters such as the yarn queue for engine startup and the number of spark executors:
+
+```bash
+   ./bin/linkis-client -engineType spark-2.4.3 -codeType sql -confMap wds.linkis.yarnqueue=q02,spark.executor.instances=3 -code "select count(*) from testdb.test;"  -submitUser hadoop -proxyUser hadoop  
+```
+
+Of course, these parameters can also be read from a configuration file; we will cover that later
+
+#### 3. Add labels
+
+Labels can be added through the `-labelMap` parameter. Like the `-confMap`, the type of the `-labelMap` parameter is also Map:
+
+```bash
+   ./bin/linkis-client -engineType spark-2.4.3 -codeType sql -labelMap labelKey=labelVal -code "select count(*) from testdb.test;"  -submitUser hadoop -proxyUser hadoop  
+```
+
+#### 4. Variable replacement
+
+Linkis-cli variable substitution is implemented through the `${}` symbol and the `-varMap` parameter
+
+```bash
+   ./bin/linkis-client -engineType spark-2.4.3 -codeType sql -code "select count(*) from \${key};" -varMap key=testdb.test  -submitUser hadoop -proxyUser hadoop  
+```
+
+During execution, the sql statement will be replaced with:
+
+```mysql-sql
+   select count(*) from testdb.test
+```  
+        
+Note that the escape character in `'\$'` prevents the variable from being expanded in advance by the Linux shell. If the code is passed as a local script via `-codePath`, the escape character is not required (see the sketch below)
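+
+For reference, a hypothetical sketch of the local script mode with the same variable substitution (the script path /tmp/test.sql is only an example):
+
+```bash
+# write the statement to a local script; no shell escaping is needed inside the file
+echo 'select count(*) from ${key};' > /tmp/test.sql
+# submit the script file instead of inline code
+./bin/linkis-client -engineType spark-2.4.3 -codeType sql -codePath /tmp/test.sql -varMap key=testdb.test -submitUser hadoop -proxyUser hadoop
+```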
+
+#### 5. Use user configuration
+
+1. linkis-cli supports loading user-defined configuration files. The configuration file path is specified with the `--userConf` parameter, and the file must be in `.properties` format
+        
+```bash
+   ./bin/linkis-client -engineType spark-2.4.3 -codeType sql -code "select count(*) from testdb.test;"  -submitUser hadoop -proxyUser hadoop  --userConf [configuration file path]
+``` 
+        
+        
+2. Which parameters can be configured?
+
+All parameters can be configured, for example:
+
+cli parameters:
+
+```properties
+   wds.linkis.client.common.gatewayUrl=http://127.0.0.1:9001
+   wds.linkis.client.common.authStrategy=static
+   wds.linkis.client.common.tokenKey=[tokenKey]
+   wds.linkis.client.common.tokenValue=[tokenValue]
+```
+
+parameter:
+
+```properties
+   wds.linkis.client.label.engineType=spark-2.4.3
+   wds.linkis.client.label.codeType=sql
+```
+        
+When Map-type parameters are configured, the format of the key is
+
+        [Map prefix] + [key]
+
+The Map prefix includes:
+
+ - ExecutionMap prefix: wds.linkis.client.exec
+ - sourceMap prefix: wds.linkis.client.source
+ - ConfigurationMap prefix: wds.linkis.client.param.conf
+ - runtimeMap prefix: wds.linkis.client.param.runtime
+ - labelMap prefix: wds.linkis.client.label
+        
+Note:
+
+1. variableMap does not support configuration
+
+2. When there is a conflict between the configured key and the key entered in the command parameter, the priority is as follows:
+
+        Command-line parameters > keys in command-line Map-type parameters > user configuration > default configuration
+        
+Example:
+
+Configure engine startup parameters:
+
+```properties
+   wds.linkis.client.param.conf.spark.executor.instances=3
+   wds.linkis.client.param.conf.wds.linkis.yarnqueue=q02
+```
+        
+Configure labelMap parameters:
+
+```properties
+   wds.linkis.client.label.myLabel=label123
+```
+        
+#### 6. Output result set to file
+
+Use the `-outPath` parameter to specify an output directory. linkis-cli will write the result sets to files, creating one file per result set. The output file names are in the following format:
+
+        task-[taskId]-result-[idx].txt
+        
+For example:
+
+        task-906-result-1.txt
+        task-906-result-2.txt
+        task-906-result-3.txt
\ No newline at end of file
diff --git a/Linkis-Doc-master/en_US/User_Manual/Linkis_Console_User_Manual.md b/Linkis-Doc-master/en_US/User_Manual/Linkis_Console_User_Manual.md
new file mode 100644
index 0000000..1d6704e
--- /dev/null
+++ b/Linkis-Doc-master/en_US/User_Manual/Linkis_Console_User_Manual.md
@@ -0,0 +1,120 @@
+Introduction to Computation Governance Console
+==============
+
+> Linkis1.0 has added a new Computation Governance Console page, which provides users with an interactive UI for viewing the execution of Linkis tasks, custom parameter configuration, engine health status, remaining resources, etc., thereby simplifying user development and management work.
+
+Structure of Computation Governance Console
+==============
+
+> The Computation Governance Console is mainly composed of the following functional pages:
+
+-[Global History](#Global_History)
+
+-[Resource Management](#Resource_management)
+
+-[Parameter Configuration](#Parameter_Configuration)
+
+-[Global Variables](#Global_Variables)
+
+-[ECM Management](#ECM_management) (Only visible to linkis computing management console administrators)
+
+-[Microservice Management](#Microservice_management) (Only visible to linkis computing management console administrators)
+
+-[FAQ](#FAQ)
+
+> Global history, resource management, parameter configuration, and global variables are visible to all users, while ECM management and microservice management are only visible to linkis computing management console administrators.
+
+> The administrators of the Linkis Computation Governance Console can be configured through the following parameter in linkis.properties:
+
+> `wds.linkis.governance.station.admin=hadoop` (multiple administrator usernames are separated by ',')
+
+Introduction to the functions and use of the Computation Governance Console
+========================
+
+Global history
+--------
+
+> ![](Images/Global History Interface.png)
+
+
+> The global history interface provides the user's own linkis task submission record. The execution status of each task can be displayed here, and the reason for the failure of task execution can also be queried by clicking the view button on the left side of the task
+
+> ![./media/image2.png](Images/Global History Query Button.png)
+
+
+> ![./media/image3.png](Images/task execution log of a single task.png)
+
+
+> For linkis computing management console administrators, the administrator can view the historical tasks of all users by clicking the switch administrator view on the page.
+
+> ![./media/image4.png](Images/Administrator View.png)
+
+
+Resource management
+--------
+
+> In the resource management interface, the user can see the status of the engine currently started and the status of resource occupation, and can also stop the engine through the page.
+
+> ![./media/image5.png](Images/Resource Management Interface.png)
+
+
+Parameter configuration
+--------
+
+> The parameter configuration interface provides the function of user-defined parameter management. The user can manage the related configuration of the engine in this interface, and the administrator can add application types and engines here.
+
+> ![./media/image6.png](Images/parameter configuration interface.png)
+
+
+> The user can expand all the configuration information in the directory by clicking the application type at the top and then selecting the engine type under the application; after modifying the configuration information, click "Save" for it to take effect.
+
+> Editing the catalog and adding new application types are only visible to the administrator. Click the edit button to delete an existing application or engine configuration (note: deleting an application directly deletes all engine configurations under it and cannot be undone), or to add an engine; or click "New Application" to add a new application type.
+
+> ![./media/image7.png](Images/edit directory.png)
+
+
+> ![./media/image8.png](Images/New application type.png)
+
+
+Global variable
+--------
+
+> In the global variable interface, users can customize variables for use in their code; just click the edit button to add parameters.
+
+> ![./media/image9.png](Images/Global Variable Interface.png)
+
+
+ECM management
+-------
+
+> The ECM management interface is used by the administrator to manage the ECMs and all engines. On this interface you can view ECM status information, modify ECM label and status information, and query all engine information under each ECM. It is only visible to administrators; the way administrators are configured is described in the second chapter of this article.
+
+> ![./media/image10.png](Images/ECM management interface.png)
+
+
+> Click the edit button to edit the ECM's label information (only some labels can be edited) and to modify the ECM's status.
+
+> ![./media/image11.png](Images/ECM editing interface.png)
+
+
+> Click the instance name of the ECM to view all engine information under the ECM.
+
+> ![](Images/Click the instance name to view engine information.png)
+
+> ![](Images/All engine information under ECM.png)
+
+> Similarly, you can stop the engine on this interface, and edit the label information of the engine.
+
+Microservice management
+----------
+
+> The microservice management interface shows all microservice information under Linkis and is only visible to administrators. Linkis's own microservices can be viewed by clicking the Eureka registry, while the microservices associated with Linkis are listed directly on this interface.
+
+> ![](Images/microservice management interface.png)
+
+> ![](Images/Eureka registration center.png)
+
+FAQ
+--------
+
+> To be added.
diff --git a/Linkis-Doc-master/en_US/User_Manual/README.md b/Linkis-Doc-master/en_US/User_Manual/README.md
new file mode 100644
index 0000000..442a32a
--- /dev/null
+++ b/Linkis-Doc-master/en_US/User_Manual/README.md
@@ -0,0 +1,8 @@
+# Overview
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Linkis was designed from the start with extensible access methods in mind. For different access scenarios, Linkis provides front-end access and SDK access, and HTTP and WebSocket interfaces are also provided on top of the front-end interface. If you are interested in accessing and using Linkis, you can refer to the following documents:
+
+- [How to use Linkis](How_To_Use_Linkis.md)
+- [Linkis Management Console User Manual](Linkis_Console_User_Manual.md)
+- [Linkis1.0 User Manual](Linkis1.0_User_Manual.md)
+- [Linkis-Cli Usage Document](LinkisCli_Usage_document.md)
diff --git "a/Linkis-Doc-master/zh_CN/API_Documentations/Linkis\344\273\273\345\212\241\346\217\220\344\272\244\346\211\247\350\241\214RestAPI\346\226\207\346\241\243.md" "b/Linkis-Doc-master/zh_CN/API_Documentations/Linkis\344\273\273\345\212\241\346\217\220\344\272\244\346\211\247\350\241\214RestAPI\346\226\207\346\241\243.md"
new file mode 100644
index 0000000..6e5493c
--- /dev/null
+++ "b/Linkis-Doc-master/zh_CN/API_Documentations/Linkis\344\273\273\345\212\241\346\217\220\344\272\244\346\211\247\350\241\214RestAPI\346\226\207\346\241\243.md"
@@ -0,0 +1,171 @@
+# Linkis 任务提交执行Rest API文档
+
+- Linkis Restful接口的返回,都遵循以下的标准返回格式:
+
+```json
+{
+ "method": "",
+ "status": 0,
+ "message": "",
+ "data": {}
+}
+```
+
+**约定**:
+
+ - method:返回请求的Restful API URI,主要是 WebSocket 模式需要使用。
+ - status:返回状态信息,其中:-1表示没有登录,0表示成功,1表示错误,2表示验证失败,3表示没该接口的访问权限。
+ - data:返回具体的数据。
+ - message:返回请求的提示信息。如果status非0时,message返回的是错误信息,其中data有可能存在stack字段,返回具体的堆栈信息。 
+ 
+更多关于 Linkis Restful 接口的规范,请参考:[Linkis Restful 接口规范](https://github.com/WeBankFinTech/Linkis-Doc/blob/master/zh_CN/Development_Documents/Development_Specification/API.md)
+
+### 1).提交执行
+
+- 接口 `/api/rest_j/v1/entrance/execute`
+
+- 提交方式 `POST`
+
+```json
+{
+    "executeApplicationName": "hive", //引擎类型
+    "requestApplicationName": "dss", //客户端服务类型
+    "executionCode": "show tables",
+    "params": {"variable": {}, "configuration": {}},
+    "runType": "hql", //运行的脚本类型
+   "source": {"scriptPath":"file:///tmp/hadoop/1.hql"}
+}
+```
+
+- 接口 `/api/rest_j/v1/entrance/submit`
+
+- 提交方式 `POST`
+
+```json
+{
+    "executionContent": {"code": "show tables", "runType":  "sql"},
+    "params": {"variable": {}, "configuration": {}},
+    "source":  {"scriptPath": "file:///mnt/bdp/hadoop/1.hql"},
+    "labels": {
+        "engineType": "spark-2.4.3",
+        "userCreator": "hadoop-IDE"
+    }
+}
+```
+
+
+- 返回示例
+
+```json
+{
+ "method": "/api/rest_j/v1/entrance/execute",
+ "status": 0,
+ "message": "请求执行成功",
+ "data": {
+   "execID": "030418IDEhivebdpdwc010004:10087IDE_hadoop_21",
+   "taskID": "123"  
+ }
+}
+```
+
+- execID是用户任务提交到 Linkis 之后,为该任务生成的唯一标识执行ID,为 String 类型,这个ID只在任务运行时有用,类似PID的概念。ExecID 的设计为`(requestApplicationName长度)(executeAppName长度)(Instance长度)${requestApplicationName}${executeApplicationName}${entranceInstance信息ip+port}${requestApplicationName}_${umUser}_${index}`
+
+- taskID 是表示用户提交task的唯一ID,这个ID由数据库自增生成,为 Long 类型
+
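+下面给出一个通过 `/api/rest_j/v1/entrance/submit` 提交任务并获取 execID/taskID 的最小示例。仅为示意:基于 JDK 自带的 HttpURLConnection,其中 Gateway 地址、Cookie 均为假设值,Cookie 需替换为通过登录接口获得的真实会话凭证:
+
+```java
+import java.io.BufferedReader;
+import java.io.InputStreamReader;
+import java.io.OutputStream;
+import java.net.HttpURLConnection;
+import java.net.URL;
+import java.nio.charset.StandardCharsets;
+
+public class EntranceSubmitExample {
+    public static void main(String[] args) throws Exception {
+        // 假设 Gateway 地址为 http://127.0.0.1:9001,Cookie 为登录接口返回的会话凭证(示例值)
+        String gateway = "http://127.0.0.1:9001";
+        String cookie = "bdp-user-ticket-id=xxx";
+        String body = "{"
+                + "\"executionContent\": {\"code\": \"show tables\", \"runType\": \"sql\"},"
+                + "\"params\": {\"variable\": {}, \"configuration\": {}},"
+                + "\"labels\": {\"engineType\": \"spark-2.4.3\", \"userCreator\": \"hadoop-IDE\"}"
+                + "}";
+
+        HttpURLConnection conn = (HttpURLConnection) new URL(gateway + "/api/rest_j/v1/entrance/submit").openConnection();
+        conn.setRequestMethod("POST");
+        conn.setRequestProperty("Content-Type", "application/json");
+        conn.setRequestProperty("Cookie", cookie);
+        conn.setDoOutput(true);
+        try (OutputStream os = conn.getOutputStream()) {
+            os.write(body.getBytes(StandardCharsets.UTF_8));
+        }
+        // 读取返回的 JSON,其中 data.execID 和 data.taskID 可用于后续的状态、日志、进度和 kill 接口
+        try (BufferedReader reader = new BufferedReader(
+                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
+            StringBuilder resp = new StringBuilder();
+            String line;
+            while ((line = reader.readLine()) != null) {
+                resp.append(line);
+            }
+            System.out.println(resp);
+        }
+    }
+}
+```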
+
+### 2).获取状态
+
+- 接口 `/api/rest_j/v1/entrance/${execID}/status`
+
+- 提交方式 `GET`
+
+- 返回示例
+
+```json
+{
+ "method": "/api/rest_j/v1/entrance/{execID}/status",
+ "status": 0,
+ "message": "获取状态成功",
+ "data": {
+   "execID": "${execID}",
+   "status": "Running"
+ }
+}
+```
+
+### 3).获取日志
+
+- 接口 `/api/rest_j/v1/entrance/${execID}/log?fromLine=${fromLine}&size=${size}`
+
+- 提交方式 `GET`
+
+- 请求参数fromLine是指从第几行开始获取,size是指该次请求获取几行日志
+
+- 返回示例,其中返回的fromLine需要作为下次请求该接口的参数
+
+```json
+{
+  "method": "/api/rest_j/v1/entrance/${execID}/log",
+  "status": 0,
+  "message": "返回日志信息",
+  "data": {
+    "execID": "${execID}",
+	"log": ["error日志","warn日志","info日志", "all日志"],
+	"fromLine": 56
+  }
+}
+```
+
+### 4).获取进度
+
+- 接口 `/api/rest_j/v1/entrance/${execID}/progress`
+
+- 提交方式 `GET`<br>
+
+- 返回示例
+
+```json
+{
+  "method": "/api/rest_j/v1/entrance/{execID}/progress",
+  "status": 0,
+  "message": "返回进度信息",
+  "data": {
+    "execID": "${execID}",
+	"progress": 0.2,
+	"progressInfo": [
+		{
+			"id": "job-1",
+			"succeedTasks": 2,
+			"failedTasks": 0,
+			"runningTasks": 5,
+			"totalTasks": 10
+		},
+		{
+			"id": "job-2",
+			"succeedTasks": 5,
+			"failedTasks": 0,
+			"runningTasks": 5,
+			"totalTasks": 10
+		}
+	]
+  }
+}
+```
+
+### 5).kill任务
+
+- 接口 `/api/rest_j/v1/entrance/${execID}/kill`
+
+- 提交方式 `GET`
+
+```json
+{
+ "method": "/api/rest_j/v1/entrance/{execID}/kill",
+ "status": 0,
+ "message": "OK",
+ "data": {
+   "execID":"${execID}"
+  }
+}
+```
+
diff --git a/Linkis-Doc-master/zh_CN/API_Documentations/Login_API.md b/Linkis-Doc-master/zh_CN/API_Documentations/Login_API.md
new file mode 100644
index 0000000..01c896f
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/API_Documentations/Login_API.md
@@ -0,0 +1,131 @@
+# 登录文档
+
+## 1.对接LDAP服务
+
+进入/conf目录,执行命令:
+
+```bash
+    vim linkis-mg-gateway.properties
+```    
+
+添加LDAP相关配置:
+```bash
+wds.linkis.ldap.proxy.url=ldap://127.0.0.1:389/ # 您的LDAP服务URL
+wds.linkis.ldap.proxy.baseDN=dc=webank,dc=com # 您的LDAP服务的配置    
+```    
+    
+## 2.如何打开测试模式,实现免登录
+
+进入/conf目录,执行命令:
+
+```bash
+     vim linkis-mg-gateway.properties
+```
+    
+    
+将测试模式打开,参数如下:
+
+```shell
+    wds.linkis.test.mode=true   # 打开测试模式
+    wds.linkis.test.user=hadoop  # 指定测试模式下,所有请求都代理给哪个用户
+```
+
+## 3.登录接口汇总
+
+我们提供以下几个与登录相关的接口:
+
+ - [登录](#1登录)
+
+ - [登出](#2登出)
+
+ - [心跳](#3心跳)
+ 
+
+## 4.接口详解
+
+- Linkis Restful接口的返回,都遵循以下的标准返回格式:
+
+```json
+{
+ "method": "",
+ "status": 0,
+ "message": "",
+ "data": {}
+}
+```
+
+**约定**:
+
+ - method:返回请求的Restful API URI,主要是 WebSocket 模式需要使用。
+ - status:返回状态信息,其中:-1表示没有登录,0表示成功,1表示错误,2表示验证失败,3表示没该接口的访问权限。
+ - data:返回具体的数据。
+ - message:返回请求的提示信息。如果status非0时,message返回的是错误信息,其中data有可能存在stack字段,返回具体的堆栈信息。 
+ 
+更多关于 Linkis Restful 接口的规范,请参考:[Linkis Restful 接口规范](https://github.com/WeBankFinTech/Linkis-Doc/blob/master/zh_CN/Development_Documents/Development_Specification/API.md)
+
+### 1).登录
+
+- 接口 `/api/rest_j/v1/user/login`
+
+- 提交方式 `POST`
+
+```json
+      {
+        "userName": "",
+        "password": ""
+      }
+```
+
+- 返回示例
+
+```json
+    {
+        "method": null,
+        "status": 0,
+        "message": "login successful(登录成功)!",
+        "data": {
+            "isAdmin": false,
+            "userName": ""
+        }
+     }
+```
+
+其中:
+
+ - isAdmin: Linkis只有admin用户和非admin用户,admin用户的唯一特权,就是支持在Linkis管理台查看所有用户的历史任务。
+
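+下面给出一个调用登录接口的最小示意(基于 JDK 自带的 HttpURLConnection,Gateway 地址、用户名、密码均为假设值,请按实际环境替换;登录成功后响应头 Set-Cookie 中的会话凭证需要在后续请求中携带):
+
+```java
+import java.io.OutputStream;
+import java.net.HttpURLConnection;
+import java.net.URL;
+import java.nio.charset.StandardCharsets;
+
+public class LoginExample {
+    public static void main(String[] args) throws Exception {
+        // 假设 Gateway 地址为 http://127.0.0.1:9001
+        URL url = new URL("http://127.0.0.1:9001/api/rest_j/v1/user/login");
+        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
+        conn.setRequestMethod("POST");
+        conn.setRequestProperty("Content-Type", "application/json");
+        conn.setDoOutput(true);
+        String body = "{\"userName\": \"hadoop\", \"password\": \"******\"}";
+        try (OutputStream os = conn.getOutputStream()) {
+            os.write(body.getBytes(StandardCharsets.UTF_8));
+        }
+        // 打印状态码和 Set-Cookie,后续请求需携带该 Cookie 作为会话凭证
+        System.out.println("status: " + conn.getResponseCode());
+        System.out.println("Set-Cookie: " + conn.getHeaderFields().get("Set-Cookie"));
+        conn.disconnect();
+    }
+}
+```
+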
+### 2).登出
+
+- 接口 `/api/rest_j/v1/user/logout`
+
+- 提交方式 `POST`
+
+  无参数
+
+- 返回示例
+
+```json
+    {
+        "method": "/api/rest_j/v1/user/logout",
+        "status": 0,
+        "message": "退出登录成功!"
+    }
+```
+
+### 3).心跳
+
+- 接口 `/api/rest_j/v1/user/heartbeat`
+
+- 提交方式 `POST`
+
+  无参数
+
+- 返回示例
+
+```json
+    {
+         "method": "/api/rest_j/v1/user/heartbeat",
+         "status": 0,
+         "message": "维系心跳成功!"
+    }
+```
diff --git a/Linkis-Doc-master/zh_CN/API_Documentations/README.md b/Linkis-Doc-master/zh_CN/API_Documentations/README.md
new file mode 100644
index 0000000..9f952b6
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/API_Documentations/README.md
@@ -0,0 +1,8 @@
+## 1. 文档说明
+Linkis1.0 在Linkis0.x版本的基础上进行了重构优化,同时也兼容了0.x的接口,但是为了防止在使用1.0版本时存在兼容性问题,需要您仔细阅读以下文档:
+
+1. 使用Linkis1.0定制化开发时,需要使用到Linkis的权限认证接口,请仔细阅读 [登录API文档](Login_API.md)。
+
+2. Linkis1.0提供JDBC的接口,需要使用JDBC的方式接入Linkis,请仔细阅读[任务提交执行JDBC API文档](任务提交执行JDBC_API文档.md)。
+
+3. Linkis1.0提供了Rest接口,如果需要在Linkis的基础上开发上层应用,请仔细阅读[任务提交执行Rest API文档](Linkis任务提交执行RestAPI文档.md)。
\ No newline at end of file
diff --git "a/Linkis-Doc-master/zh_CN/API_Documentations/\344\273\273\345\212\241\346\217\220\344\272\244\346\211\247\350\241\214JDBC_API\346\226\207\346\241\243.md" "b/Linkis-Doc-master/zh_CN/API_Documentations/\344\273\273\345\212\241\346\217\220\344\272\244\346\211\247\350\241\214JDBC_API\346\226\207\346\241\243.md"
new file mode 100644
index 0000000..1e365be
--- /dev/null
+++ "b/Linkis-Doc-master/zh_CN/API_Documentations/\344\273\273\345\212\241\346\217\220\344\272\244\346\211\247\350\241\214JDBC_API\346\226\207\346\241\243.md"
@@ -0,0 +1,46 @@
+# 任务提交执行JDBC API文档
+
+### 一、引入依赖模块:
+第一种方式在pom里面依赖JDBC模块:
+```xml
+<dependency>
+    <groupId>com.webank.wedatasphere.linkis</groupId>
+    <artifactId>linkis-ujes-jdbc</artifactId>
+    <version>${linkis.version}</version>
+ </dependency>
+```
+**注意:** 该模块还没有deploy到中央仓库,需要在ujes/jdbc目录里面执行`mvn install -Dmaven.test.skip=true`进行本地安装。
+
+**第二种方式通过打包和编译:**
+1. 在Linkis项目中进入到ujes/jdbc目录然后在终端输入指令进行打包`mvn assembly:assembly -Dmaven.test.skip=true`
+该打包指令会跳过单元测试的运行和测试代码的编译,并将JDBC模块需要的依赖一并打包进Jar包之中。
+2. 打包完成后在JDBC的target目录下会生成两个Jar包,Jar包名称中包含dependencies字样的那个就是我们需要的Jar包
+
+### 二、建立测试类:
+建立Java的测试类LinkisClientImplTestJ,具体接口含义可以见注释:
+```java
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.ResultSet;
+import java.sql.ResultSetMetaData;
+import java.sql.SQLException;
+import java.sql.Statement;
+
+public class LinkisClientImplTestJ {
+
+    public static void main(String[] args) throws SQLException, ClassNotFoundException {
+
+        //1. 加载驱动类:com.webank.wedatasphere.linkis.ujes.jdbc.UJESSQLDriver
+        Class.forName("com.webank.wedatasphere.linkis.ujes.jdbc.UJESSQLDriver");
+
+        //2. 获得连接:jdbc:linkis://gatewayIP:gatewayPort   帐号和密码对应前端的帐号密码
+        Connection connection = DriverManager.getConnection("jdbc:linkis://127.0.0.1:9001", "username", "password");
+
+        //3. 创建statement 和执行查询
+        Statement st = connection.createStatement();
+        ResultSet rs = st.executeQuery("show tables");
+
+        //4. 处理数据库的返回结果(使用ResultSet类)
+        while (rs.next()) {
+            ResultSetMetaData metaData = rs.getMetaData();
+            for (int i = 1; i <= metaData.getColumnCount(); i++) {
+                System.out.print(metaData.getColumnName(i) + ":" + metaData.getColumnTypeName(i) + ": " + rs.getObject(i) + "    ");
+            }
+            System.out.println();
+        }
+
+        //5. 关闭资源
+        rs.close();
+        st.close();
+        connection.close();
+    }
+}
\ No newline at end of file
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Commons/messagescheduler.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Commons/messagescheduler.md
new file mode 100644
index 0000000..4ed47a9
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Commons/messagescheduler.md
@@ -0,0 +1,15 @@
+# Linkis-Message-Scheduler
+## 1. 概述
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Linkis-RPC可以实现微服务之间的通信,为了简化RPC的使用方式,Linkis提供Message-Scheduler模块,通过如@Receiver注解的方式的解析识别与调用,同时,也统一了RPC和Restful接口的使用方式,具有更好的可拓展性。
+## 2. 架构说明
+## 2.1. 架构设计图
+![模块设计图](./../../Images/Architecture/Commons/linkis-message-scheduler.png)
+## 2.2. 模块说明
+* ServiceParser:解析Service模块的(Object)对象,同时把@Receiver注解的方法封装到ServiceMethod对象中。
+* ServiceRegistry:注册对应的Service模块,将Service解析后的ServiceMethod存储在Map容器中。
+* ImplicitParser:将Implicit模块的对象进行解析,使用@Implicit标注的方法会被封装到ImplicitMethod对象中。
+* ImplicitRegistry:注册对应的Implicit模块,将解析后的ImplicitMethod存储在一个Map容器中。
+* Converter:启动扫描RequestMethod的非接口非抽象的子类,并存储在Map中,解析Restful并匹配相关的RequestProtocol。
+* Publisher:实现发布调度功能,在Registry中找出匹配RequestProtocol的ServiceMethod,并封装为Job进行提交调度。
+* Scheduler:调度实现,使用Linkis-Sceduler执行Job,返回MessageJob对象。
+* TxManager:完成事务管理,对Job执行进行事务管理,在Job执行结束后判断是否进行Commit或者Rollback。
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Commons/rpc.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Commons/rpc.md
new file mode 100644
index 0000000..c89c578
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Commons/rpc.md
@@ -0,0 +1,17 @@
+# Linkis-RPC
+## 1. 概述
+基于Feign的微服务之间HTTP接口的调用,只能满足简单的A微服务实例根据简单的规则随机选择B微服务之中的某个服务实例,而这个B微服务实例如果想异步回传信息给调用方,是根本无法实现的。
+同时,由于Feign只支持简单的服务选取规则,无法做到将请求转发给指定的微服务实例,无法做到将一个请求广播给接收方微服务的所有实例。
+
+## 2. 架构说明
+## 2.1. 架构设计图
+![Linkis RPC架构图](./../../Images/Architecture/Commons/linkis-rpc.png)
+## 2.2. 模块说明
+主要模块的功能介绍如下:
+* Eureka:服务注册中心,用于管理服务、服务发现。
+* Sender发送器:服务请求接口,发送端使用Sender向接收端请求服务。
+* Receiver接收器:服务请求接收相应接口,接收端通过该接口响应服务。
+* Interceptor拦截器:Sender发送器会将使用者的请求传递给拦截器。拦截器拦截请求,对请求做额外的功能性处理,分别是广播拦截器用于对请求广播操作、重试拦截器用于对失败请求重试处理、缓存拦截器用于简单不变的请求读取缓存处理、和提供默认实现的默认拦截器。
+* Decoder,Encoder:用于请求的编码和解码。
+* Feign:是一个http请求调用的轻量级框架,声明式WebService客户端程序,用于Linkis-RPC底层通信。
+* Listener:监听模块,主要用于监听广播请求。
\ No newline at end of file
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/EngineConn/README.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/EngineConn/README.md
new file mode 100644
index 0000000..45389b1
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/EngineConn/README.md
@@ -0,0 +1,98 @@
+EngineConn架构设计
+==================
+
+EngineConn:引擎连接器,是Linkis与底层计算存储引擎之间的实际连接单元,负责创建并维持与底层引擎的会话Session,并为上层的Executor提供提交执行计算任务的能力。
+
+一、EngineConn架构图
+
+![EngineConn](../../../Images/Architecture/EngineConn/engineconn-01.png)
+
+二级模块介绍:
+==============
+
+linkis-computation-engineconn交互式引擎连接器
+---------------------------------------------
+
+提供交互式计算任务的能力。
+
+| 核心类               | 核心功能                                                   |
+|----------------------|------------------------------------------------------------|
+| EngineConnTask       | 定义了提交给EngineConn的交互式计算任务                     |
+| ComputationExecutor  | 定义了交互式Executor,具备状态查询、任务kill等交互式能力。 |
+| TaskExecutionService | 提供对交互式计算任务的管理功能                             |
+
+linkis-engineconn-common引擎连接器的通用模块
+--------------------------------------------
+
+1.  定义了引擎连接器中最基础的实体类和接口。EngineConn是用于创建一个底层计算存储引擎的连接会话Session,包含引擎与具体集群的会话信息,是与具体引擎通信的client。
+
+| 核心Service           | 核心功能                                                             |
+|-----------------------|----------------------------------------------------------------------|
+| EngineCreationContext | 包含了EngineConn在启动期间的上下文信息                               |
+| EngineConn            | 包含了EngineConn的具体信息,如类型、与底层计算存储引擎的具体连接信息等 |
+| EngineExecution       | 提供Executor的创建逻辑                                               |
+| EngineConnHook        | 定义引擎启动各个阶段前后的操作                                       |
+
+linkis-engineconn-core引擎连接器的核心逻辑
+------------------------------------------
+
+定义了EngineConn的核心逻辑涉及的接口。
+
+| 核心类            | 核心功能                           |
+|-------------------|------------------------------------|
+| EngineConnManager | 提供创建、获取EngineConn的相关接口 |
+| ExecutorManager   | 提供创建、获取Executor的相关接口   |
+| ShutdownHook      | 定义引擎关闭阶段的操作             |
+
+linkis-engineconn-launch引擎连接器启动模块
+------------------------------------------
+
+定义了如何启动EngineConn的逻辑。
+
+| 核心类           | 核心功能                 |
+|------------------|--------------------------|
+| EngineConnServer | EngineConn微服务的启动类 |
+
+linkis-executor-core执行器的核心逻辑
+------------------------------------
+
+>   定义了执行器相关的核心类。执行器是真正的计算场景执行器,负责将用户代码提交给EngineConn。
+
+| 核心类                     | 核心功能                                                   |
+|----------------------------|------------------------------------------------------------|
+| Executor                   | 是实际的计算逻辑执行单元,并提供对引擎各种能力的顶层抽象。 |
+| EngineConnAsyncEvent       | 定义了EngineConn相关的异步事件                             |
+| EngineConnSyncEvent        | 定义了EngineConn相关的同步事件                             |
+| EngineConnAsyncListener    | 定义了EngineConn相关异步事件监听器                         |
+| EngineConnSyncListener     | 定义了EngineConn相关同步事件监听器                         |
+| EngineConnAsyncListenerBus | 定义了EngineConn异步事件的监听器总线                       |
+| EngineConnSyncListenerBus  | 定义了EngineConn同步事件的监听器总线                       |
+| ExecutorListenerBusContext | 定义了EngineConn事件监听器的上下文                         |
+| LabelService               | 提供标签上报功能                                           |
+| ManagerService             | 提供与LinkisManager进行信息传递的功能                      |
+
+linkis-callback-service回调逻辑
+-------------------------------
+
+| 核心类             | 核心功能                 |
+|--------------------|--------------------------|
+| EngineConnCallback | 定义EngineConn的回调逻辑 |
+
+linkis-accessible-executor能够被访问的执行器
+--------------------------------------------
+
+能够被访问的Executor。可以通过RPC请求与它交互,从而获取它的状态、负载、并发等基础指标Metrics数据。
+
+| 核心类                   | 核心功能                                        |
+|--------------------------|-------------------------------------------------|
+| LogCache                 | 提供日志缓存的功能                              |
+| AccessibleExecutor       | 能够被访问的Executor,可以通过RPC请求与它交互。 |
+| NodeHealthyInfoManager   | 管理Executor的健康信息                          |
+| NodeHeartbeatMsgManager  | 管理Executor的心跳信息                          |
+| NodeOverLoadInfoManager  | 管理Executor的负载信息                          |
+| Listener                 | 提供与Executor相关的事件以及对应的监听器定义    |
+| EngineConnTimedLock      | 定义Executor级别的锁                            |
+| AccessibleService        | 提供Executor的启停、状态获取功能                |
+| ExecutorHeartbeatService | 提供Executor的心跳相关功能                      |
+| LockService              | 提供锁管理功能                                  |
+| LogService               | 提供日志管理功能                                |
diff --git "a/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/EngineConnManager/Images/ECM\346\236\266\346\236\204\345\233\276.png" "b/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/EngineConnManager/Images/ECM\346\236\266\346\236\204\345\233\276.png"
new file mode 100644
index 0000000..cc83842
Binary files /dev/null and "b/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/EngineConnManager/Images/ECM\346\236\266\346\236\204\345\233\276.png" differ
diff --git "a/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/EngineConnManager/Images/\345\210\233\345\273\272EngineConn\350\257\267\346\261\202\346\265\201\347\250\213.png" "b/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/EngineConnManager/Images/\345\210\233\345\273\272EngineConn\350\257\267\346\261\202\346\265\201\347\250\213.png"
new file mode 100644
index 0000000..303f37a
Binary files /dev/null and "b/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/EngineConnManager/Images/\345\210\233\345\273\272EngineConn\350\257\267\346\261\202\346\265\201\347\250\213.png" differ
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/EngineConnManager/README.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/EngineConnManager/README.md
new file mode 100644
index 0000000..2fa0aef
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/EngineConnManager/README.md
@@ -0,0 +1,49 @@
+EngineConnManager架构设计
+-------------------------
+
+EngineConnManager(ECM):EngineConn的管理器,提供引擎的生命周期管理,同时向RM汇报负载信息和自身的健康状况。
+
+### 一、ECM架构
+
+![](Images/ECM架构图.png)
+
+### 二、二级模块介绍
+
+**Linkis-engineconn-linux-launch**
+
+引擎启动器,核心类为LinuxProcessEngineConnLaunch,用于提供执行命令的指令。
+
+**Linkis-engineconn-manager-core**
+
+ECM的核心模块,包含ECM健康上报、EngineConn健康上报功能的顶层接口,定义了ECM服务的相关指标,以及构造EngineConn进程的核心方法。
+
+| 核心顶层接口/类     | 核心功能                                 |
+|---------------------|------------------------------------------|
+| EngineConn          | 定义了EngineConn的属性,包含的方法和参数 |
+| EngineConnLaunch    | 定义了EngineConn的启动方法和停止方法     |
+| ECMEvent            | 定义了ECM相关事件                        |
+| ECMEventListener    | 定义了ECM相关事件监听器                  |
+| ECMEventListenerBus | 定义了ECM的监听器总线                    |
+| ECMMetrics          | 定义了ECM的指标信息                      |
+| ECMHealthReport     | 定义了ECM的健康上报信息                  |
+| NodeHealthReport    | 定义了节点的健康上报信息                 |
+
+**Linkis-engineconn-manager-server**
+
+ECM的服务端,定义了ECM健康信息处理服务、ECM指标信息处理服务、ECM注册服务、EngineConn启动服务、EngineConn停止服务、EngineConn回调服务等顶层接口和实现类,主要用于ECM对自己和EngineConn的生命周期管理以及健康信息上报、发送心跳等。
+
+模块中的核心Service和功能简介如下:
+
+| 核心service                     | 核心功能                                        |
+|---------------------------------|-------------------------------------------------|
+| EngineConnLaunchService         | 包含生成EngineConn和启动进程的核心方法          |
+| BmlResourceLocallizationService | 用于将BML的引擎相关资源下载并生成本地化文件目录 |
+| ECMHealthService                | 向AM定时上报自身的健康心跳                      |
+| ECMMetricsService               | 向AM定时上报自身的指标状况                      |
+| EngineConnKillService           | 提供停止引擎的相关功能                          |
+| EngineConnListService           | 提供缓存和管理引擎的相关功能                    |
+| EngineConnCallBackService       | 提供回调引擎的功能                              |
+
+ECM构建EngineConn启动流程:
+
+![](Images/创建EngineConn请求流程.png)
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/EngineConnPlugin/README.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/EngineConnPlugin/README.md
new file mode 100644
index 0000000..798f535
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/EngineConnPlugin/README.md
@@ -0,0 +1,71 @@
+EngineConnPlugin(ECP)架构设计
+===============================
+
+引擎连接器插件是一种能够动态加载引擎连接器并减少版本冲突发生的实现,具有方便扩展、快速刷新、选择加载的特性。为了能让开发用户自由扩展Linkis的Engine引擎,并动态加载引擎依赖避免版本冲突,设计研发了EngineConnPlugin,允许以实现既定的插件化接口的方式引入新引擎到计算中间件的执行生命周期里,
+插件化接口对引擎的定义做了拆解,包括参数初始化、分配引擎资源,构建引擎连接以及设定引擎默认标签。
+
+一、ECP架构图
+
+![](../../../Images/Architecture/linkis-engineConnPlugin-01.png)
+
+二级模块介绍:
+==============
+
+EngineConn-Plugin-Server
+------------------------
+
+引擎连接器插件服务是对外提供注册插件、管理插件,以及插件资源构建的入口服务。成功注册加载的引擎插件会包含资源分配和启动参数配置的逻辑,在引擎初始化过程中,EngineConnManager等其他服务通过RPC请求调用Plugin Server里对应插件的逻辑。
+
+| 核心类                           | 核心功能                              |
+|----------------------------------|---------------------------------------|
+| EngineConnLaunchService          | 负责构建引擎连接器启动请求            |
+| EngineConnResourceFactoryService | 负责生成引擎资源                      |
+| EngineConnResourceService        | 负责从BML下载引擎连接器使用的资源文件 |
+
+
+EngineConn-Plugin-Loader 引擎连接器插件加载器
+---------------------------------------
+
+引擎连接器插件加载器是用来根据请求参数动态加载引擎连接器插件的加载器,并具有缓存的特性。具体加载流程主要由两部分组成:1)插件资源例如主程序包和程序依赖包等加载到本地(未开放)。2)插件资源从本地动态加载入服务进程环境中,例如通过类加载器加载入JVM虚拟机。
+
+| 核心类                          | 核心功能                                     |
+|---------------------------------|----------------------------------------------|
+| EngineConnPluginsResourceLoader | 加载引擎连接器插件资源                       |
+| EngineConnPluginsLoader         | 加载引擎连接器插件实例,或者从缓存加载已有的 |
+| EngineConnPluginClassLoader     | 动态从jar中实例化引擎连接器实例              |
+
+EngineConn-Plugin-Cache 引擎插件缓存模组
+----------------------------------------
+
+引擎连接器插件缓存是专门用来缓存已经加载的引擎连接器的缓存服务,并支持读取、更新、移除的能力。已经加载进服务进程的插件会被连同其类加载器一起缓存起来,避免多次加载影响效率;同时缓存模组会定时通知加载器去更新插件资源,如果发现有变动,会重新加载并自动刷新缓存。
+
+| 核心类                      | 核心功能                     |
+|-----------------------------|------------------------------|
+| EngineConnPluginCache       | 缓存已经加载的引擎连接器实例 |
+| RefreshPluginCacheContainer | 定时刷新缓存的引擎连接器     |
+
+EngineConn-Plugin-Core:引擎连接器插件核心模组
+---------------------------------------------
+
+引擎连接器插件核心模块是引擎连接器插件的核心模块。包含引擎插件基本功能实现,如引擎连接器启动命令构建,引擎资源工厂构建和引擎连接器插件核心接口实现。
+
+| 核心类                  | 核心功能                                                 |
+|-------------------------|----------------------------------------------------------|
+| EngineConnLaunchBuilder | 构建引擎连接器启动请求                                   |
+| EngineConnFactory       | 创建引擎连接器                                           |
+| EngineConnPlugin        | 引擎连接器插件实现接口,包括资源,命令,实例的构建方法。 |
+| EngineResourceFactory   | 引擎资源的创建工厂                                       |
+
+EngineConn-Plugins:引擎连接插件集合
+-----------------------------------
+
+引擎连接插件集合是用来放置已经基于我们定义的插件接口实现的默认引擎连接器插件库。提供了默认引擎连接器实现,如jdbc、spark、python、shell等。用户可以基于自己的需求参考已经实现的案例,实现更多的引擎连接器。
+
+| 核心类              | 核心功能         |
+|---------------------|------------------|
+| engineplugin-jdbc   | jdbc引擎连接器   |
+| engineplugin-shell  | shell引擎连接器  |
+| engineplugin-spark  | spark引擎连接器  |
+| engineplugin-python | python引擎连接器 |
+
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/Entrance/Entrance.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/Entrance/Entrance.md
new file mode 100644
index 0000000..38d3e56
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/Entrance/Entrance.md
@@ -0,0 +1,26 @@
+Entrance架构设计
+================
+
+Linkis任务提交入口是用来负责计算任务的接收、调度、转发执行请求、生命周期管理的服务,并且能把计算结果、日志、进度返回给调用方,是从Linkis0.X的Entrance拆分出来的原生能力。
+
+一、Entrance架构图
+
+![](../../../Images/Architecture/linkis-entrance-01.png)
+
+**二级模块介绍:**
+
+EntranceServer
+--------------
+
+EntranceServer计算任务提交入口服务是Entrance的核心服务,负责Linkis执行任务的接收、调度、执行状态跟踪、作业生命周期管理等。主要实现了把任务执行请求转成可调度的Job,调度、申请Executor执行,Job状态管理,结果集管理,日志管理等。
+
+| 核心类                  | 核心功能                                                                                                                                           |
+|-------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------|
+| EntranceInterceptor     | Entrance拦截器用来对传入参数task进行信息的补充,使得这个task的内容更加完整, 补充的信息包括: 数据库信息补充、自定义变量替换、代码检查、limit限制等 |
+| EntranceParser          | Entrance解析器用来把请求参数Map解析成Task,也可以将Task转成可调度的Job,或者把Job转成可存储的Task。                                                  |
+| EntranceExecutorManager | Entrance执行器管理为EntranceJob的执行创建Executor,并维护Job和Executor的关系,且支持Job请求的标签能力                                               |
+| PersistenceManager      | 持久化管理负责作业相关的持久化操作,如结果集路径、作业状态变化、进度等存储到数据库。                                                               |
+| ResultSetEngine         | 结果集引擎负责作业运行后的结果集存储,以文件的形式保存到HDFS或者本地存储目录。                                                                     |
+| LogManager              | 日志管理负责作业日志的存储并对接日志错误码管理。                                                                                                   |
+| Scheduler               | 作业调度器负责所有Job的调度执行,主要通过调度作业队列实现。                                                                                        |
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/LinkisClient/README.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/LinkisClient/README.md
new file mode 100644
index 0000000..7d36f0e
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/LinkisClient/README.md
@@ -0,0 +1,35 @@
+## Linkis-Client架构设计
+
+为用户提供向Linkis提交执行任务的轻量级客户端。
+
+#### Linkis-Client架构图
+
+![img](./../../../Images/Architecture/linkis-client-01.png)
+
+
+
+#### 二级模块介绍
+
+##### Linkis-Computation-Client
+
+以SDK的形式为用户提供向Linkis提交执行任务的接口。
+
+| 核心类     | 核心功能                                         |
+| ---------- | ------------------------------------------------ |
+| Action     | 定义了请求的属性,包含的方法和参数               |
+| Result     | 定义了返回结果的属性,包含的方法和参数           |
+| UJESClient | 负责请求的提交,执行,状态、结果和相关参数的获取 |
+
+ 
+
+#####  Linkis-Cli
+
+以shell命令端的形式为用户提供向Linkis提交执行任务的方式。
+
+| 核心类      | 核心功能                                                     |
+| ----------- | ------------------------------------------------------------ |
+| Common      | 定义了指令模板父类、指令解析实体类、任务提交执行各环节的父类和接口 |
+| Core        | 负责解析输入、任务执行和定义输出方式                         |
+| Application | 调用linkis-computation-client执行任务,并实时拉取日志和最终结果 |
+
+ 
\ No newline at end of file
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/LinkisManager/AppManager.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/LinkisManager/AppManager.md
new file mode 100644
index 0000000..c8fba23
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/LinkisManager/AppManager.md
@@ -0,0 +1,45 @@
+## 背景
+由于旧版本Linkis的Entrance模块负责了太多的职责,对Engine的管理能力较弱,且不易于后续的扩展,因此新抽出了AppManager模块,完成以下职责:
+1. 新增AM模块将Entrance之前做的管理Engine的功能移动到AM模块
+2. AM需要支持操作Engine,包括:新增、复用、回收、预热、切换等功能
+3. 需要对接Manager模块对外提供Engine的管理功能:包括Engine状态维护、引擎列表维护、引擎信息等
+4. AM需要管理EM服务,需要完成EM的注册并将资源注册转发给RM进行EM的资源注册
+5. AM需要对接Label模块,包括EM/Engine的增删需要通知标签管理器进行标签更新
+6. AM另外需要对接标签模块进行标签解析,并需要通过一系列标签获取一系列打好分的serverInstance列表(EM和Engine如何区分:两者的标签完全不一样)
+7. 需要对外提供基础接口:包括引擎和引擎管理器的增删改,提供metric查询等
+
+## 架构图
+
+![](../../../Images/Architecture/AppManager-03.png)
+
+如上图所示:AM在LinkisMaster中属于AppManager模块,作为一个Service提供服务
+
+新引擎申请流程图:
+![](../../../Images/Architecture/AppManager-02.png)
+
+
+从上面的引擎生命周期流程图可知,Entrance已经不在做Engine的管理工作,engine的启动和管理都由AM控制。
+
+## 架构说明:
+
+AppManager主要包含了引擎服务和EM服务:
+引擎服务包含了所有和引擎EngineConn相关的操作,如引擎创建、引擎复用、引擎切换、引擎回收、引擎停止、引擎销毁等。
+EM服务负责所有EngineConnManager的信息管理,可以在线上对ECM进行服务管理,包括标签修改,暂停ECM服务,获取ECM实例信息,获取ECM运行的引擎信息,kill掉ECM操作,还可以根据EM Node的信息查询所有的EngineNode,也支持按用户查找,保存了EM Node的负载信息、节点健康信息、资源使用信息等。
+新的EngineConnManager和EngineConn都支持标签管理,引擎的类型也增加了离线、流式、交互式支持。
+
+引擎创建:专门负责LinkisManager服务的新建引擎功能,引擎启动模块完全负责一个新引擎的创建,包括获取ECM标签集合、资源申请、获得引擎启动命令,通知ECM新建引擎,更新引擎列表等。
+CreateEngineRequest->RPC/Rest -> MasterEventHandler ->CreateEngineService ->
+->LabelContext/EnginePlugin/RMResourceService->(RecycleEngineService)EngineNodeManager->EMNodeManager->sender.ask(EngineLaunchRequest)->EngineManager服务->EngineNodeManager->EngineLocker->Engine->EngineNodeManager->EngineFactory=>EngineService=>ServerInstance
+在创建引擎时存在和RM交互的部分,EnginePlugin应该需要通过Labels返回具体的资源类型,然后AM向RM发送资源请求
+
+引擎复用:为了减少引擎启动所耗费的时间和资源,引擎使用必须优先考虑复用原则,复用一般是指复用用户已经创建好的引擎,引擎复用模块负责提供可复用引擎集合,选举并锁定引擎后开始使用,或者返回没有可以复用的引擎。
+ReuseEngineRequest->RPC/Rest -> MasterEventHandler ->ReuseEngineService ->
+->LabelContext->EngineNodeManager->EngineSelector->EngineLocker->Engine->EngineNodeManager->EngineReuser->EngineService=>ServerInstance
+
+引擎切换:主要是指对已有引擎进行标签切换,例如创建引擎的时候是由Creator1创建的,现在可以通过引擎切换改成Creator2。这个时候就可以允许当前引擎接收标签为Creator2的任务了。
+SwitchEngineRequest->RPC/Rest -> MasterEventHandler ->SwitchEngineService ->LabelContext/EnginePlugin/RMResourceService->EngineNodeManager->EngineLocker->Engine->EngineNodeManager->EngineReuser->EngineService=>ServerInstance
+
+引擎管理器:引擎管理负责管理所有引擎的基本信息、元数据信息
+
+
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/LinkisManager/LabelManager.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/LinkisManager/LabelManager.md
new file mode 100644
index 0000000..7c21f08
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/LinkisManager/LabelManager.md
@@ -0,0 +1,40 @@
+## LabelManager 架构设计
+
+#### 简述
+LabelManager是Linkis中对上层应用提供标签服务的功能模组,运用标签技术管理集群资源分配、服务节点选举、用户权限匹配以及网关路由转发;包含支持各种自定义Label标签的泛化解析处理工具,以及通用的标签匹配评分器。
+
+### 整体架构示意
+
+![整体架构示意图](../../../Images/Architecture/LabelManager/label_manager_global.png)  
+
+#### 架构说明
+- LabelBuilder: 承担着标签解析的工作,从输入的标签类型、关键字或者字符数值中解析得到具体的标签实体,有默认的泛化实现类也可做自定义扩展。
+- LabelEntities: 指代标签实体集合,有且包含集群标签,配置标签,引擎标签,节点标签,路由标签,搜索标签等。
+- NodeLabelService: 实例/节点与标签的关联服务接口类,定义对两者关联关系的增删改查以及根据标签匹配实例/节点的接口方法。
+- UserLabelService: 声明用户与标签的关联操作。
+- ResourceLabelService: 声明集群资源与标签的关联操作,涉及到对组合标签的资源管理,清理或设置标签关联的资源数值。
+- NodeLabelScorer: 节点标签评分器,对应不同的标签匹配算法的实现,使用评分表示节点的标签匹配度。
+
+### 一. LabelBuilder解析流程
+以泛化标签解析类GenericLabelBuilder为例,阐明整体流程:  
+![泛化标签解析流程](../../../Images/Architecture/LabelManager/label_manager_builder.png)  
+标签解析/构建的流程概括包含几步:  
+1. 根据输入选择要构建解析的合适标签类。
+2. 根据标签类的定义信息,递归解析泛型结构,得到具体的标签值类型。
+3. 转化输入值对象到标签值类型,运用隐式转化或正反解析框架。
+4. 根据1-3的返回,实例化标签,并根据不同的标签类进行一些后置操作。
+
+### 二. NodeLabelScorer打分流程
+为了根据Linkis用户执行请求中附带的标签列表挑选合适的引擎节点,需要对符合的引擎列表做择优,量化为引擎节点的标签匹配度即评分。  
+在标签定义里,每个标签都有feature特征值,分别为CORE,SUITABLE,PRIORITIZED,OPTIONAL,每个特征值都有一个boost值,相当于权重和激励值,
+同时有些特征例CORE和SUITABLE为必须唯一特征即在匹配过程中需做强过滤,且一个节点只能分别关联一个CORE/SUITABLE标签。  
+根据现有标签,节点,请求附带标签三者之间的关系,可以绘制出如下示意图:  
+![标签打分](../../../Images/Architecture/LabelManager/label_manager_scorer.png)  
+
+自带的默认评分逻辑过程应大体包含以下几点步骤:  
+1. 方法的输入应该为两组网络关系列表,分别是`Label -> Node` 和 `Node -> Label`, 其中`Node -> Label`关系里的Node节点必须具有请求里涉及到所有CORE以及SUITABLE特征的标签,这些节点也称为备选节点。
+2. 第一步遍历计算`Node -> Label`关系列表,遍历每个节点关联的标签Label,这一步先给标签打分,如果标签不是请求中附带的标签,打分为0,
+否则打分为: (基本分/该标签对应特征值在请求中的出现次数) * 对应特征值的激励值,其中基本分默认为1,节点的初始分为相关联的标签打分的总和;其中因为CORE/SUITABLE类型标签为必须唯一标签,出现次数恒定为1。
+3. 得到节点的初始分后,第二步遍历计算`Label -> Node`关系,由于第一步中忽略了非请求附带标签对评分的作用,但无关标签比重确实会对评分造成影响,对应这类的标签统一打上UNKNOWN的特征,同样该特征也有相对应的激励值;
+我们设定无关标签关联的备选节点占总关联节点的比重越高,对评分的影响越显著,以此可以对第一步得出的节点初始分做进一步累加。
+4. 对得到的备选节点的分数做标准差归一化,并排序。
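+
+下面用一段示意性的 Java 代码(并非 Linkis 源码,类名、入参结构均为演示用的假设)粗略还原上述步骤中"按出现次数与激励值打分、无关标签按占比累加、最后做标准差归一化"的思路,其中第二、三步做了简化处理:
+
+```java
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+public class NodeLabelScoreSketch {
+
+    // nodeToLabels:备选节点 -> 其关联的标签;requestLabelCount:请求附带标签 -> 对应特征值在请求中的出现次数;
+    // labelBoost:标签 -> 特征激励值;unknownBoost:UNKNOWN 特征的激励值
+    public static Map<String, Double> score(Map<String, List<String>> nodeToLabels,
+                                            Map<String, Integer> requestLabelCount,
+                                            Map<String, Double> labelBoost,
+                                            double unknownBoost) {
+        Map<String, Double> nodeScore = new HashMap<>();
+        for (Map.Entry<String, List<String>> entry : nodeToLabels.entrySet()) {
+            double sum = 0d;
+            int unknownCount = 0;
+            for (String label : entry.getValue()) {
+                Integer count = requestLabelCount.get(label);
+                if (count == null || count == 0) {
+                    unknownCount++; // 非请求附带的标签,第一步打分为 0
+                    continue;
+                }
+                double basic = 1d; // 基本分默认为 1
+                sum += basic / count * labelBoost.getOrDefault(label, 1d);
+            }
+            // 第二、三步的简化:无关标签占比越高,对初始分的累加影响越显著
+            double ratio = entry.getValue().isEmpty() ? 0d : (double) unknownCount / entry.getValue().size();
+            nodeScore.put(entry.getKey(), sum + ratio * unknownBoost);
+        }
+        return normalize(nodeScore); // 第四步:标准差归一化
+    }
+
+    private static Map<String, Double> normalize(Map<String, Double> scores) {
+        double mean = scores.values().stream().mapToDouble(Double::doubleValue).average().orElse(0d);
+        double std = Math.sqrt(scores.values().stream()
+                .mapToDouble(v -> (v - mean) * (v - mean)).average().orElse(0d));
+        Map<String, Double> result = new HashMap<>();
+        scores.forEach((k, v) -> result.put(k, std == 0d ? 0d : (v - mean) / std));
+        return result;
+    }
+}
+```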
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/LinkisManager/README.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/LinkisManager/README.md
new file mode 100644
index 0000000..8670a45
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/LinkisManager/README.md
@@ -0,0 +1,74 @@
+LinkisManager架构设计
+====================
+
+LinkisManager作为Linkis的一个独立微服务,对外提供了AppManager(应用管理)、ResourceManager(资源管理)、LabelManager(标签管理)的能力,能够支持多活部署,具备高可用、易扩展的特性。
+
+## 一. 架构图
+
+![01](../../../Images/Architecture/LinkisManager/LinkisManager-01.png)
+
+### 名词解释
+- EngineConnManager(ECM): 引擎管理器,用于启动和管理引擎
+- EngineConn(EC):引擎连接器,用于连接底层计算引擎
+- ResourceManager(RM):资源管理器,用于管理节点资源
+
+## 二. 二级模块介绍
+
+### 1. 应用管理模块 linkis-application-manager
+
+AppManager用于引擎的统一调度和管理
+
+| 核心接口/类 | 主要功能 |
+|------------|--------|
+|EMInfoService | 定义了EngineConnManager信息查询、修改功能 |
+|EMRegisterService| 定义了EngineConnManager注册功能 |
+|EMEngineService | 定义了EngineConnManager对EngineConn的创建、查询、关闭功能 |
+|EngineAskEngineService | 定义了查询EngineConn的功能 |
+|EngineConnStatusCallbackService | 定义了处理EngineConn状态回调的功能 |
+|EngineCreateService | 定义了创建EngineConn的功能 |
+|EngineInfoService | 定义了EngineConn查询功能 |
+|EngineKillService | 定义了EngineConn的停止功能 |
+|EngineRecycleService | 定义了EngineConn的回收功能 |
+|EngineReuseService | 定义了EngineConn的复用功能 |
+|EngineStopService | 定义了EngineConn的自毁功能 |
+|EngineSwitchService | 定义了引擎切换功能 |
+|AMHeartbeatService | 提供了EngineConnManager和EngineConn节点心跳处理功能 |
+
+
+通过AppManager申请引擎流程如下:
+![](../../../Images/Architecture/LinkisManager/AppManager-01.png)
+
+  
+### 2. 标签管理模块 linkis-label-manager
+
+LabelManager提供标签管理和解析能力
+
+| 核心接口/类 | 主要功能 |
+|------------|--------|
+|LabelService | 提供了标签增删改查功能 |
+|ResourceLabelService | 提供了资源标签管理功能 |
+|UserLabelService | 提供了用户标签管理功能 |
+
+LabelManager架构图如下:
+![](../../../Images/Architecture/LinkisManager/LabelManager-01.png)
+
+
+
+### 3. 资源管理模块 linkis-resource-manager
+
+ResourceManager用于管理引擎和队列的所有资源分配
+
+| 核心接口/类 | 主要功能 |
+|------------|--------|
+|RequestResourceService | 提供了EngineConn资源申请功能 |
+|ResourceManagerService | 提供了EngineConn资源释放功能 |
+|LabelResourceService | 提供了标签对应资源管理功能 |
+
+
+ResourceManager架构图如下:
+
+![](../../../Images/Architecture/LinkisManager/ResourceManager-01.png)
+
+### 4. 监控模块 linkis-manager-monitor
+
+Monitor提供了节点状态监控的功能
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/LinkisManager/ResourceManager.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/LinkisManager/ResourceManager.md
new file mode 100644
index 0000000..1c7bb99
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/LinkisManager/ResourceManager.md
@@ -0,0 +1,145 @@
+ResourceManager(简称RM),是Linkis的计算资源管理模块,所有的EngineConn(简称EC)、EngineConnManager(简称ECM),甚至包括Yarn在内的外部资源,都由RM负责统筹管理。RM能够基于用户、ECM或其它通过复杂标签定义的粒度对资源进行管控。
+
+### RM在Linkis中的作用
+![01](../../../Images/Architecture/rm-01.png)
+![02](../../../Images/Architecture/rm-02.png)
+RM作为Linkis
+Manager的一部分,主要作用为:维护ECM上报的可用资源信息,处理ECM提出的资源申请,记录成功申请后,EC在生命周期内实时上报的实际资源使用信息,并提供查询当前资源使用情况的相关接口。
+
+Linkis中,与RM产生交互的其它服务主要有:
+
+1.  引擎管理器,简称ECM:处理启动引擎连接器请求的微服务。ECM作为资源的提供者,负责向RM注册资源(register)和下线资源(unregister)。同时,ECM作为引擎的管理者,负责代替准备启动的新引擎连接器向RM申请资源。每一个ECM实例,均在RM中有一条对应的资源记录,包含它提供的总资源、保护资源等信息,并动态更新已使用资源。
+![03](../../../Images/Architecture/rm-03.png)
+2.  引擎连接器,简称EC,是用户作业的实际执行单元。同时,EC作为资源的实际使用者,负责向RM上报实际使用资源。每一个EC,均在RM中有一条对应的资源记录:在启动过程中,体现为锁定资源;在运行过程中,体现为已使用资源;在被结束之后,该资源记录随之被删除。
+![04](../../../Images/Architecture/rm-04.png)
+### 资源的类型与格式
+![05](../../../Images/Architecture/rm-05.png)
+如上图所示,所有的资源类均实现一个顶层的Resource接口,该接口定义了所有资源类均需要支持的计算和比较的方法,并进行相应的数学运算符的重载,使得资源之间能够像数字一样直接被计算和比较。
+
+| 运算符 | 对应方法    | 运算符 | 对应方法    |
+|--------|-------------|--------|-------------|
+| \+     | add         | \>     | moreThan    |
+| \-     | minus       | \<     | lessThan    |
+| \*     | multiply    | =      | equals      |
+| /      | divide      | \>=    | notLessThan |
+| \<=    | notMoreThan |        |             |
+
+当前支持的资源类型如下表所示,所有的资源都有对应的json序列化与反序列化方法,能够通过json格式进行存储和在网络间传递:
+
+| 资源类型              | 描述                                                   |
+|-----------------------|--------------------------------------------------------|
+| MemoryResource        | 内存资源                                               |
+| CPUResource           | CPU资源                                                |
+| LoadResource          | 同时具备内存与CPU的资源                                |
+| YarnResource          | Yarn队列资源(队列,队列内存,队列CPU,队列实例数)    |
+| LoadInstanceResource  | 服务器资源(内存,CPU,实例数)                        |
+| DriverAndYarnResource | 驱动器与执行器资源(同时具备服务器资源,Yarn队列资源) |
+| SpecialResource       | 其它自定义资源                                         |
+
+### 可用资源管理
+
+RM中的可用资源,主要有两个来源:ECM上报的可用资源,以及Configuration模块中根据标签配置的资源限制。  
+**ECM资源上报**:
+
+1.  ECM启动时,会广播ECM注册的消息,RM接收到消息后,根据消息中包含的内容进行资源注册,资源相关的内容包括:
+
+    1.  总资源:该ECM能够提供的资源总数。
+
+    2.  保护资源:当剩余资源小于该资源时,不再允许继续分配资源。
+
+    3.  资源类型:如LoadResource,DriverAndYarnResource等类型名称。
+
+    4.  实例信息:机器名加端口名。
+
+2.  RM在收到资源注册请求后,在资源表中新增一条记录,内容与接口的参数信息一致,并通过实例信息找到代表该ECM的标签,在资源、标签关联表中新增一条关联记录。
+
+3.  ECM在关闭时,会广播ECM关闭的消息,RM接收到消息后,根据消息中的ECM实例信息来进行资源的下线,即删除该ECM实例标签对应的资源和关联记录。
+
+**Configuration模块标签资源配置**:
+
+用户能够在Configuration模块中,根据不同的标签组合进行资源数量限制的配置,如限制User/Creator/EngineType组合的最大可用资源。
+
+RM通过RPC消息,以组合标签为查询条件,向Configuration模块查询资源信息,并转换成Resource对象参与后续的比较和记录。
+
+
+### 资源使用管理
+
+**接收用户的资源申请。**
+
+1.  LinkisManager在收到启动EngineConn的请求时,会调用RM的资源申请接口,进行资源申请。资源申请接口接受一个可选的时间参数,当申请资源的等待时间超出该时间参数的限制时,该资源申请将自动作为失败处理。
+
+**判断是否有足够的资源**
+
+即为判断剩余可用资源是否大于申请资源,如果大于或等于,则资源充足;否则资源不充足。
+
+1.  RM预处理资源申请中附带的标签信息,根据规则将原始的标签进行过滤、组合和转换等操作(如将User/Creator标签和EngineType标签进行组合),这使得后续的资源判断的粒度更加灵活多变。
+
+2.  在每个转换后的标签上逐一加锁,使得它们所对应的资源记录在资源申请的处理期间保持不变。
+
+3.  根据每个标签:
+
+    1.  通过Persistence模块从数据库中查询对应的资源记录,如果该记录包含剩余可用资源,则直接用来比较。
+
+    2.  如果没有直接的剩余可用资源记录,则通过[剩余可用资源=最大可用资源-已用资源-已锁定资源-保护资源]公式进行计算得出。
+
+    3.  如果没有最大可用资源记录,则请求Configuration模块,看是否有配置的资源信息,如果有则使用到公式中进行计算,如果没有则跳过针对这个标签的资源判断。
+
+    4.  如果没有任何资源记录,则跳过针对这个标签的资源判断。
+
+4.  只要有一个标签被判断为资源不充足,则资源申请失败,对每个标签逐一解锁。
+
+5.  只有所有标签都判断为资源充足的情况下,才成功通过资源申请,进入下一步。
+
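+上文第 3 步中 [剩余可用资源=最大可用资源-已用资源-已锁定资源-保护资源] 的判断逻辑,可以用下面的示意代码表达(并非 Linkis 源码,这里把资源简化成以 MB 计的内存数值,仅演示判断思路):
+
+```java
+public class ResourceCheckSketch {
+
+    // 剩余可用资源 = 最大可用资源 - 已用资源 - 已锁定资源 - 保护资源,剩余大于等于申请资源则视为充足
+    public static boolean isEnough(long maxResource, long usedResource,
+                                   long lockedResource, long protectedResource,
+                                   long requestResource) {
+        long leftResource = maxResource - usedResource - lockedResource - protectedResource;
+        return leftResource >= requestResource;
+    }
+
+    public static void main(String[] args) {
+        // 例:最大 100G、已用 40G、锁定 20G、保护 10G,本次申请 20G,剩余 30G,资源充足
+        System.out.println(isEnough(102400, 40960, 20480, 10240, 20480)); // true
+    }
+}
+```
+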
+**锁定申请通过的资源**
+
+1.  根据申请通过的资源数量,在资源表中生成一条新的记录,并与每个标签进行关联。
+
+2.  如果对应的标签有剩余可用资源记录,则扣减对应的数量。
+
+3.  生成一个定时任务,在一定时间后检查这批锁定的资源是否被实际使用,如果超时未使用,则强制回收。
+
+4.  对每个标签进行解锁。
+
+**上报实际使用资源**
+
+1.  EngineConn启动后,广播资源使用消息。RM收到消息后,检查该EngineConn对应的标签是否有锁定资源记录,如果没有,则报错。
+
+2.  如果有锁定资源,则对该EngineConn有关联的所有标签进行加锁。
+
+3.  对每个标签,将对应的锁定资源记录转换为已使用资源记录。
+
+4.  解锁所有标签。
+
+**释放实际使用资源**
+
+1.  EngineConn结束生命周期后,广播资源回收消息。RM收到消息后,检查该EngineConn对应的标签是否有已使用资源记录。
+
+2.  如果有,则对该EngineConn有关联的所有标签进行加锁。
+
+3.  对每个标签,在已使用资源记录中减去对应的数量。
+
+4.  如果对应的标签有剩余可用资源记录,则增加对应的数量。
+
+5.  对每个标签解锁
+
+
+### 外部资源管理
+
+在RM中,为了更加灵活并有拓展性对资源进行分类,支持多集群的资源管控的同时,使得接入新的外部资源更加便利,在设计上进行了以下几点的考虑:
+
+1.  通过标签来对资源进行统一管理。资源注册后,与标签进行关联,使得资源的属性能够无限拓展。同时,资源申请也都带上标签,实现灵活的匹配。
+
+2.  将集群抽象成一个或多个标签,并在外部资源管理模块中维护每个集群标签对应的环境信息,实现动态的对接。
+
+3.  抽象出通用的外部资源管理模块,如需接入新的外部资源类型,只要实现固定的接口,即可将不同类型的资源信息转换为RM中的Resource实体,实现统一管理。
+![06](../../../Images/Architecture/rm-06.png)
+RM的其它模块,通过ExternalResourceService提供的接口来进行外部资源信息的获取。
+
+而ExternalResourceService通过资源类型和标签来获取外部资源的信息:
+
+1.  所有外部资源的类型、标签、配置等属性(如集群名称、Yarn的web
+    url、Hadoop版本等信息),都维护在linkis\_external\_resource\_provider表中。
+
+2.  针对每种资源类型,均有一个ExternalResourceProviderParser接口的实现,将外部资源的属性进行解析,将能够匹配到Label的信息转换成对应的Label,将能够作为参数去请求资源接口的都转换成params。最后构建成一个能够作为外部资源信息查询依据的ExternalResourceProvider实例。
+
+3.  根据ExternalResourceService方法的参数中的资源类型和标签信息,找到匹配的ExternalResourceProvider,根据其中的信息生成ExternalResourceRequest,正式调用外部资源提供的API,发起资源信息请求。
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/README.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/README.md
new file mode 100644
index 0000000..76ab242
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Computation_Governance_Services/README.md
@@ -0,0 +1,66 @@
+## **背景**
+
+**Linkis0.X的架构主要存在以下问题**
+
+1.核心处理流程和层级模块边界模糊
+
+-   Entrance 和 EngineManager 功能边界模糊
+
+-   任务提交执行主流程不够清晰
+
+-   扩展新引擎较麻烦,需要实现多个模块的代码
+
+-   只支持计算请求场景,存储请求场景和常驻服务模式(Cluster)难以支持
+
+2.更丰富强大计算治理功能需求
+
+-   计算任务管理策略支持度不够
+
+-   标签能力不够强大,制约计算策略和资源管理
+
+Linkis1.0计算治理服务的新架构可以很好的解决这些问题。
+
+## **架构图**
+![](../../Images/Architecture/linkis-computation-gov-01.png)
+
+**作业流程优化:**
+Linkis1.0将优化Job的整体执行流程,从提交 —\> 准备 —\>
+执行三个阶段,来全面升级Linkis的Job执行架构,如下图所示:
+
+![](../../Images/Architecture/linkis-computation-gov-02.png)
+
+## **架构说明**
+
+### 1、Entrance
+
+ Entrance作为计算类型任务的提交入口,提供任务的接收、调度和Job信息的转发能力,是从Linkis0.X的Entrance拆分出来的原生能力;
+ 
+ [进入Entrance架构设计](./Entrance/Entrance.md)
+
+### 2、Orchestrator
+
+ Orchestrator 作为准备阶段的入口,从 Linkis0.X 的 Entrance 继承了解析Job、申请Engine和提交执行的能力;同时,Orchestrator将提供强大的编排和计算策略能力,满足多活、主备、事务、重放、限流、异构和混算等多种应用场景的需求。
+ 
+ [进入Orchestrator架构设计](../Orchestrator/README.md)
+
+### 3、LinkisManager
+
+ LinkisManager作为Linkis的管理大脑,主要由AppManager、ResourceManager、LabelManager和EngineConnPlugin组成。
+ 
+ 1. ResourceManager 不仅具备 Linkis0.X 对 Yarn 和 Linkis EngineManager的资源管理能力,还将提供基于标签的多级资源分配和回收能力,让ResourceManager具备跨集群、跨计算资源类型的全资源管理能力;
+ 2. AppManager 将统筹管理所有的 EngineConnManager 和 EngineConn,EngineConn 的申请、复用、创建、切换、销毁等生命周期全交予 AppManager 进行管理;而 LabelManager 将基于多级组合标签,提供跨IDC、跨集群的 EngineConn 和 EngineConnManager 路由和管控能力;
+ 3. EngineConnPlugin 主要用于降低新计算存储的接入成本,真正做到让用户只需要实现一个类,就能接入一个全新的计算存储引擎。
+
+ [进入LinkisManager架构设计](./LinkisManager/README.md)
+
+### 4、EngineConnManager
+
+ EngineConnManager (简称ECM)是 Linkis0.X EngineManager 的精简升级版。Linkis1.0下的ECM去除了引擎的申请能力,整个微服务完全无状态,将聚焦于支持各类 EngineConn 的启动和销毁。
+ 
+ [进入EngineConnManager架构设计](./EngineConnManager/README.md)
+
+### 5、EngineConn
+
+EngineConn 是 Linkis0.X Engine 的优化升级版本,将提供 EngineConn 和 Executor 两大模块,其中 EngineConn 用于连接底层的计算存储引擎,提供一个打通了底层各计算存储引擎的 Session 会话;Executor 则基于这个 Session 会话,提供交互式计算、流式计算、离线计算、数据存储的全栈计算能力支持。
+
+[进入EngineConn架构设计](./EngineConn/README.md)
diff --git "a/Linkis-Doc-master/zh_CN/Architecture_Documents/EngineConn\346\226\260\345\242\236\346\265\201\347\250\213.md" "b/Linkis-Doc-master/zh_CN/Architecture_Documents/EngineConn\346\226\260\345\242\236\346\265\201\347\250\213.md"
new file mode 100644
index 0000000..7be886a
--- /dev/null
+++ "b/Linkis-Doc-master/zh_CN/Architecture_Documents/EngineConn\346\226\260\345\242\236\346\265\201\347\250\213.md"
@@ -0,0 +1,111 @@
+# EngineConn新增流程
+
+EngineConn的新增,是Linkis计算治理的计算任务准备阶段的核心流程之一。它主要包括了Client端(Entrance或用户客户端)向LinkisManager发起一个新增EngineConn的请求,LinkisManager为用户按需、按标签规则,向EngineConnManager发起一个启动EngineConn的请求,并等待EngineConn启动完成后,将可用的EngineConn返回给Client的整个流程。
+
+如下图所示,接下来我们来详细说明一下整个流程:
+
+![EngineConn新增流程](../Images/Architecture/EngineConn新增流程/EngineConn新增流程.png)
+
+## 一、LinkisManager接收客户端请求
+
+**名词解释**:
+
+- LinkisManager:是Linkis计算治理能力的管理中枢,主要的职责为:
+  1. 基于多级组合标签,为用户提供经过复杂路由、资源管控和负载均衡后的可用EngineConn;
+  
+  2. 提供EC和ECM的全生命周期管理能力;
+  
+  3. 为用户提供基于多级组合标签的多Yarn集群资源管理功能。主要分为 AppManager(应用管理器)、ResourceManager(资源管理器)、LabelManager(标签管理器)三大模块,能够支持多活部署,具备高可用、易扩展的特性。
+
+&nbsp;&nbsp;&nbsp;&nbsp;AM模块接收到Client的新增EngineConn请求后,首先会对请求做参数校验,判断请求参数的合法性;其次是通过复杂规则选中一台最合适的EngineConnManager(ECM),以用于后面的EngineConn启动;接下来会向RM申请启动该EngineConn需要的资源;最后是向ECM请求创建EngineConn。
+
+下面将对四个步骤进行详细说明。
+
+### 1. 请求参数校验
+
+&nbsp;&nbsp;&nbsp;&nbsp;AM模块在接受到引擎创建请求后首先会做参数判断,首先会做请求用户和创建用户的权限判断,接着会对请求带上的Label进行检查。因为在AM后续的创建流程当中,Label会用来查找ECM和进行资源信息记录等,所以需要保证拥有必须的Label,现阶段一定需要带上的Label有UserCreatorLabel(例:hadoop-IDE)和EngineTypeLabel(例:spark-2.4.3)。
+
+### 2. EngineConnManager(ECM)选择
+
+&nbsp;&nbsp;&nbsp;&nbsp;ECM选择主要是完成通过客户端传递过来的Label去选择一个合适的ECM服务去启动EngineConn。这一步中首先会通过LabelManager去通过客户端传递过来的Label去注册的ECM中进行查找,通过按照标签匹配度进行顺序返回。在获取到注册的ECM列表后,会对这些ECM进行规则选择,现阶段已经实现有可用性检查、资源剩余、机器负载等规则。通过规则选择后,会将标签最匹配、资源最空闲、负载低的ECM进行返回。
+
+### 3. EngineConn资源申请
+
+1. 在获取到分配的ECM后,AM接着会通过调用EngineConnPluginServer服务请求本次客户端的引擎创建请求会使用多少的资源,这里会通过封装资源请求,主要包含Label、Client传递过来的EngineConn的启动参数、以及从Configuration模块获取到用户配置参数,通过RPC调用ECP服务去获取本次的资源信息。
+
+2. EngineConnPluginServer服务在接收到资源请求后,会先通过传递过来的标签找到对应的引擎标签,通过引擎标签选择对应引擎的EngineConnPlugin。然后通过EngineConnPlugin的资源生成器,对客户端传入的引擎启动参数进行计算,算出本次申请新EngineConn所需的资源,然后返回给LinkisManager。
+   
+   **名词解释:**
+- EngineConnPlugin:是Linkis对接一个新的计算存储引擎必须要实现的接口,该接口主要包含了这种EngineConn在启动过程中必须提供的几个接口能力,包括EngineConn资源生成器、EngineConn启动命令生成器、EngineConn引擎连接器。具体的实现可以参考Spark引擎的实现类:[SparkEngineConnPlugin](https://github.com/WeBankFinTech/Linkis/blob/master/linkis-engineconn-plugins/engineconn-plugins/spark/src/main/scala/com/webank/wedatasphere/linkis/engineplugin/spark/SparkEngineConnPlugin.scala)。
+
+- EngineConnPluginServer:是加载了所有的EngineConnPlugin,对外提供EngineConn的所需资源生成能力和EngineConn的启动命令生成能力的微服务。
+
+- EngineConnPlugin资源生成器(EngineConnResourceFactory):通过传入的参数,计算出本次EngineConn启动时需要的总资源。
+
+- EngineConn启动命令生成器(EngineConnLaunchBuilder):通过传入的参数,生成该EngineConn的启动命令,以提供给ECM去启动引擎。
+3. AM在获取到引擎资源后,会接着调用RM服务去申请资源,RM服务会通过传入的Label、ECM、本次申请的资源,去进行资源判断。首先会判断客户端对应Label的资源是否足够,然后再会判断ECM服务的资源是否足够,如果资源足够,则本次资源申请通过,并对对应的Label进行资源的加减。
+
+### 4. 请求ECM创建引擎
+
+1. 在完成引擎的资源申请后,AM会封装引擎启动的请求,通过RPC发送给对应的ECM进行服务启动,并获取到EngineConn的实例对象;
+2. AM接着会去通过EngineConn的上报信息判断EngineConn是否启动成功变成可用状态,如果是就会将结果进行返回,本次新增引擎的流程也就结束。
+
+## 二、 ECM启动EngineConn
+
+名词解释:
+
+- EngineConnManager(ECM):EngineConn的管理器,提供引擎的生命周期管理,同时向RM汇报负载信息和自身的健康状况。
+
+- EngineConnBuildRequest:LinkisManager传递给ECM的启动引擎命令,里面封装了该引擎的所有标签信息、所需资源和一些参数配置信息。
+
+- EngineConnLaunchRequest:包含了启动一个EngineConn所需的BML物料、环境变量、ECM本地必需环境变量、启动命令等信息,让ECM可以依此构建出一个完整的EngineConn启动脚本。
+
+ECM接收到LinkisManager传递过来的EngineConnBuildRequest命令后,主要分为三步来启动EngineConn:1. 请求EngineConnPluginServer,获取EngineConnPluginServer封装出的EngineConnLaunchRequest;2. 解析EngineConnLaunchRequest,封装成EngineConn启动脚本;3. 执行启动脚本,启动EngineConn。
+
+### 2.1 EngineConnPluginServer封装EngineConnLaunchRequest
+
+通过EngineConnBuildRequest的标签信息,拿到实际需要启动的EngineConn类型和对应版本,从EngineConnPluginServer的内存中获取到该EngineConn类型的EngineConnPlugin,通过该EngineConnPlugin的EngineConnLaunchBuilder,将EngineConnBuildRequest转换成EngineConnLaunchRequest。
+
+### 2.2 封装EngineConn启动脚本
+
+ECM获取到EngineConnLaunchRequest之后,将EngineConnLaunchRequest中的BML物料下载到本地,并检查EngineConnLaunchRequest要求的本地必需环境变量是否存在,校验通过后,将EngineConnLaunchRequest封装成一个EngineConn启动脚本。
+
+### 2.3 执行启动脚本
+
+目前ECM只对Unix系统做了Bash命令的支持,即只支持Linux系统执行该启动脚本。
+
+启动前,会通过sudo命令,切换到对应的请求用户去执行该脚本,确保启动用户(即JVM用户)为Client端的请求用户。
+
+执行该启动脚本后,ECM会实时监听脚本的执行状态和执行日志,一旦执行状态返回非0,则立马向LinkisManager汇报EngineConn启动失败,整个流程完成;否则则一直监听启动脚本的日志和状态,直到该脚本执行完成。
+
+## 三、EngineConn初始化
+
+ECM执行了EngineConn的启动脚本后,EngineConn微服务正式启动。
+
+名词解释:
+
+- EngineConn微服务:指包含了一个EngineConn、一个或多个Executor,用于对计算任务提供计算能力的实际微服务。我们说的新增一个EngineConn,其实指的就是新增一个EngineConn微服务。
+
+- EngineConn:引擎连接器,是与底层计算存储引擎的实际连接单元,包含了与实际引擎的会话信息。它与Executor的差别,是EngineConn只是起到一个连接、一个客户端的作用,并不真正的去执行计算。如SparkEngineConn,其会话信息为SparkSession。
+
+- Executor:执行器,作为真正的计算存储场景执行器,是实际的计算存储逻辑执行单元,对EngineConn各种能力的具体抽象,提供交互式执行、订阅式执行、响应式执行等多种不同的架构能力。
+
+EngineConn微服务的初始化一般分为三个阶段:
+
+1. 初始化具体引擎的EngineConn。先通过Java main方法的命令行参数,封装出一个包含了相关标签信息、启动信息和参数信息的EngineCreationContext,通过EngineCreationContext初始化EngineConn,完成EngineConn与底层Engine的连接建立,如:SparkEngineConn会在该阶段初始化一个SparkSession,用于与一个Spark application建立了连通关系。
+
+2. 初始化Executor。EngineConn初始化之后,接下来会根据实际的使用场景,初始化对应的Executor,为接下来的用户使用,提供服务能力。比如:交互式计算场景的SparkEngineConn,会初始化一系列可以用于提交执行SQL、PySpark、Scala代码能力的Executor,支持Client往该SparkEngineConn提交执行SQL、PySpark、Scala等代码。
+
+3. 定时向LinkisManager汇报心跳,并等待EngineConn结束退出。当EngineConn对应的底层引擎异常、或是超过最大空闲时间、或是Executor执行完成、或是用户手动kill时,该EngineConn自动结束退出。
+
+----
+
+到了这里,EngineConn的新增流程就基本结束了,最后我们再来总结一下EngineConn的新增流程:
+
+- 客户端向LinkisManager发起新增EngineConn的请求;
+
+- LinkisManager校验参数合法性,先是根据标签选择合适的ECM,再根据用户请求确认本次新增EngineConn所需的资源,向LinkisManager的RM模块申请资源,申请通过后要求ECM按要求启动一个新的EngineConn;
+
+- ECM先请求EngineConnPluginServer获取一个包含了启动一个EngineConn所需的BML物料、环境变量、ECM本地必需环境变量、启动命令等信息的EngineConnLaunchRequest,然后封装出EngineConn的启动脚本,最后执行启动脚本,启动该EngineConn;
+
+- EngineConn初始化具体引擎的EngineConn,然后根据实际的使用场景,初始化对应的Executor,为接下来的用户使用,提供服务能力。最后定时向LinkisManager汇报心跳,等待正常结束或被用户终止。
diff --git "a/Linkis-Doc-master/zh_CN/Architecture_Documents/Job\346\217\220\344\272\244\345\207\206\345\244\207\346\211\247\350\241\214\346\265\201\347\250\213.md" "b/Linkis-Doc-master/zh_CN/Architecture_Documents/Job\346\217\220\344\272\244\345\207\206\345\244\207\346\211\247\350\241\214\346\265\201\347\250\213.md"
new file mode 100644
index 0000000..a166df4
--- /dev/null
+++ "b/Linkis-Doc-master/zh_CN/Architecture_Documents/Job\346\217\220\344\272\244\345\207\206\345\244\207\346\211\247\350\241\214\346\265\201\347\250\213.md"
@@ -0,0 +1,165 @@
+# Job提交准备执行流程
+
+计算任务(Job)的提交执行是Linkis提供的核心能力,它几乎串通了Linkis计算治理架构中的所有模块,在Linkis之中占据核心地位。
+
+我们将用户的计算任务从客户端提交开始,到最后的返回结果为止,整个流程分为三个阶段:提交 -> 准备 -> 执行,如下图所示:
+
+![计算任务整体流程图](../Images/Architecture/Job提交准备执行流程/计算任务整体流程图.png)
+
+其中:
+
+- Entrance作为提交阶段的入口,提供任务的接收、调度和Job信息的转发能力,是所有计算型任务的统一入口,它将把计算任务转发给Orchestrator进行编排和执行;
+
+- Orchestrator作为准备阶段的入口,主要提供了Job的解析、编排和执行能力。
+
+- Linkis Manager:是计算治理能力的管理中枢,主要的职责为:
+  
+  1. ResourceManager:不仅具备对Yarn和Linkis EngineConnManager的资源管理能力,还将提供基于标签的多级资源分配和回收能力,让ResourceManager具备跨集群、跨计算资源类型的全资源管理能力;
+  
+  2. AppManager:统筹管理所有的EngineConnManager和EngineConn,包括EngineConn的申请、复用、创建、切换、销毁等生命周期全交予AppManager进行管理;
+  
+  3. LabelManager:将基于多级组合标签,为跨IDC、跨集群的EngineConn和EngineConnManager路由和管控能力提供标签支持;
+  
+  4. EngineConnPluginServer:对外提供启动一个EngineConn的所需资源生成能力和EngineConn的启动命令生成能力。
+
+- EngineConnManager:是EngineConn的管理器,提供引擎的生命周期管理,同时向RM汇报负载信息和自身的健康状况。
+
+- EngineConn:是Linkis与底层计算存储引擎的实际连接器,用户所有的计算存储任务最终都会交由EngineConn提交给底层计算存储引擎。根据用户的不同使用场景,EngineConn提供了交互式计算、流式计算、离线计算、数据存储任务的全栈计算能力框架支持。
+
+接下来,我们将详细介绍计算任务从 提交 -> 准备 -> 执行 的三个阶段。
+
+## 一、提交阶段
+
+提交阶段主要是Client端 -> Linkis Gateway -> Entrance的交互,其流程如下:
+
+![提交阶段流程图](../Images/Architecture/Job提交准备执行流程/提交阶段流程图.png)
+
+1. 首先,Client(如前端或客户端)发起Job请求,Job请求信息精简如下(关于Linkis的具体使用方式,请参考 [如何使用Linkis](https://github.com/WeBankFinTech/Linkis-Doc/blob/master/zh_CN/User_Manual/How_To_Use_Linkis.md)):
+
+```
+POST /api/rest_j/v1/entrance/submit
+```
+
+```json
+{
+    "executionContent": {"code": "show tables", "runType": "sql"},
+    "params": {"variable": {}, "configuration": {}},  //非必须
+    "source": {"scriptPath": "file:///1.hql"}, //非必须,仅用于记录代码来源
+    "labels": {
+        "engineType": "spark-2.4.3",  //指定引擎
+        "userCreator": "johnnwnag-IDE"  // 指定提交用户和提交系统
+    }
+}
+```
+
+2. Linkis-Gateway接收到请求后,根据URI ``/api/rest_j/v1/${serviceName}/.+``中的serviceName,确认路由转发的微服务名,这里Linkis-Gateway会解析出微服务名为entrance,将Job请求转发给Entrance微服务。需要说明的是:如果用户指定了路由标签,则在转发时,会根据路由标签选择打了相应标签的Entrance微服务实例进行转发,而不是随机转发。
+
+3. Entrance接收到Job请求后,会先简单校验请求的合法性,然后通过RPC调用JobHistory对Job的信息进行持久化,然后将Job请求封装为一个计算任务,放入到调度队列之中,等待被消费线程消费。
+
+4. 调度队列会为每个组开辟一个消费队列和一个消费线程,消费队列用于存放已经初步封装的用户计算任务,消费线程则按照FIFO的方式,不断从消费队列中取出计算任务进行消费。目前默认的分组方式为 Creator + User(即提交系统 + 用户),因此,即便是同一个用户,只要是不同的系统提交的计算任务,其实际的消费队列和消费线程都完全不同,完全隔离互不影响。(温馨提示:用户可以按需修改分组算法,分组与队列的基本思路可参考本节末尾的示意代码)
+
+5. 消费线程取出计算任务后,会将计算任务提交给Orchestrator,由此正式进入准备阶段。
+
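+分组与 FIFO 消费队列的基本思路,可以用下面的示意代码表达(并非 Linkis 源码,仅按上文第 4 步的描述做演示,类名与数据结构均为假设):
+
+```java
+import java.util.Map;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.LinkedBlockingQueue;
+
+public class GroupQueueSketch {
+
+    // 每个分组对应一个 FIFO 消费队列,由该分组的消费线程依次取出任务进行消费
+    private final Map<String, BlockingQueue<Runnable>> groupQueues = new ConcurrentHashMap<>();
+
+    // 默认分组方式:Creator + User(提交系统 + 用户)
+    private String groupName(String creator, String user) {
+        return creator + "_" + user;
+    }
+
+    public void submit(String creator, String user, Runnable job) {
+        groupQueues.computeIfAbsent(groupName(creator, user), k -> new LinkedBlockingQueue<>()).offer(job);
+    }
+}
+```
+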
+## 二、 准备阶段
+
+准备阶段主要有两个流程,一是向LinkisManager申请一个可用的EngineConn,用于接下来的计算任务提交执行,二是Orchestrator对Entrance提交过来的计算任务进行编排,将一个用户计算请求,通过编排转换成一个物理执行树,然后交给第三阶段的执行阶段去真正提交执行。
+
+#### 2.1 向LinkisManager申请可用EngineConn
+
+如果在LinkisManager中,该用户存在可复用的EngineConn,则直接锁定该EngineConn,并返回给Orchestrator,整个申请流程结束。
+
+如何定义可复用EngineConn?指能匹配计算任务的所有标签要求的,且EngineConn本身健康状态为Healthy(负载低且实际EngineConn状态为Idle)的,然后再按规则对所有满足条件的EngineConn进行排序选择,最终锁定一个最佳的EngineConn。
+
+如果该用户不存在可复用的EngineConn,则此时会触发EngineConn新增流程,关于EngineConn新增流程,请参考:[EngineConn新增流程](EngineConn新增流程.md) 。
+
+#### 2.2 计算任务编排
+
+Orchestrator主要负责将一个计算任务(JobReq),编排成一棵可以真正执行的物理执行树(PhysicalTree),并提供Physical树的执行能力。
+
+这里先重点介绍Orchestrator的计算任务编排能力,如下图:
+
+![编排流程图](../Images/Architecture/Job提交准备执行流程/编排流程图.png)
+
+其主要流程如下:
+
+- Converter(转换):完成对用户提交的JobReq(任务请求)转换为Orchestrator的ASTJob,该步骤会对用户提交的计算任务进行参数检查和信息补充,如变量替换等;
+
+- Parser(解析):完成对ASTJob的解析,将ASTJob拆成由ASTJob和ASTStage组成的一棵AST树。
+
+- Validator(校验): 完成对ASTJob和ASTStage的检验和信息补充,如代码检查、必须的Label信息补充等。
+
+- Planner(计划):将一棵AST树转换为一棵Logical树。此时的Logical树已经由LogicalTask组成,包含了整个计算任务的所有执行逻辑。
+
+- Optimizer(优化阶段):将一棵Logical树转换为Physical树,并对Physical树进行优化。
+
+一棵Physical树,其中的很多节点都是计算策略逻辑,只有中间的ExecTask,才真正封装了将用户计算任务提交给EngineConn进行提交执行的执行逻辑。如下图所示:
+
+![Physical树](../Images/Architecture/Job提交准备执行流程/Physical树.png)
+
+不同的计算策略,其Physical树中的JobExecTask 和 StageExecTask所封装的执行逻辑各不相同。
+
+如多活计算策略下,用户提交的一个计算任务,其提交给不同集群的EngineConn进行执行的执行逻辑封装在了两个ExecTask中,而相关的多活策略逻辑则体现在了两个ExecTask的父节点StageExecTask(End)之中。
+
+这里举多活计算策略下的多读场景。
+
+多读时,实际只要求一个ExecTask返回结果,该Physical树就可以标记为执行成功并返回结果了,但Physical树只具备按依赖关系进行依次执行的能力,无法终止某个节点的执行,且一旦某个节点被取消执行或执行失败,则整个Physical树其实会被标记为执行失败,这时就需要StageExecTask(End)来做一些特殊的处理,来保证既可以取消另一个ExecTask,又能把执行成功的ExecTask所产生的结果集继续往上传,让Physical树继续往上执行。这就是StageExecTask所代表的计算策略执行逻辑。
+
+Linkis Orchestrator的编排流程与很多SQL解析引擎(如Spark、Hive的SQL解析器)存在相似的地方,但实际上,Linkis Orchestrator是面向计算治理领域针对用户不同的计算治理需求,而实现的解析编排能力,而SQL解析引擎是面向SQL语言的解析编排。这里做一下简单区分:
+
+1. Linkis Orchestrator主要想解决的,是不同计算任务对计算策略所引发出的编排需求。如:用户想具备多活的能力,则Orchestrator会为用户提交的一个计算任务,基于“多活”的计算策略需求,编排出一棵Physical树,从而做到往多个集群去提交执行这个计算任务,并且在构建整个Physical树的过程中,已经充分考虑了各种可能存在的异常场景,并都已经体现在了Physical树中。
+
+2. Linkis Orchestrator的编排能力与编程语言无关,理论上只要是Linkis已经对接的引擎,其支持的所有编程语言都支持编排;而SQL解析引擎只关心SQL的解析和执行,只负责将一条SQL解析成一颗可执行的Physical树,最终计算出结果。
+
+3. Linkis Orchestrator也具备对SQL的解析能力,但SQL解析只是Orchestrator Parser针对SQL这种编程语言的其中一种解析实现。Linkis Orchestrator的Parser也考虑引入Apache Calcite对SQL进行解析,支持将一条跨多个计算引擎(必须是Linkis已经对接的计算引擎)的用户SQL,拆分成多条子SQL,在执行阶段时分别提交给对应的计算引擎进行执行,最后选择一个合适的计算引擎进行汇总计算。
+
+关于Orchestrator的编排详细介绍,请参考:[Orchestrator架构设计](https://github.com/WeBankFinTech/Linkis-Doc/blob/master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_architecture_doc.md)
+
+经过了Linkis Orchestrator的解析编排后,用户的计算任务已经转换成了一颗可被执行的Physical树。Orchestrator会将该Physical树提交给Orchestrator的Execution模块,进入最后的执行阶段。
+
+## 三、执行阶段
+
+执行阶段主要分为如下两步,这两步是Linkis Orchestrator提供的最后两阶段的能力:
+
+![执行阶段流程图](../Images/Architecture/Job提交准备执行流程/执行阶段流程图.png)
+
+其主要流程如下:
+
+- Execution(执行):解析Physical树的依赖关系,按照依赖从叶子节点开始依次执行。
+
+- Reheater(再热):一旦Physical树有节点执行完成,都会触发一次再热。再热允许依照Physical树的实时执行情况,动态调整Physical树,继续进行执行。如:检测到某个叶子节点执行失败,且该叶子节点支持重试(如失败原因是抛出了ReTryExecption),则自动调整Physical树,在该叶子节点上面添加一个内容完全相同的重试父节点。
+
+我们回到Execution阶段,这里重点介绍封装了将用户计算任务提交给EngineConn的ExecTask节点的执行逻辑。
+
+1. 前面有提到,准备阶段的第一步,就是向LinkisManager获取一个可用的EngineConn,ExecTask拿到这个EngineConn后,会通过RPC请求,将用户的计算任务提交给EngineConn。
+
+2. EngineConn接收到计算任务之后,会通过线程池异步提交给底层的计算存储引擎,然后马上返回一个执行ID。
+
+3. ExecTask拿到这个执行ID后,后续可以通过该执行ID异步去拉取计算任务的执行情况(如:状态、进度、日志、结果集等)。
+
+4. 同时,EngineConn会通过注册的多个Listener,实时监听底层计算存储引擎的执行情况。如果该计算存储引擎不支持注册Listener,则EngineConn会为计算任务启动守护线程,定时向计算存储引擎拉取执行情况。
+
+5. EngineConn将拉取到的执行情况,通过RPC请求,实时传回Orchestrator所在的微服务。
+
+6. 该微服务的Receiver接收到执行情况后,会通过ListenerBus进行广播,Orchestrator的Execution消费该事件并动态更新Physical树的执行情况。
+
+7. 计算任务所产生的结果集,会在EngineConn端就写入到HDFS等存储介质之中。EngineConn通过RPC传回的只是结果集路径,Execution消费事件,并将获取到的结果集路径通过ListenerBus进行广播,使Entrance向Orchestrator注册的Listener能消费到该结果集路径,并将结果集路径写入持久化到JobHistory之中。
+
+8. EngineConn端的计算任务执行完成后,通过同样的逻辑,会触发Execution更新Physical树该ExecTask节点的状态,使得Physical树继续往上执行,直到整棵树全部执行完成。这时Execution会通过ListenerBus广播计算任务执行完成的状态。
+
+9. Entrance向Orchestrator注册的Listener消费到该状态事件后,向JobHistory更新Job的状态,整个任务执行完成。
+
+----
+
+最后,我们再来看下Client端是如何得知计算任务状态变化,并及时获取到计算结果的,具体如下图所示:
+
+![结果获取流程](../Images/Architecture/Job提交准备执行流程/结果获取流程.png)
+
+具体流程如下:
+
+1. Client端定时轮询请求Entrance,获取计算任务的状态。
+
+2. 一旦发现状态翻转为成功,则向JobHistory发送获取Job信息的请求,拿到所有的结果集路径
+
+3. 通过结果集路径向PublicService发起查询文件内容的请求,获取到结果集的内容。
+
+自此,整个Job的提交 -> 准备 -> 执行 三个阶段全部完成。
diff --git "a/Linkis-Doc-master/zh_CN/Architecture_Documents/Linkis1.0\344\270\216Linkis0.X\347\232\204\345\214\272\345\210\253\347\256\200\350\277\260.md" "b/Linkis-Doc-master/zh_CN/Architecture_Documents/Linkis1.0\344\270\216Linkis0.X\347\232\204\345\214\272\345\210\253\347\256\200\350\277\260.md"
new file mode 100644
index 0000000..78d2d9d
--- /dev/null
+++ "b/Linkis-Doc-master/zh_CN/Architecture_Documents/Linkis1.0\344\270\216Linkis0.X\347\232\204\345\214\272\345\210\253\347\256\200\350\277\260.md"
@@ -0,0 +1,98 @@
+## 1. 简述
+
+&nbsp;&nbsp;&nbsp;&nbsp;  首先,Linkis1.0 架构下的 Entrance 和 EngineConnManager(原EngineManager)服务与 **引擎** 已完全无关,即:在 Linkis1.0 架构下,每个引擎无需再配套实现并启动对应的 Entrance 和 EngineConnManager,Linkis1.0 的每个 Entrance 和 EngineConnManager 都可以给所有引擎共用。
+
+&nbsp;&nbsp;&nbsp;&nbsp;  其次,Linkis1.0 新增了Linkis-Manager服务用于对外提供 AppManager(应用管理)、ResourceManager(资源管理,原ResourceManager服务)和 LabelManager(标签管理)的能力。
+
+&nbsp;&nbsp;&nbsp;&nbsp;  然后,为了降低大家实现和部署一个新引擎的难度,Linkis 1.0 重新架构了一个叫 EngineConnPlugin 的模块,每个新引擎只需要实现 EngineConnPlugin 接口即可,
+Linkis EngineConnPluginServer 支持以插件的形式动态加载 EngineConnPlugin(新引擎),一旦 EngineConnPluginServer 加载成功,EngineConnManager 便可为用户快速启动一个该引擎实例。
+                          
+&nbsp;&nbsp;&nbsp;&nbsp;  最后,对Linkis的所有微服务进行了归纳分类,总体分为了三个大层次:公共增强服务、计算治理服务和微服务治理服务,从代码层级结构、微服务命名和安装目录结构等多个方面来规范Linkis1.0的微服务体系。
+
+
+##  2. 主要特点
+
+1.  **强化计算治理**,Linkis1.0主要从引擎管理、标签管理、ECM管理和资源管理等几个方面,全面强化了计算治理的综合管控能力,基于标签化的强大管控设计理念,使得Linkis1.0向多IDC化、多集群化、多容器化,迈出了坚实的一大步。
+
+2.  **简化用户实现新引擎**,EnginePlugin用于将原本实现一个新引擎,需要实现的相关接口和类,以及需要拆分的Entrance-EngineManager-Engine三层模块体系,融合到了一个接口之中,简化用户实现新引擎的流程和代码,真正做到只要实现一个类,就能接入一个新引擎。
+
+3.  **全栈计算存储引擎支持**,实现对计算请求场景(如Spark)、存储请求场景(如HBase)和常驻集群型服务(如SparkStreaming)的全面覆盖支持。
+
+4.  **高级计算策略能力改进**,新增Orchestrator实现丰富计算任务管理策略,且支持基于标签的解析和编排。
+
+5.  **安装部署改进**  优化一键安装脚本,支持容器化部署,简化用户配置。
+
+## 3. 服务对比
+
+&nbsp;&nbsp;&nbsp;&nbsp;  请参考以下两张图:
+
+&nbsp;&nbsp;&nbsp;&nbsp;  Linkis0.X 微服务列表如下:
+
+![Linkis0.X服务列表](./../../en_US/Images/Architecture/Linkis0.X-services-list.png)
+
+&nbsp;&nbsp;&nbsp;&nbsp;  Linkis1.0 微服务列表如下:
+
+![Linkis1.0服务列表](./../../en_US/Images/Architecture/Linkis1.0-services-list.png)
+
+&nbsp;&nbsp;&nbsp;&nbsp;  从上面两个图中看,Linkis1.0 将服务分为了三类服务:计算治理(英文缩写CG)/微服务治理(MG)/公共增强服务(PS)。其中:
+
+1. 计算治理的一大变化是,Entrance 和 EngineConnManager服务与引擎再不相关,实现一个新引擎只需实现 EngineConnPlugin插件即可,EngineConnPluginServer会动态加载 EngineConnPlugin 插件,做到引擎热插拔式更新;
+
+2. 计算治理的另一大变化是,LinkisManager作为 Linkis 的管理大脑,抽象和定义了 AppManager(应用管理)、ResourceManager(资源管理)和LabelManager(标签管理);
+
+3. 微服务治理服务,将0.X部分的Eureka和Gateway服务进行了归并统一,并对Gateway服务进行了功能增强,支持按照Label进行路由转发;
+
+4. 公共增强服务,主要将0.X部分的BML服务/上下文服务/数据源服务/公共服务进行了优化和归并统一,便于大家管理和查看。
+
+## 4. Linkis Manager简介
+
+&nbsp;&nbsp;&nbsp;&nbsp;  Linkis Manager 作为 Linkis 的管理大脑,主要由 AppManager、ResourceManager 和 LabelManager 组成。
+
+&nbsp;&nbsp;&nbsp;&nbsp;  ResourceManager 不仅具备 Linkis0.X 对 Yarn 和 Linkis EngineManager 的资源管理能力,还将提供基于标签的多级资源分配和回收能力,让 ResourceManager 具备跨集群、跨计算资源类型的全资源管理能力;
+
+&nbsp;&nbsp;&nbsp;&nbsp;  AppManager 将统筹管理所有的 EngineConnManager 和 EngineConn,EngineConn 的申请、复用、创建、切换、销毁等生命周期全交予 AppManager进行管理;
+
+&nbsp;&nbsp;&nbsp;&nbsp;  而 LabelManager 将基于多级组合标签,提供跨IDC、跨集群的 EngineConn 和 EngineConnManager 路由和管控能力;
+
+## 5. Linkis EngineConnPlugin简介
+
+&nbsp;&nbsp;&nbsp;&nbsp;  EngineConnPlugin 主要用于降低新计算存储的接入和部署成本,真正做到让用户“只需实现一个类,就能接入一个全新计算存储引擎;只需执行一下脚本,即可快速部署一个全新引擎”。
+
+### 5.1 新引擎实现对比
+
+&nbsp;&nbsp;&nbsp;&nbsp;  以下是用户Linkis0.X实现一个新引擎需要实现的相关接口和类:
+
+![Linkis0.X 如何实现一个全新引擎](./../../en_US/Images/Architecture/Linkis0.X-NewEngine-architecture.png)
+
+&nbsp;&nbsp;&nbsp;&nbsp;  以下为Linkis1.0.0,实现一个新引擎,用户需实现的接口和类:
+
+![Linkis1.0 如何实现一个全新引擎](./../../en_US/Images/Architecture/Linkis1.0-NewEngine-architecture.png)
+
+&nbsp;&nbsp;&nbsp;&nbsp;  其中EngineConnResourceFactory和EngineLaunchBuilder为非必需实现接口,只有EngineConnFactory为必需实现接口。
+
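+下面用一段示意性的 Scala 代码说明“只需实现一个工厂类即可接入新引擎”的思路。注意:这里的 trait 定义与方法签名只是为了说明概念而做的假设(均以 Demo 前缀命名),与 Linkis 实际的 EngineConnFactory 接口并不相同,请以源码为准:
+
+```scala
+// 示意:接入一个新引擎时,开发者只需关心“如何根据启动参数创建引擎会话与执行器”
+// 注意:以下 trait 与方法签名均为假设(Demo 前缀),与 Linkis 真实接口不同
+trait DemoEngineConnFactory {
+  def createEngineConn(options: Map[String, String]): DemoEngineConn
+  def createExecutor(conn: DemoEngineConn): DemoExecutor
+}
+
+// 连接器:只持有与底层引擎的会话信息,不负责真正执行
+case class DemoEngineConn(sessionInfo: Map[String, String])
+
+// 执行器:真正执行计算逻辑的单元
+trait DemoExecutor { def execute(code: String): String }
+
+// 一个假设的 "Echo" 引擎实现,仅用于演示实现成本有多低
+class EchoEngineConnFactory extends DemoEngineConnFactory {
+  override def createEngineConn(options: Map[String, String]): DemoEngineConn =
+    DemoEngineConn(options)
+  override def createExecutor(conn: DemoEngineConn): DemoExecutor =
+    (code: String) => s"echo: $code"   // 以 SAM 语法实现 execute
+}
+```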
+### 5.2 新引擎启动流程
+
+&nbsp;&nbsp;&nbsp;&nbsp;  EngineConnPlugin 提供了 Server 服务,用于启动和加载所有的引擎插件,以下给出了一个新引擎启动,访问了 EngineConnPlugin-Server 的全部流程:
+
+![Linkis 引擎启动流程](./../../en_US/Images/Architecture/Linkis1.0-newEngine-initialization.png)
+
+## 6. Linkis EngineConn简介
+
+&nbsp;&nbsp;&nbsp;&nbsp;  EngineConn,即原 Engine 模块,作为 Linkis 与底层计算存储引擎进行连接和交互的实际单元,是 Linkis 提供计算存储能力的基础。
+
+&nbsp;&nbsp;&nbsp;&nbsp;  Linkis1.0 的 EngineConn 主要由 EngineConn 和 Executor构成。其中:
+
+a)	EngineConn 为连接器,包含引擎与具体集群的会话信息。它只是起到一个连接,一个客户端的作用,并不真正的去执行计算。
+
+b)	Executor 为执行器,作为真正的计算场景执行器,是实际的计算逻辑执行单元,也是对引擎各种具体能力的抽象,例如提供加锁、访问状态、获取日志等多种不同的服务。
+
+c)	Executor 通过 EngineConn 中的会话信息进行创建,一个引擎类型可以支持多种不同种类的计算任务,每种对应一个 Executor 的实现,计算任务将被提交到对应的 Executor 进行执行。
+这样,同一个引擎能够根据不同的计算场景提供不同的服务。比如常驻式引擎启动后不需要加锁,一次性引擎启动后不需要支持 Receiver 和访问状态等。
+
+d)	采用 Executor 和 EngineConn 分离的方式的好处是,可以避免 Receiver 耦合业务逻辑,本身只保留 RPC 通信功能。将服务分散在多个 Executor 模块中,并且抽象出交互式计算引擎、流式引擎、一次性引擎等几大类可能用到的引擎,构建成统一的引擎框架,便于后期的扩充。
+这样不同类型引擎可以根据需要分别加载其中需要的能力,大大减少引擎实现的冗余。
+
+&nbsp;&nbsp;&nbsp;&nbsp;  如下图所示:
+
+![Linkis EngineConn架构图](./../../en_US/Images/Architecture/Linkis1.0-EngineConn-architecture.png)
\ No newline at end of file
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Microservice_Governance_Services/Gateway.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Microservice_Governance_Services/Gateway.md
new file mode 100644
index 0000000..f84d9dd
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Microservice_Governance_Services/Gateway.md
@@ -0,0 +1,30 @@
+## Gateway 架构设计
+
+#### 简述
+Gateway网关是Linkis接收客户端以及外部请求的首要入口,例如接收作业执行请求,而后将执行请求转发到具体的符合条件的Entrance服务中去。
+整个架构底层基于SpringCloudGateway做扩展实现,上层叠加了Http请求解析、会话权限、标签路由和WebSocket多路转发等相关的模块设计,整体架构如下图所示。
+
+### 整体架构示意图
+
+![Gateway整体架构示意图](../../Images/Architecture/Gateway/gateway_server_global.png)
+
+#### 架构说明
+- gateway-core: Gateway的核心接口定义模块,主要定义了GatewayParser和GatewayRouter接口,分别对应请求的解析和根据请求进行路由选择;同时还提供了SecurityFilter的权限校验工具类。
+- spring-cloud-gateway: 该模块集成了所有与SpringCloudGateway相关的依赖,对HTTP和WebSocket两种协议类型的请求分别进行了处理转发。
+- gateway-server-support: Gateway的服务驱动模块,依赖spring-cloud-gateway模块,对GatewayParser、GatewayRouter分别做了实现,其中DefaultLabelGatewayRouter提供了请求标签路由的功能。
+- gateway-httpclient-support: 提供了以Http方式访问Gateway服务的客户端通用类,可以基于此做多种实现。
+- instance-label: 外联的实例标签模块,提供InsLabelService服务接口,用于路由标签的创建以及与应用实例关联。
+
+涉及的详细设计如下:
+
+#### 一、请求路由转发(带标签信息)
+请求的链路首先经SpringCloudGateway的Dispatcher分发后,进入网关的过滤器链表,进入GatewayAuthorizationFilter 和 SpringCloudGatewayWebsocketFilter 两大过滤器逻辑,过滤器集成了DefaultGatewayParser和DefaultGatewayRouter。
+从Parser到Router,执行相应的parse和route方法,DefaultGatewayParser和DefaultGatewayRouter内部还包含了自定义的Parser和Router,按照优先级顺序执行。最后由DefaultGatewayRouter输出路由选中的服务实例ServiceInstance,交由上层进行转发。
+现我们以具有标签信息的作业执行请求转发为例子,绘制如下流程图:  
+![Gateway请求路由转发](../../Images/Architecture/Gateway/gateway_server_dispatcher.png)
+
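+为帮助理解上图中“按标签筛选并选出服务实例”的过程,下面给出一个简化的 Scala 示意。其中 DemoServiceInstance、DemoLabelRouter 等类型均为演示用的假设,并非 SpringCloudGateway 或 Linkis Gateway 的真实类:
+
+```scala
+// 示意:服务实例与其携带的路由标签(类型为演示假设)
+case class DemoServiceInstance(app: String, host: String, labels: Set[String])
+
+// 简化版“标签路由”:先按目标应用名过滤,再按请求携带的标签筛选,最后做简单的确定性选取
+object DemoLabelRouter {
+  def route(instances: Seq[DemoServiceInstance],
+            targetApp: String,
+            requestLabels: Set[String]): Option[DemoServiceInstance] = {
+    val candidates = instances.filter(_.app == targetApp)
+    val labeled =
+      if (requestLabels.isEmpty) candidates
+      else candidates.filter(i => requestLabels.subsetOf(i.labels))
+    // 命中标签的实例优先;都未命中时退回全部候选,按 host 哈希简单选取一个
+    val pool = if (labeled.nonEmpty) labeled else candidates
+    pool.sortBy(_.host.hashCode).headOption
+  }
+}
+```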
+
+#### 二、WebSocket连接转发管理
+默认情况下SpringCloudGateway对WebSocket请求只做一次路由转发,无法做动态的切换,而在Linkis Gateway架构下,每次信息的交互都会附带相应的uri地址,引导路由到不同的后端服务。
+除了负责与前端、客户端连接的webSocketService以及负责和后台服务连接的webSocketClient, 中间会缓存一系列GatewayWebSocketSessionConnection列表,一个GatewayWebSocketSessionConnection代表一个session会话与多个后台ServiceInstance的连接。  
+![Gateway的WebSocket转发管理](../../Images/Architecture/Gateway/gatway_websocket.png)
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Microservice_Governance_Services/README.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Microservice_Governance_Services/README.md
new file mode 100644
index 0000000..a5bbc92
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Microservice_Governance_Services/README.md
@@ -0,0 +1,23 @@
+## **背景**
+
+微服务治理包含了Gateway、Eureka、Open Feign等三个主要的微服务。用来解决Linkis的服务发现与注册、统一网关、请求转发、服务间通信、负载均衡等问题。同时Linkis1.0还会提供对Nacos的支持;整个Linkis是一个完全的微服务架构,每个业务流程都是需要多个微服务协同完成的。
+
+## **架构图**
+
+![](../../Images/Architecture/linkis-microservice-gov-01.png)
+
+## **架构描述**
+
+1. Linkis Gateway作为Linkis的网关入口,主要承担了请求转发、用户访问认证、WebSocket通信等职责。Linkis1.0的Gateway还新增了基于Label的路由转发能力。Linkis在Spring
+Cloud Gateway中,实现了WebSocket路由转发器,用于与客户端建立WebSocket连接,建立连接成功后,会自动分析客户端的WebSocket请求,通过规则判断出请求该转发给哪个后端微服务,从而将WebSocket请求转发给对应的后端微服务实例。
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[进入Linkis Gateway](Gateway.md)
+
+2. Linkis Eureka
+主要负责服务注册与发现,Eureka由多个instance(服务实例)组成,这些服务实例可以分为两种:Eureka Server和Eureka Client。为了便于理解,我们将Eureka Client再分为Service Provider和Service Consumer。Eureka Server提供服务注册和发现;Service Provider(服务提供方)将自身服务注册到Eureka,从而使服务消费方能够找到它;Service Consumer(服务消费方)从Eureka获取注册服务列表,从而能够消费服务。
+
+3. Linkis基于Feign实现了一套自己的底层RPC通信方案。Linkis RPC作为底层的通信方案,将提供SDK集成到有需要的微服务之中。一个微服务既可以作为请求调用方,也可以作为请求接收方。作为请求调用方时,将通过Sender请求目标接收方微服务的Receiver,作为请求接收方时,将提供Receiver用来处理请求接收方Sender发送过来的请求,以便完成同步响应或异步响应。
+   
+   ![](../../Images/Architecture/linkis-microservice-gov-03.png)
\ No newline at end of file
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Computation_Orchestrator_architecture.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Computation_Orchestrator_architecture.md
new file mode 100644
index 0000000..6787bb4
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Computation_Orchestrator_architecture.md
@@ -0,0 +1,18 @@
+## **Computation-Orchestrator架构**
+
+### **一. Computation-Orchestrator概念**
+
+Computation-Orchestrator是Orchestrator的标准实现,支持交互式引擎的任务编排。Computation-Orchestrator提供了Converter、Parser、Validator、Planner、Optimizer、Execution、Reheater的常用实现方法。Computation-Orchestrator与AM对接,负责交互式任务执行,可以与Entrance对接,也可以与其它任务提交端直接对接,比如IOClient。Computation-Orchestrator同时支持同步和异步方式提交任务,并且支持获取多个Session实现隔离。
+
+### **二. Computation-Orchestrator架构**
+
+Entrance提交任务到Computation-Orchestrator执行,会同时注册日志、进度和结果集的Listener。任务执行过程中收到的任务日志、任务进度,都会通过已注册的listener将任务信息返回给Entrance。任务执行结束后,会生成结果集的Response,并调用结果集Listener。其中,Orchestrator支持Entrance提交绑定单个EngineConn的任务,通过在任务中添加BindEngineLabel实现。
+
+![](../../Images/Architecture/orchestrator/computation-orchestrator/linkis-computation-orchestrator-01.png)
+
+### **三. Computation-Orchestrator执行流程**
+
+Computation-Orchestrator执行流程如下图所示
+
+![](../../Images/Architecture/orchestrator/computation-orchestrator/linkis-computation-orchestrator-02.png)
+
diff --git "a/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Images/1.0\344\270\255\347\224\250\346\210\267\351\234\200\345\256\236\347\216\260\347\232\204\346\216\245\345\217\243\345\222\214\347\261\273.png" "b/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Images/1.0\344\270\255\347\224\250\346\210\267\351\234\200\345\256\236\347\216\260\347\232\204\346\216\245\345\217\243\345\222\214\347\261\273.png"
new file mode 100644
index 0000000..4830d0f
Binary files /dev/null and "b/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Images/1.0\344\270\255\347\224\250\346\210\267\351\234\200\345\256\236\347\216\260\347\232\204\346\216\245\345\217\243\345\222\214\347\261\273.png" differ
diff --git "a/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Images/\344\272\244\344\272\222\346\265\201\347\250\213.png" "b/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Images/\344\272\244\344\272\222\346\265\201\347\250\213.png"
new file mode 100644
index 0000000..9e76bdd
Binary files /dev/null and "b/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Images/\344\272\244\344\272\222\346\265\201\347\250\213.png" differ
diff --git "a/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Images/\347\233\270\345\205\263\346\216\245\345\217\243\345\222\214\347\261\273.png" "b/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Images/\347\233\270\345\205\263\346\216\245\345\217\243\345\222\214\347\261\273.png"
new file mode 100644
index 0000000..0c20d81
Binary files /dev/null and "b/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Images/\347\233\270\345\205\263\346\216\245\345\217\243\345\222\214\347\261\273.png" differ
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_CheckRuler.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_CheckRuler.md
new file mode 100644
index 0000000..6c89f13
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_CheckRuler.md
@@ -0,0 +1,27 @@
+CheckRuler架构设计
+======
+
+CheckRuler是在Converter和Validator阶段执行检查的规则,用于检验传递参数的合法性和完整性。除了自带的几种必要的Ruler,其余可以根据用户自身需要进行实现。
+
+**Convert阶段:**
+
+| 类名                                     | 继承类               | 作用                    |
+|------------------------------------------|----------------------|-------------------------|
+| JobReqParamCheckRuler                    | ConverterCheckRuler | 校验提交的job参数完整性 |
+| PythonCodeConverterCheckRuler            | ConverterCheckRuler | Python代码规范性检测    |
+| ScalaCodeConverterCheckRuler             | ConverterCheckRuler | Scala代码规范检测       |
+| ShellDangerousGrammarConverterCheckRuler | ConverterCheckRuler | Shell脚本代码规范性检测 |
+| SparkCodeCheckConverterCheckRuler        | ConverterCheckRuler | Spark代码规范性检测     |
+| SQLCodeCheckConverterCheckRuler          | ConverterCheckRuler | SQL代码规范性检测       |
+| SQLLimitConverterCheckRuler              | ConverterCheckRuler | SQL代码长度检测         |
+| VarSubstitutionConverterCheckRuler       | ConverterCheckRuler | 变量替换规则校验        |
+
+**Validator阶段:**
+
+| 类名                          | 继承类                 | 作用                |
+|-------------------------------|------------------------|---------------------|
+| LabelRegularCheckRuler        | ValidatorCheckRuler    | Job的标签合法性校验 |
+| DefaultLabelRegularCheckRuler | LabelRegularCheckRuler | 实现类              |
+| RouteLabelRegularCheckRuler   | LabelRegularCheckRuler | 实现类              |
+
+如果需要自定义新的Validator阶段校验规则、校验更多的标签类型,可以继承LabelRegularCheckRuler并重写customLabel值即可,写法可参考下面的示意代码。
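+
+以下是一个自定义标签校验规则的示意写法(Scala)。LabelRegularCheckRuler 的真实签名请以 Linkis Orchestrator 源码为准,这里为了让示例自洽,用一个假设的 DemoLabelRegularCheckRuler 代替:
+
+```scala
+// 为使示例自洽,这里先补充一个极简的父类定义(真实的 LabelRegularCheckRuler 请以源码为准)
+abstract class DemoLabelRegularCheckRuler {
+  val customLabel: List[String]                       // 需要校验的标签 key 列表
+  def check(labels: Map[String, String]): Unit =
+    customLabel.foreach { key =>
+      require(labels.contains(key), s"missing required label: $key")
+    }
+}
+
+// 示意:要求 Job 必须携带 "route" 标签的自定义校验规则,只需重写 customLabel
+class RouteRequiredCheckRuler extends DemoLabelRegularCheckRuler {
+  override val customLabel: List[String] = List("route")
+}
+```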
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_ECMP_architecture.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_ECMP_architecture.md
new file mode 100644
index 0000000..6ea3abf
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_ECMP_architecture.md
@@ -0,0 +1,32 @@
+EngineConnPlugin架构设计
+------------------------
+
+EngineConnPlugin用于将原本实现一个新引擎,需要实现的相关接口和类,以及需要拆分的Entrance-EngineManager-Engine三层模块体系,融合到了一个接口之中,简化用户实现新引擎的流程和代码,真正做到只要实现一个类,就能接入一个新引擎。
+
+### EngineConnPlugin 架构实现
+
+1、Linkis 0.X版本痛点与思考
+
+Linkis 0.X版本没有Plugin的概念,用户新增一个引擎,需要同时实现Entrance、EngineManager、Engine相关接口,开发工作量和维护工作量都较大,修改也比较复杂。
+
+以下是用户Linkis0.X实现一个新引擎需要实现的相关接口和类:
+
+![](Images/相关接口和类.png)
+
+2、新版本的改进
+
+Linkis 1.0版本重构了引擎从创建到任务执行的整个逻辑,将Entrance简化为一个服务,通过标签来对接不同的Engine,EngineManager也简化为一个服务。Engine定义为EngineConn连接器+Executor执行器,并且抽象成多个服务和模块,由用户根据需要灵活选取需要的服务和模块。这样大大减少了新增引擎的开发和维护工作量。并且plugin会将引擎的lib和conf动态添加到bml进行版本管理。
+
+以下为Linkis1.0.0,实现一个新引擎,用户需实现的接口和类:
+
+![](Images/1.0中用户需实现的接口和类.png)
+
+其中EngineConnResourceFactory和EngineLaunchBuilder为非必需实现接口,只有EngineConnFactory为必需实现接口。
+
+### EngineConnPlugin交互流程
+
+EngineConnPlugin提供了Server服务,用于启动和加载所有的引擎插件,以下给出了一个新引擎启动,访问了EngineConnPlugin-Server的全部流程:
+
+![](Images/交互流程.png)
\ No newline at end of file
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_Execution_architecture_doc.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_Execution_architecture_doc.md
new file mode 100644
index 0000000..1bf3e5f
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_Execution_architecture_doc.md
@@ -0,0 +1,19 @@
+Orchestrator-Execution架构设计
+===
+
+
+## 一. Execution概念
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Orchestrator-Execution模块是Orchestrator的执行模块,用于调度执行编排后的PhysicalTree,在执行的时候会从JobEndExecTask开始进行依赖执行。Execution由Orchestration的同步执行和异步执行发起调用,随后Execution负责调度RootExecTask(PhysicalTree的根节点)及整棵树的ExecTask运行,并封装所有ExecTask的执行响应进行返回。执行采用生产者消费者的异步执行模式运行。
+
+## 二. Execution架构
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Execution在接收到RootExecTask后,会将RootExecTask交给TaskManager进行调度执行(生产),然后TaskConsumer会从TaskManager获取当前可以依赖执行的任务进行消费,拿到可以运行的ExecTask后提交给TaskScheduler执行。
+
+![execution](../../Images/Architecture/orchestrator/execution/execution.png)
+
+不管是异步执行还是同步执行,都是通过上面的流程进行异步调度执行,同步执行只是会再调用ExecTask的waitForCompleted方法,完成同步响应获取。整个执行过程中ExecTask的状态、结果集、日志等信息通过ListenerBus进行投递和通知。
+
+## 三. Execution整体流程
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Execution的整体执行流程如下所示,下图以交互式执行(ComputationExecution)流程为例:
+
+![execution01](../../Images/Architecture/orchestrator/execution/execution01.png)
+
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_Operation_architecture_doc.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_Operation_architecture_doc.md
new file mode 100644
index 0000000..94fd889
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_Operation_architecture_doc.md
@@ -0,0 +1,26 @@
+Orchestrator-Operation架构设计
+===
+
+## 一. Operation概念
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Operation操作用于扩展异步执行期间对任务的额外操作。在调用Orchestration的异步执行后,调用者获取到的是OrchestrationFuture,该接口里面只提供了cancel、waitForCompleted、getResponse等操作任务的方法,但是当我们需要获取任务日志、进度、暂停任务时没有调用入口,这也是Operation定义的初衷:用于对外扩展更多对异步运行任务的额外能力。
+
+
+## 二. Operation类图
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Operation采用的是用户扩展的方式,用户需要扩展操作时,只需要按照我们的Operation接口实现对应的类,然后注册到Orchestrator,不需要改动底层代码即可以拥有对应的操作。整体类图如下:
+
+![operation_class](../../Images/Architecture/orchestrator/operation/operation_class.png)
+
+
+## 三. Operation使用
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Operation的使用主要分为两步,首先是Operation注册,然后是Operation调用:
+1. 注册方式,首先是按照第二章的Operation接口实现对应的Operation实现类,然后通过`OrchestratorSessionBuilder`完成Operation的注册,这样通过`OrchestratorSessionBuilder`创建出来的OrchestratorSession中的SessionState是持有Operation的;
+2. Operation的使用需要在通过OrchestratorSession完成编排后,调用Orchestration的异步执行方法asyncExecute获取OrchestrationFuture才可以进行;
+3. 接着通过Operation操作name,如“LOG”日志,调用`OrchestrationFuture.operate("LOG")` 进行操作,获取对应Operation的返回对象。
+
+## 四. Operation例子
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;以下以日志操作为例进行说明。LogOperation的定义在第二章有说明,LogOperation实现了Operation和TaskLogListener两个接口。整体日志获取流程如下:
+1. 当Orchestrator接收到任务日志后,会通过listenerBus推送event给到LogOperation进行消费;
+2. 当LogOperation获取到日志后,会调用日志处理器LogProcessor进行写日志(writeLog),该LogProcessor由调用方通过调用`OrchestrationFuture.operate("LOG")`方法获取到;
+3. LogProcessor有两种给到外部获取日志的方式。一种是通知模式,外部调用方可以注册日志listener给到日志处理器,当日志处理器的writeLog方法被调用后,会调用所有的listener进行通知;
+4. 另一种是主动拉取模式,通过调用LogProcessor的getLog方法主动获取日志。整体使用方式可参考下面的示意代码。
+
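+下面用一个极简的 Scala 示意说明“按名称扩展异步任务操作”的思路。其中的类型均为演示假设,与 Linkis 真实的 Operation、OrchestrationFuture 接口无关,仅表达注册、回调写入与按名称拉取的使用方式:
+
+```scala
+// 一个极简的 Operation 模式示意:按名称注册操作,调用方按名称获取并执行
+trait DemoOperation[T] { def name: String; def apply(): T }
+
+class DemoOrchestrationFuture(operations: Map[String, DemoOperation[_]]) {
+  def operate[T](name: String): T =
+    operations.getOrElse(name, throw new NoSuchElementException(s"no operation: $name"))
+      .apply().asInstanceOf[T]
+}
+
+// 假设的日志操作:维护一个日志缓冲,支持主动拉取
+class DemoLogOperation extends DemoOperation[Seq[String]] {
+  private val buffer = scala.collection.mutable.ArrayBuffer.empty[String]
+  def write(line: String): Unit = buffer += line     // 由执行侧回调写入日志(对应 writeLog)
+  override val name: String = "LOG"
+  override def apply(): Seq[String] = buffer.toSeq   // 调用方通过 operate("LOG") 主动拉取(对应 getLog)
+}
+
+// 使用示意:
+//   val logOp = new DemoLogOperation
+//   val future = new DemoOrchestrationFuture(Map(logOp.name -> logOp))
+//   logOp.write("task started")
+//   val logs = future.operate[Seq[String]]("LOG")
+```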
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_Reheater_architecture.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_Reheater_architecture.md
new file mode 100644
index 0000000..0eba15a
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_Reheater_architecture.md
@@ -0,0 +1,12 @@
+## **Orchestrator Reheater架构**
+
+### **一. Reheater概念**
+
+Orchestrator-Reheater模块是Orchestrator的重放模块,用于在执行过程中,动态调整JobGroup的执行计划,为JobGroup动态添加Job、Stage和Task,从而应对网络等原因引起的子任务失败。目前主要有任务相关的TaskReheater,包含重试类型的RetryTaskReheater。
+
+### **二. Reheater架构图**
+
+![](../../Images/Architecture/orchestrator/reheater/linkis-orchestrator-reheater-01.png)
+
+Reheater在任务执行过程中,会收到ReheaterEvent,从而会对编排后的PhysicalTree进行调整,动态添加Job、Stage、Task。目前常用的有TaskReheater,包含重试类型的RetryTaskReheater、切换类型的SwitchTaskReheater,以及执行失败任务时的任务信息写入PlaybackService的PlaybackWrittenTaskReheater。
+
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_Transform_architecture.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_Transform_architecture.md
new file mode 100644
index 0000000..bbf0ef3
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_Transform_architecture.md
@@ -0,0 +1,12 @@
+## **Orchestrator-Transform架构**
+
+### **一. Transform概念**
+
+Orchestrator中定义了任务调度编排不同阶段的结构,从ASTTree到LogicalTree,再到PhysicalTree,这些不同结构的转换,需要用到Transform模块。Transform模块定义了转换过程,Convert需要调用各种Transform,来进行任务结构的转换和生成。
+
+### **二. Transform架构**
+
+Transform嵌入在整个转换过程中,从Parser到Execution,每个阶段间会有Transform的实现类,分别将初始的JobReq转换成ASTTree、LogicalTree和PhysicalTree,PhysicalTree提交Execution执行。
+
+![](../../Images/Architecture/orchestrator/transform/linkis-orchestrator-transform-01.png)
+
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_architecture_doc.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_architecture_doc.md
new file mode 100644
index 0000000..c4b14ad
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/Orchestrator_architecture_doc.md
@@ -0,0 +1,113 @@
+Orchestrator 整体架构设计
+===
+
+## 一. Orchestrator概念
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Orchestrator计算编排是Linkis1.0的核心价值实现,基于Orchestrator可以实现全栈引擎+丰富计算策略的支持,通过对用户提交的任务进行编排,可以实现对双读、双写、AB等策略类型进行支持。并通过和标签进行配合可以对多种任务场景进行支持:
+- 当Orchestrator模块和Entrance进行结合的时候,可以完成对0.X的交互式计算场景进行支持;
+- 当Orchestrator模块和引擎连接器EngineConn进行结合的时候,可以完成对常驻式和一次性作业场景进行支持;
+- 当Orchestrator模块和Linkis-Client进行对接时,作为RichClient可以对存储式作业场景进行支持,如支持Hbase的双读双写;
+
+![Orchestrator01](../../Images/Architecture/orchestrator/overall/Orchestrator01.png)
+
+## 二. Orchestrator整体架构:
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Orchestrator编排整体架构参考Apache Calcite的架构进行实现,将一个任务的编排划分为如下几步:
+- Converter(转换):完成将用户提交的JobReq(任务请求)转换为编排的Job,该步骤会对用户提交的Job进行参数检查和信息补充,如变量替换等
+- Parser(解析):完成对Job的解析,并拆分封装出Job的Stage信息,形成Ast树
+- Validator(校验): 完成对Job和Stage的信息检验,如必须的Label信息检验
+- Planner(计划):完成对Ast阶段的Job和Stage的对象转换为Logical计划,形成Logical树,将Job和Stage分别转换为LogicalTask,并封装执行单元的LogicalTask,如对于交互式的CodeLogicalUnit,转换为CodeLogicalUnitTask
+- Optimizer(优化阶段):完成对Logical Tree转换为Physical Tree,并对树进行优化,如命中缓存型的优化
+- Execution(执行):调度执行物理计划的Physical Tree,按照依赖进行执行
+- Reheater(再热):检测在执行阶段的可重试的失败Task(如ReTryException),调整物理计划重新执行
+- Plugins(插件): 插件模块,主要用于Orchestrator对接外部模块进行使用,如EngineConnManagerPlugin用于对接LinkisManager和EngineConn完成对引擎的申请和任务执行
+
+![Orchestrator_arc](../../Images/Architecture/orchestrator/overall/Orchestrator_arc.png)
+
+## 三. Orchestrator实体流转:
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Orchestrator编排过程中,主要是完成对输入的JobReq进行转换,主要分为AST、Logical、Physical三个阶段,最终执行的是Physical阶段的ExecTask。整个过程如下:
+
+![orchestrator_entity](../../Images/Architecture/orchestrator/overall/orchestrator_entity.png)
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;以下以交互式场景为例简单介绍:以codeLogicalUnit为`select * from test`的交互式Job为例,可视化各个阶段的树形图。
+1. AST阶段:由Parser对ASTJob进行解析后的结构,Job和Stage有属性进行关联,Job里面有getStage信息,Stage里面有Job信息,不是通过parents和children决定(parents和children都为null):
+
+![Orchestrator_ast](../../Images/Architecture/orchestrator/overall/Orchestrator_ast.png)
+
+2. Logical阶段:由Planner对ASTJob进行转换后的结构,包含Job/Stage/CodeTask,存在树形结构,关系由parents和children进行决定,start和end由Desc决定:
+
+![Orchestrator_Logical](../../Images/Architecture/orchestrator/overall/Orchestrator_Logical.png)
+
+3. Physical阶段:由Optimizer转换后的结构,包含Job/Stage/Code ExecTask,存在树形结构,关系由parents和children进行决定,start和end由Desc决定:
+
+![Orchestrator_Physical](../../Images/Architecture/orchestrator/overall/Orchestrator_Physical.png)
+
+## 四. Orchestrator Core各层级模块详解
+
+### 4.1 Converter模块:
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Converter主要用于将一个JobReq转换成一个Job,并完成对JobReq的检查和补充、包括参数检查、变量补充等。JobReq是用户实际提交的一个作业,这个作业可以是交互式作业(这时Orchestrator会与Entrance进行集成,对外提供交互式访问能力),也可以是常驻式/一次性作业(这时Orchestrator会与EngineConn进行集成,直接对外提供执行能力),也可以是存储式作业,这时Orchestrator会与Client进行集成,将直接与EngineConn进行对接。相对应的JobReq有很多实现类,基于场景类型分为ComputationJobReq(交互式)、ClusteredJobReq(常驻式)和StorageJobReq(存储型)。
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 这里需区分一下Orchestrator和Entrance的职责范围,一般情况下,Orchestrator对于RichClient、Entrance、EngineConn是必需单元,但是Entrance则不是必需的,所以Converter会提供一系列的检查拦截单元,用于自定义变量的替换和CS相关文件、自定义变量的补充。
+
+### 4.2 Parser模块:
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Parser主要用于将一个Job解析为多个Stage。按照不同的计算策略,在Parser阶段生成的AstTree也会不相同:对于普通的交互式计算策略,Parser会将Job解析为一个Stage;但在双读、双写等计算策略下,会将Job解析为多个Stage,每个Stage执行相同的操作,但面向不同的集群。
+
+### 4.3 Validator模块:
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; AstTree在plan生成可执行的Tasks之前,还需先经过Validator。Validator主要用于校验Ast阶段的Job和Stage的合法性,并补充一些必要的信息,例如必要标签信息检查和补充。
+
+### 4.4 Planner模块
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Planner模块主要完成对Ast阶段的Job和Stage转换为对应的LogicalTask,形成LogicalTree。Planner会构造LogicalTree,将Job解析为JobEndTask和JobStartTask,将Stage解析为StageEndTask和StageStartTask,以及将实际的执行单元转换为具体的LogicalTask(如对于交互式的CodeLogicalUnit,转换为CodeLogicalUnitTask)。如下图:
+
+![Orchestrator_Logical](../../Images/Architecture/orchestrator/overall/Orchestrator_Logical.png)
+
+### 4.5 Optimizer模块
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Optimizer是Orchestrator的优化器,主要用于优化整个LogicalTree并转换为PhysicalTree的ExecTask。根据优化的类型不同,Optimizer主要分为两个步骤:第一步完成对LogicalTree的优化,第二步完成对LogicalTree的转换。已经实现的优化策略主要有以下几种:
+- CacheTaskOptimizer(TaskOptimizer级):判断ExecTask是否可以使用缓存的执行结果,如果命中cache,则调整Tree。
+- YarnQueueOptimizer(TaskOptimizer级):如果用户指定提交的队列现在资源很紧张,且该用户存在其他可用空闲队列,自动为用户做优化。
+- PlaybackOptimizer(TaskOptimizer级):主要用于支持回放。即多写时,如果某个集群存在需要回放的任务,先根据任务时延要求,进行一定数量的任务回放,以便追回。同时对该任务进行关联分析,如果与历史回放任务关联则改为将任务信息写入PlaybackService(或如果是select类别的不执行),不关联则继续执行。
+- ConfigurationOptimizer(StageOptimizer级):优化用户的运行时参数或启动参数。
+
+
+### 4.6 Execution模块
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Execution是Orchestrator的执行模块,用于执行PhysicalTree,支持同步执行和异步执行,执行的过程中通过解析PhysicalTree进行依赖执行。
+
+### 4.7 Reheater模块
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Reheater再热允许Execution在执行过程中,动态调整PhysicalTree的执行计划,比如为申请引擎失败的ExecTask发起重新执行等
+
+## 五. Orchestrator编排流程
+
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 对于使用方来说,整体编排分为三步:
+1. 第一步,通过Orchestrator获取OrchestratorSession,该对象类似于SparkSession,一般为进程单例;
+2. 第二步,通过OrchestratorSession进行编排,获取Orchestration对象,即编排后返回的唯一对象;
+3. 第三步,调用Orchestration的执行方法进行执行,支持异步和同步两种执行模式,调用顺序可参考下面的示意代码。
+整体流程如下图所示:
+
+![Orchestrator_progress](../../Images/Architecture/orchestrator/overall/Orchestrator_progress.png)
+
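+以下用一段示意性的 Scala 代码串联上述三步的调用顺序。其中的类型与方法均为演示用的假设(以 Demo 前缀命名),实际的 OrchestratorSession、Orchestration 接口请以源码为准:
+
+```scala
+// 示意:使用方视角的三步编排流程(类型均为演示假设,非 Linkis 真实 API)
+object DemoOrchestratorUsage {
+  // 第一步:获取进程级单例的 session(用法上类似 SparkSession)
+  lazy val session: DemoOrchestratorSession = DemoOrchestratorSession.getOrCreate()
+
+  def run(code: String): String = {
+    // 第二步:对任务进行编排,得到唯一的 Orchestration 对象
+    val orchestration = session.orchestrate(code)
+    // 第三步:同步执行(也可以改为异步执行后再处理返回的 Future)
+    orchestration.execute()
+  }
+}
+
+// 为使示例自洽而补充的最小定义
+class DemoOrchestration(code: String) { def execute(): String = s"executed: $code" }
+class DemoOrchestratorSession { def orchestrate(code: String) = new DemoOrchestration(code) }
+object DemoOrchestratorSession {
+  private lazy val instance = new DemoOrchestratorSession
+  def getOrCreate(): DemoOrchestratorSession = instance
+}
+```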
+## 六. Orchestrator常用物理计划示例
+
+1. 交互式分析,拆分成两个Stage的类型
+
+![Orchestrator_computation](../../Images/Architecture/orchestrator/overall/Orchestrator_computation.png)
+
+2. Command等只有function类的ExecTask
+
+![Orchestrator_command](../../Images/Architecture/orchestrator/overall/Orchestrator_command.png)
+
+3. Reheat情形
+
+![Orchestrator_reheat](../../Images/Architecture/orchestrator/overall/Orchestrator_reheat.png)
+
+4. 事务型
+
+![Orchestrator_transication](../../Images/Architecture/orchestrator/overall/Orchestrator_transication.png)
+
+5. 命中缓存型
+
+![Orchestrator_cache](../../Images/Architecture/orchestrator/overall/Orchestrator_cache.png)
+
+
+
+
+
+
+
+
+
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/README.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/README.md
new file mode 100644
index 0000000..4ca01b2
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Orchestrator/README.md
@@ -0,0 +1,55 @@
+## Orchestrator 架构设计
+
+Linkis的计算编排模块,提供了全栈引擎和丰富的计算策略的支持,通过编排方式实现对双读、双写、AB等策略的支持;并且通过与标签系统整合,实现对交互式计算作业、常驻式作业以及存储式作业等多种作业场景的支持。
+
+#### 架构示意图
+
+![Orchestrator架构图](../../Images/Architecture/orchestrator/linkis_orchestrator_architecture.png)  
+
+
+#### 模块介绍
+
+##### 1. Orchestrator-Core
+
+核心模块,将任务编排拆分为七个步骤,分别对应的接口为Converter(转换), Parser(解析), Validator(校验), Planner(计划), Optimizer(优化),Execution(执行), Reheater(再热/重试),它们之间的实体流转图如下:  
+![Orchestrator实体流转](../../Images/Architecture/orchestrator/overall/orchestrator_entity.png)
+
+核心的接口定义如下:
+
+| 核心顶层接口/类 | 核心功能 |
+| --- | --- | 
+| `ConverterTransform`| 完成对用户提交的req请求转换为编排的Job,同时会对请求做参数检查和信息补充 |
+| `ParserTransform`| 完成对Job的解析和拆分,拆分成多个Stage阶段信息,构成AST树 |
+| `ValidatorTransform` | 对Job和Stage的信息校验,例如对附带的Label信息的校验 |
+| `PlannerTransform` | 将AST阶段的Job和Stage转换成逻辑计划,生成Logical树,其中Job和Stage分别转换为LogicalTask |
+| `OptimizerTransform` | 完成Logical Tree到 Physical Tree的转换,即物理计划转换, 转换前还会对Logical树做优化处理 |
+| `Execution` | 调度执行物理计划的Physical Tree,处理执行子作业之间的依赖关系 |
+| `ReheaterTransform` | 对Execution执行过程中可重试的失败作业的重新调度执行 |
+
+##### 2. Computation-Orchestrator
+
+是针对交互式计算场景下Orchestrator的标准实现,对抽象接口都做了默认实现,其中包含例如对SQL等语言代码的转换规则集合,以及请求执行交互式作业的具体逻辑。
+典型的类定义如下:
+
+| 核心顶层接口/类 | 核心功能 |
+| --- | --- | 
+| `CodeConverterTransform`| 针对请求中附带的代码信息的解析转换, 例如 Spark Sql, Hive Sql, Shell 和 Python|
+| `CodeStageParserTransform` | 解析拆分Job,针对CodeJob,即附带代码信息的Job|
+| `EnrichLabelParserTransform` | 解析拆分Job的同时填入标签信息 |
+| `TaskPlannerTransform` | 交互式计算场景下,将Job拆分成的Stage信息转化为逻辑计划,即Logical Tree |
+| `CacheTaskOptimizer` | 对逻辑计划中的AST树增加缓存节点,优化后续的执行 |
+| `ComputePhysicalTransform` | 交互式计算场景下,将逻辑计划转化为物理计划 |
+| `CodeLogicalUnitExecTask` | 交互式计算场景下,物理计划中的最小执行单元|
+| `ComputationTaskExecutionReceiver` | Task执行的RPC回调类,接收任务的状态、进度等回调信息|
+
+##### 3. Code-Orchestrator
+
+是针对常驻型和存储型作业场景下Orchestrator的标准实现
+
+##### 4. Plugins/Orchestrator-ECM-Plugin
+
+提供了Orchestrator对接LinkisManager 和 EngineConn所需要的接口方法,简述如下:
+
+| 核心顶层接口/类 | 核心功能 |
+| --- | --- | 
+| `EngineConnManager` | 提供了请求EngineConn资源,向EngineConn提交执行请求的方法,并主动缓存了可用的EngineConn|
\ No newline at end of file
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/BML.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/BML.md
new file mode 100644
index 0000000..e385cad
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/BML.md
@@ -0,0 +1,94 @@
+
+## 背景
+
+BML(物料库服务)是linkis的物料管理系统,主要用来存储用户的各种文件数据,包括用户脚本、资源文件、第三方Jar包等,也可以存储引擎运行时需要使用到的类库。
+
+具备以下功能点:
+
+1)、支持各种类型的文件。支持文本形式和二进制形式的文件,大数据领域的用户可以将他们的脚本文件、物料压缩包等都存储到本系统中。
+
+2)、服务无状态,多实例部署,做到服务高可用。本系统在部署的时候,可以进行多实例部署,每个实例对外独立提供服务,不会互相干扰,所有的信息都是存储在数据库中进行共享。
+
+3)、使用方式多样。提供Rest接口和SDK两种方式,用户可以根据自己的需要进行选择。
+
+4)、文件采用追加方式,避免过多的HDFS小文件。HDFS小文件多会导致HDFS整体性能的下降,我们采用了文件追加的方式,将多个版本的资源文件合成一个大文件,有效减少了HDFS的文件数量。
+
+5)、精确权限控制,用户资源文件内容安全存储。资源文件往往会有重要的内容,用户往往只希望自己可读。
+
+6)、提供了文件上传、更新、下载等操作任务的生命周期管理。
+
+## 架构图
+
+![BML架构图](../../Images/Architecture/bml-02.png)
+
+## 架构说明
+
+1、Service层 包含资源管理、上传资源、下载资源、共享资源还有工程资源管理。
+
+资源管理负责资源的增删改查操作,访问权限控制,文件是否过期等基本操作。
+
+2、文件版本控制
+每个BML资源文件都是具有版本信息的,同一个资源每次更新操作都会产生一个新的版本,当然也支持历史版本的查询和下载操作。BML使用版本信息表记录了每个版本的资源文件在HDFS存储中的偏移位置和大小,可以在一个HDFS文件上存储多个版本的数据。
+
+3、资源文件存储
+主要使用HDFS文件作为实际的数据存储,HDFS文件可以有效保证物料库文件不被丢失,文件采用追加方式,避免过多的HDFS小文件。
+
+### 核心流程
+
+**上传文件:**
+
+1.  判断用户上传文件的操作类型,属于首次上传还是更新上传。如果是首次上传,需要新增一条资源信息记录,系统会为这个资源生成一个全局唯一标识的resource_id和一个资源放置的位置resource_location,资源A的第一个版本A1需要在HDFS文件系统中resource_location位置进行存储,存储完之后,就可以得到第一个版本,记为V00001;如果是更新上传,则需要查找上次最新的版本。
+
+2.  上传文件流到指定的HDFS文件,如果是更新则采用文件追加的方式加到上次内容的末尾。
+
+3.  新增一条版本记录,每次上传都会产生一条新的版本记录。除了记录这个版本的元数据信息外,最重要的是记录了该版本的文件的存储位置,包括文件路径,起始位置,结束位置。
+
+**下载文件:**
+
+1.  用户下载资源的时候,需要指定两个参数一个是resource_id,另外一个是版本version,如果不指定version的话,默认下载最新版本。
+
+2.  用户传入resource_id和version两个参数到系统之后,系统查询resource_version表,查到对应的resource_location、start_byte和end\_byte进行下载:通过流处理的skipByte方法,将resource\_location的前(start_byte-1)个字节跳过,然后读取至end_byte字节处。读取成功之后,将流信息返回给用户(字节区间读取的示意代码见下文)。
+
+3.  在resource_download_history中插入一条下载成功的记录
+
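+下面给出按 start_byte/end_byte 从 HDFS 追加文件中读取某一版本内容的示意代码(Scala,基于 Hadoop FileSystem API)。方法名与字段语义按照上文描述进行假设,仅供参考:
+
+```scala
+import org.apache.hadoop.conf.Configuration
+import org.apache.hadoop.fs.{FileSystem, Path}
+
+object DemoBmlRangeReader {
+  // 从 resourceLocation 对应的追加大文件中读取 [startByte, endByte] 区间,即某一版本的内容
+  // 按上文描述,假设版本记录中的字节位置从1开始计数
+  def readVersion(resourceLocation: String, startByte: Long, endByte: Long): Array[Byte] = {
+    val fs = FileSystem.get(new Configuration())
+    val in = fs.open(new Path(resourceLocation))
+    try {
+      val buffer = new Array[Byte]((endByte - startByte + 1).toInt)
+      in.seek(startByte - 1)     // 跳过前 (start_byte - 1) 个字节
+      in.readFully(buffer)       // 读取至 end_byte 字节处
+      buffer
+    } finally in.close()
+  }
+}
+```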
+## 数据库设计
+
+1、资源信息表(resource)
+
+| 字段名            | 作用                         | 备注                             |
+|-------------------|------------------------------|----------------------------------|
+| resource_id       | 全局唯一标识一个资源的字符串 | 可以采用UUID进行标识             |
+| resource_location | 存放资源的位置               | 例如 hdfs:///tmp/bdp/\${用户名}/ |
+| owner             | 资源的所属者                 | 例如 zhangsan                    |
+| create_time       | 记录创建时间                 |                                  |
+| is_share          | 是否共享                     | 0表示不共享,1表示共享           |
+| update\_time      | 资源最后的更新时间           |                                  |
+| is\_expire        | 记录资源是否过期             |                                  |
+| expire_time       | 记录资源过期时间             |                                  |
+
+2、资源版本信息表(resource_version)
+
+| 字段名            | 作用               | 备注     |
+|-------------------|--------------------|----------|
+| resource_id       | 唯一标识资源       | 联合主键 |
+| version           | 资源文件的版本     |          |
+| start_byte        | 资源文件开始字节数 |          |
+| end\_byte         | 资源文件结束字节数 |          |
+| size              | 资源文件大小       |          |
+| resource_location | 资源文件放置位置   |          |
+| start_time        | 记录上传的开始时间 |          |
+| end\_time         | 记录上传的结束时间 |          |
+| updater           | 记录更新用户       |          |
+
+3、资源下载历史表(resource_download_history)
+
+| 字段        | 作用                      | 备注                           |
+|-------------|---------------------------|--------------------------------|
+| resource_id | 记录下载资源的resource_id |                                |
+| version     | 记录下载资源的version     |                                |
+| downloader  | 记录下载的用户            |                                |
+| start\_time | 记录下载时间              |                                |
+| end\_time   | 记录结束时间              |                                |
+| status      | 记录是否成功              | 0表示成功,1表示失败           |
+| err\_msg    | 记录失败原因              | null表示成功,否则记录失败原因 |
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Cache.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Cache.md
new file mode 100644
index 0000000..d28cbe2
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Cache.md
@@ -0,0 +1,95 @@
+## **CSCache架构**
+### **需要解决的问题**
+
+###  1.1. 内存结构需要解决的问题:
+
+1. 支持按ContextType进行拆分:加快存储和查询性能
+
+2. 支持按不同的ContextID进行拆分:需要完成ContextID间的元数据隔离
+
+3. 支持LRU:按照特定算法进行回收
+
+4. 支持按关键字进行检索:支持通过关键字进行索引
+
+5. 支持索引:支持直接通过ContextKey进行索引
+
+6. 支持遍历:需要支持通过按照ContextID、ContextType进行遍历
+
+###  1.2 加载与解析需要解决的问题:
+
+1. 支持将ContextValue解析成内存数据结构:需要完成对ContextKey和value解析出对应的关键字。
+
+2. 需要与Persistence模块进行对接,完成ContextID内容的加载与解析
+
+###  1.3 Metric和清理机制需要解决的问题:
+
+1. 当JVM内存不够时,能够基于内存使用情况和使用频率进行清理
+
+2. 支持统计每个ContextID的内存使用情况
+
+3. 支持统计每个ContextID的使用频率
+
+## **ContextCache架构**
+
+ContextCache的架构如下图展示:
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-01.png)
+
+1.  ContextService:完成对外接口的提供,包括增删改查;
+
+2.  Cache:完成对上下文信息的存储,通过ContextKey和ContextValue进行映射存储
+
+3.  Index:建立的关键字索引,存储的是上下文信息的关键字和ContextKey的映射;
+
+4.  Parser:完成对上下文信息的关键字解析;
+
+5.  LoadModule:当ContextCache没有对应的ContextID信息时,从持久层完成信息的加载;
+
+6.  AutoClear:当JVM内存不足时,对ContextCache进行按需清理;
+
+7.  Listener:用于收集ContextCache的Metric信息,如:内存占用、访问次数。
+
+## **ContextCache存储结构设计**
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-02.png)
+
+ContextCache的存储结构划分为了三层结构:
+
+**ContextCache:**存储了ContextID和ContextIDValue的映射关系,并能够完成ContextID按照LRU算法进行回收;
+
+**ContextIDValue:**拥有存储了ContextID的所有上下文信息和索引的CSKeyValueContext,并统计ContextID的内存和使用记录。
+
+**CSKeyValueContext:**包含了按照类型存储并支持关键词的CSInvertedIndexSet索引集,还包含了存储ContextKey和ContextValue的存储集CSKeyValueMapSet。
+
+CSInvertedIndexSet:通过CSType进行分类存储关键词索引
+
+CSKeyValueMapSet:通过CSType进行分类存储上下文信息
+
+## **ContextCache UML类图设计**
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-03.png)
+
+## **ContextCache 时序图**
+
+下面的图绘制了以ContextID、KeyWord、ContextType去ContextCache中查对应的ContextKeyValue的整体流程。
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-04.png)
+
+说明:其中ContextIDValueGenerator会去持久层拉取ContextID的Array[ContextKeyValue],并通过ContextKeyValueParser解析ContextKeyValue的关键字存储索引和内容。
+
+ContextCacheService提供的其他接口流程类似,这里不再赘述。
+
+## **KeyWord解析逻辑**
+
+ContextValue具体的实体Bean需要在对应可以作为keyword的get方法上面使用注解\@keywordMethod,比如Table的getTableName方法必须加上\@keywordMethod注解。
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-cache-05.png)
+
+ContextKeyValueParser在解析ContextKeyValue的时候,会扫描传入的具体对象中所有被\@keywordMethod注解修饰的get方法,调用这些get方法并对返回对象取toString,再通过用户可选的规则(分隔符或正则表达式)进行解析,存入keyword集合里面(解析思路可参考本节末尾的示意代码)。
+
+注意事项:
+
+1.  该注解会定义到cs的core模块
+
+2.  被修饰的Get方法不能带参数
+
+3.  Get方法的返回对象的toString方法必须返回的是关键字
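+
+下面用一段示意性的 Scala 代码演示“调用关键字get方法、对返回值toString后按规则拆分入关键字集合”的解析思路。真实实现基于注解加反射扫描;为保持示例可直接运行,这里用显式传入get方法的方式代替注解扫描,类名均为演示假设:
+
+```scala
+// 示意:关键字解析逻辑的简化版(真实实现基于 @keywordMethod 注解 + 反射扫描)
+object DemoKeywordParser {
+  // 对每个关键字来源的 get 方法:调用 -> toString -> 按分隔符/正则拆分 -> 汇入关键字集合
+  def parseKeywords[T](bean: T,
+                       keywordGetters: Seq[T => Any],
+                       splitRegex: String = "[,;\\s]+"): Set[String] =
+    keywordGetters
+      .map(getter => String.valueOf(getter(bean)))   // get 方法不带参数,返回对象的 toString 即关键字来源
+      .flatMap(_.split(splitRegex))
+      .filter(_.nonEmpty)
+      .toSet
+}
+
+// 使用示意:Table 的 getTableName 相当于被 @keywordMethod 修饰的方法
+case class DemoTable(tableName: String, db: String)
+// DemoKeywordParser.parseKeywords(DemoTable("t_user", "dw"), Seq(_.tableName))  // => Set("t_user")
+```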
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Client.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Client.md
new file mode 100644
index 0000000..d72a37c
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Client.md
@@ -0,0 +1,61 @@
+## **CSClient设计的思路和实现**
+
+
+CSClient是每一个微服务和CSServer组进行交互的客户端,CSClient需要满足下面的功能。
+
+1.  微服务向cs-server申请一个上下文对象的能力
+
+2.  微服务向cs-server注册上下文信息的能力
+
+3.  微服务能够向cs-server更新上下文信息的能力
+
+4.  微服务向cs-server获取上下文信息的能力
+
+5.  某一些特殊的微服务能够嗅探到cs-server中已经修改了上下文信息的操作
+
+6.  CSClient在csserver集群都失败的情况下能够给出明确的指示
+
+7.  CSClient需要提供复制csid1所有上下文信息为一个新的csid2用来提供给调度执行的
+
+>   总体的做法是通过的linkis自带的linkis-httpclient进行发送http请求,通过实现各种Action和Result的实体类进行发送请求和接收响应。
+
+### 1. 申请上下文对象的能力
+
+申请上下文对象,例如用户在前端新建了一条工作流,dss-server需要向cs-server申请一个上下文对象。申请上下文对象的时候,需要将工作流的标识信息(工程名、工作流名)通过CSClient发送到CSServer中(这个时候gateway会随机转发给某一个CSServer实例,因为此时请求没有携带csid信息),申请上下文一旦反馈了正确的结果之后,就会返回一个csid并与该工作流进行绑定。
+
+### 2. 注册上下文信息的能力
+
+>   注册上下文的能力,例如用户在前端页面上传了资源文件,文件内容上传到dss-server,dss-server将内容存储到bml中,然后需要将从bml中获得的resourceid和version注册到cs-server中,此时需要使用到csclient的注册的能力,注册的能力是通过传入csid,以及cskey
+>   和csvalue(resourceid和version)进行注册。
+
+### 3. 更新注册的上下文的能力
+
+>   更新上下文信息的能力。举一个例子,比如一个用户上传了一个资源文件test.jar,此时csserver已经有注册的信息,如果用户在编辑工作流的时候,将这个资源文件进行了更新,那么cs-server需要将这个内容进行更新。此时需要调用csclient的更新的接口
+
+### 4. 获取上下文的能力
+
+注册到csserver的上下文信息,在变量替换、资源文件下载、下游节点调用上游节点产生信息的时候,都是需要被读取的,例如engine端在执行代码的时候,需要进行下载bml的资源,此时需要通过csclient和csserver进行交互,获取到文件在bml中的resourceid和version然后再进行下载。
+
+### 5. 某一些特殊的微服务能够嗅探到cs-server中已经修改了上下文信息的操作
+
+这个操作是基于以下的例子:比如一个widget节点和上游的sql节点有很强的联动性,用户在sql节点中写了一个sql,sql的结果集的元数据为a,b,c三个字段,后面的widget节点绑定了这个sql,能够在页面中对这三个字段进行编辑;然后用户更改了sql的语句,元数据变成了a,b,c,d四个字段,此时用户需要手动刷新一下才行。我们希望做到如果脚本发生了改变,widget节点能够自动地更新元数据。这一般采用listener模式,为了简便,也可以采用心跳机制进行轮询。
+
+### 6. CSClient需要提供复制csid1所有上下文信息为一个新的csid2用来提供给调度执行的
+
+用户一旦发布一个工程,就是希望对这个工程的所有信息进行类似于git打上一个tag,这里的资源文件、自定义变量这些都是不会再变的,但是有一些动态信息,如产生的结果集等还是会更新csid的内容。所以csclient需要提供一个csid1复制所有上下文信息的接口以供微服务进行调用
+
+## **ClientListener模块的实现**
+
+对于一个client而言,有时候会希望尽快知道某一个csid和cskey在cs-server中发生了改变,例如visualis的csclient需要在上一个sql节点发生改变时被通知到。服务端有一个listener模块,客户端也需要一个listener模块:例如一个client希望监听某一个csid下某一个cskey的变化,就需要将该cskey注册到对应csserver实例中的callbackEngine;后续如果另外一个client更改了该cskey的内容,当第一个client进行heartbeat的时候,callbackEngine就会将变化通知到该client已监听的所有cskey,这样第一个client就知道该cskey的内容已经发生了变化。当heartbeat返回数据的时候,就应该通知所有注册到ContextClientListenerBus的listener去调用其on方法。
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-01.png)
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-02.png)
+
+## **GatewayRouter的实现**
+
+
+Gateway插件实现了Context请求的转发,转发逻辑通过GatewayRouter进行,分为两种情况:第一种是申请一个context上下文对象的时候,此时CSClient携带的信息中不包含csid,判断逻辑是通过eureka的注册信息,将第一次发送的请求随机转发到一个微服务实例中。  
+第二种情况是携带了ContextID的内容:我们需要对csid进行解析,解析的方式是通过字符串切割的方法,获取到每一个instance的信息,然后根据instance的信息,通过eureka判断该微服务是否还存在,如果存在,就转发到这个微服务实例。
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-client-03.png)
\ No newline at end of file
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_HighAvailable.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_HighAvailable.md
new file mode 100644
index 0000000..05a165f
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_HighAvailable.md
@@ -0,0 +1,86 @@
+## **CS HA架构设计**
+
+### 1. CS HA架构概要
+
+#### (1)CS HA架构图
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-01.png)
+
+#### (2)要解决的问题
+
+-   Context instance对象的HA
+
+-   Client创建工作流时生成CSID请求
+
+-   CS Server的别名列表
+
+-   CSID统一的生成和解析规则
+
+#### (3)主要设计思路
+
+①负载均衡
+
+当客户端创建新的工作流时,等概率随机请求到某台Server的HA模块生成新的HAID,HAID信息包含该主Server信息(以下称主instance),和备选instance,其中备选instance为剩余Server中负载最低的instance,以及一个对应的ContextID。生成的HAID与该工作流绑定且被持久化到数据库,并且随后该工作流所有变更操作请求都将发送至主instance,实现负载的均匀分配。
+
+②高可用
+
+在后续操作中,当客户端或者gateway判定主instance不可用时,会将操作请求转发至备instance处理,从而实现服务的高可用。备instance的HA模块会根据HAID信息首先验证请求合法性。
+
+③别名机制
+
+对机器采用别名机制,HAID中包含的Instance信息采用自定义别名,后台维护别名映射队列。在与客户端交互时采用HAID,而与后台其它组件交互则采用ContextID,在实现具体操作时采用动态代理机制,将HAID转换为ContextID进行处理。
+
+### 2. 模块设计
+
+#### (1)模块图
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-02.png)
+
+#### (2)具体模块
+
+①ContextHAManager模块
+
+提供接口供CS Server调用生成CSID及HAID,并提供基于动态代理的别名转换接口;
+
+调用持久化模块接口持久化CSID信息;
+
+②AbstractContextHAManager模块
+
+ContextHAManager的抽象,可支持实现多种ContextHAManager;
+
+③InstanceAliasManager模块
+
+RPC模块提供Instance与别名转换接口,维护别名映射队列,并提供别名与CS Server实例的查询;提供验证主机是否有效的接口;
+
+④HAContextIDGenerator模块
+
+生成新的HAID,并且封装成客户端约定格式返回给客户端。HAID结构如下:
+
+\${第一个instance长度}\${第二个instance长度}{instance别名1}{instance别名2}{实际ID},实际ID定为ContextID Key;
+
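+按照上述 HAID 结构,下面给出一个编码/解析的示意实现(Scala)。其中两个长度字段假设各用2位十进制数字表示,实际编码格式请以源码为准:
+
+```scala
+// 示意:HAID = 两个别名长度(此处假设各用2位十进制表示) + 别名1 + 别名2 + ContextID Key
+object DemoHAIDCodec {
+  def encode(alias1: String, alias2: String, contextIdKey: String): String =
+    f"${alias1.length}%02d${alias2.length}%02d$alias1$alias2$contextIdKey"
+
+  def decode(haid: String): (String, String, String) = {
+    val len1 = haid.substring(0, 2).toInt
+    val len2 = haid.substring(2, 4).toInt
+    val alias1 = haid.substring(4, 4 + len1)
+    val alias2 = haid.substring(4 + len1, 4 + len1 + len2)
+    val contextIdKey = haid.substring(4 + len1 + len2)
+    (alias1, alias2, contextIdKey)   // (主instance别名, 备instance别名, 实际ContextID)
+  }
+}
+
+// 使用示意:
+//   DemoHAIDCodec.encode("i01", "i02", "8432")    // => "0303i01i028432"
+//   DemoHAIDCodec.decode("0303i01i028432")        // => ("i01", "i02", "8432")
+```
+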
+⑤ContextHAChecker模块
+
+提供HAID的校验接口。收到的每个请求会校验ID格式是否有效,以及当前主机是否为主Instance或备Instance:如果是主Instance,则校验通过;如果为备Instance,则验证主Instance是否失效,主Instance失效则验证通过。
+
+⑥BackupInstanceGenerator模块
+
+生成备用实例,附加在CSID信息里;
+
+⑦MultiTenantBackupInstanceGenerator接口
+
+(保留接口,暂不实现)
+
+### 3. UML类图
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-03.png)
+
+### 4. HA模块操作时序图
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-ha-04.png)
+
+第一次生成CSID:
+由客户端发出请求,Gateway转发到任一Server,HA模块生成HAID,包含主Instance和备instance及CSID,完成工作流与HAID的绑定。
+
+当客户端发送变更请求时,Gateway判定主Instance失效,则将请求转发到备Instance进行处理。备Instance上实例验证HAID有效后,加载Instance并处理请求。
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Listener.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Listener.md
new file mode 100644
index 0000000..74329c1
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Listener.md
@@ -0,0 +1,33 @@
+## **Listener架构**
+
+在DSS中,当某个节点更改了它的元数据信息后,则整个工作流的上下文信息就发生了改变,我们期望所有的节点都能感知到变化,并自动进行元数据更新。我们采用监听模式来实现,并使用心跳机制进行轮询,保持上下文信息的元数据一致性。
+
+### **客户端 注册自己、注册CSKey及更新CSKey过程**
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-listener-01.png)
+
+主要过程如下:
+
+1、注册操作:客户端client1、client2、client3、client4通过HTTP请求分别向csserver注册自己以及想要监听的CSKey,Service服务通过对外接口获取到callback引擎实例,注册客户端及其对应的CSKeys。
+
+2、更新操作:如ClientX节点更新了CSKey内容,Service服务则更新ContextCache缓存的CSKey,ContextCache将更新操作投递给ListenerBus,ListenerBus通知具体的listener进行消费(即ContextKeyCallbackEngine去更新Client对应的CSKeys),超时未消费的事件,会被自动移除。
+
+3、心跳机制:
+
+所有Client通过心跳信息探测ContextKeyCallbackEngine中CSKeys的值是否发生了变化。
+
+ContextKeyCallbackEngine通过心跳机制返回更新的CSKeys值给所有已注册的客户端。如果有客户端心跳超时,则移除该客户端。
+
+### **Listener UML类图**
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-02.png)
+
+接口:ListenerManager
+
+对外:提供ListenerBus,用于投递事件。
+
+对内:提供 callback引擎,进行事件的具体注册、访问、更新,及心跳处理等逻辑
+
+## **Listener callbackengine时序图**
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-03.png)
\ No newline at end of file
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Persistence.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Persistence.md
new file mode 100644
index 0000000..13fae2f
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Persistence.md
@@ -0,0 +1,8 @@
+## **CSPersistence架构**
+
+### Persistence UML图
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-persistence-01.png)
+
+
+Persistence模块主要定义了ContextService持久化相关操作。实体主要包含CSID、ContextKeyValue相关、CSResource相关、CSTable相关。
\ No newline at end of file
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Search.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Search.md
new file mode 100644
index 0000000..073cfd7
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Search.md
@@ -0,0 +1,127 @@
+## **CSSearch架构**
+### **总体架构**
+
+如下图所示:
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-01.png)
+
+1.  ContextSearch:查询入口,接受Map形式定义的查询条件,根据条件返回相应的结果。
+
+2.  构建模块:每个条件类型对应一个Parser,负责将Map形式的条件转换成Condition对象,具体通过调用ConditionBuilder的逻辑实现。具有复杂逻辑关系的Condition会通过ConditionOptimizer进行基于代价的算法优化查询方案。
+
+3.  执行模块:从Cache中,筛选出与条件匹配的结果。根据查询目标的不同,分为Ruler、Fetcher和Matcher三种执行模式,具体逻辑在后文描述。
+
+4.  评估模块:负责条件执行代价的计算和历史执行状况的统计。
+
+### **查询条件定义(ContextSearchCondition)**
+
+一个查询条件,规定了该如何从一个ContextKeyValue集合中,筛选出符合条件的那一部分。查询条件可以通过逻辑运算构成更加复杂的查询条件。
+
+1.  支持ContextType、ContextScope、KeyWord的匹配
+
+    1.  分别对应一个Condition类型
+
+    2.  在Cache中,这些都应该有相应的索引
+
+2.  支持对key的contains/regex匹配模式
+
+    1.  ContainsContextSearchCondition:包含某个字符串
+
+    2.  RegexContextSearchCondition:匹配某个正则表达式
+
+3.  支持or、and和not的逻辑运算
+
+    1.  一元运算UnaryContextSearchCondition:
+
+>   支持单个参数的逻辑运算,比如NotContextSearchCondition
+
+    2.  二元运算BinaryContextSearchCondition:
+
+>   支持两个参数的逻辑运算,分别定义为LeftCondition和RightCondition,比如OrContextSearchCondition和AndContextSearchCondition
+
+    3.  每个逻辑运算均对应一个上述子类的实现类
+
+    4.  该部分的UML类图如下:
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-02.png)
+
+### **查询条件的构建**
+
+1.  支持通过ContextSearchConditionBuilder构建:构建时,如果同时声明多项ContextType、ContextScope、KeyWord、contains/regex的匹配,自动以And逻辑运算连接
+
+2.  支持Condition之间进行逻辑运算,返回新的Condition:And,Or和Not(考虑condition1.or(condition2)的形式,要求Condition顶层接口定义逻辑运算方法),组合方式可参考下面的示意代码
+
+3.  支持通过每个底层实现类对应的ContextSearchParser从Map构建
+
+### **查询条件的执行**
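+下面是条件组合方式的一个简化 Scala 示意(类型均为演示假设,并非真实的 ContextSearchCondition 实现),展示 condition1.or(condition2) 风格的逻辑运算:
+
+```scala
+// 示意:查询条件及其逻辑组合(均为演示假设,非真实的 ContextSearchCondition 类)
+sealed trait DemoSearchCondition {
+  def and(other: DemoSearchCondition): DemoSearchCondition = And(this, other)
+  def or(other: DemoSearchCondition): DemoSearchCondition  = Or(this, other)
+  def unary_! : DemoSearchCondition = Not(this)
+}
+case class ContextTypeIs(contextType: String) extends DemoSearchCondition
+case class KeyContains(fragment: String)      extends DemoSearchCondition
+case class KeyRegex(pattern: String)          extends DemoSearchCondition
+case class And(l: DemoSearchCondition, r: DemoSearchCondition) extends DemoSearchCondition
+case class Or(l: DemoSearchCondition, r: DemoSearchCondition)  extends DemoSearchCondition
+case class Not(c: DemoSearchCondition)        extends DemoSearchCondition
+
+// 使用示意:同时声明多项匹配时默认以 And 连接,也可以显式调用 or/and 组合
+// val condition = ContextTypeIs("METADATA").and(KeyContains("table")).or(KeyRegex("tmp_.*"))
+```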
+
+1.  查询条件的三种作用方式:
+
+    1.  Ruler:从一个Array中筛选出符合条件的ContextKeyValue子Array
+
+    2.  Matcher:判断单个ContextKeyValue是否符合条件
+
+    3.  Fetcher:从ContextCache里筛选出符合条件的ContextKeyValue的Array
+
+2.  每个底层的Condition都有对应的Execution,负责维护相应的Ruler、Matcher、Fetcher。
+
+### **查询入口ContextSearch**
+
+提供search接口,接收Map作为参数,从Cache中筛选出对应的数据。
+
+1.  通过Parser,将Map形式的条件转换为Condition对象
+
+2.  通过Optimizer,获取代价信息,并根据代价信息确定查询的先后顺序
+
+3.  通过对应的Execution,执行相应的Ruler/Fetcher/Matcher逻辑后,得到搜索结果
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-03.png)
+
+### **查询优化**
+
+1.  OptimizedContextSearchCondition维护条件的Cost和Statistics信息:
+
+    1.  Cost信息:由CostCalculator负责判断某个Condition是否能够计算出Cost,如果可以计算,则返回对应的Cost对象
+
+    2.  Statistics信息:开始/结束/执行时间、输入行数、输出行数
+
+2.  实现一个CostContextSearchOptimizer,其optimize方法以Condition的代价为依据,对Condition进行调优,转换为一个OptimizedContextSearchCondition对象。具体逻辑描述如下:
+
+    1.  将一个复杂的Condition,根据逻辑运算的组合,拆解成一个树形结构,每个叶子节点均为一个最基本的简单Condition;每个非叶子节点均为一个逻辑运算。
+
+>   如下图所示的树A,就是一个由ABCDE这五个简单条件,通过各种逻辑运算组合成的一个复杂条件。
+
+![](./../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-04.png)
+<center>(树A)</center>
+
+2.  这些Condition的执行,事实上就是深度优先、从左到右遍历这个树。而且根据逻辑运算的交换规律,Condition树中一个节点的子节点的左右顺序可以互换,因此可以穷举出所有可能的执行顺序下的所有可能的树。
+
+>   如下图所示的树B,就是上述树A的另一个可能的顺序,与树A的执行结果完全一致,只是各部分的执行顺序有所调整。
+
+![](./../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-05.png)
+<center>(树B)</center>
+
+3.  针对每一个树,从叶子节点开始计算代价,归集到根节点,即为该树的最终代价,最终得出代价最小的那个树,作为最优执行顺序(代价归集的示意代码见本节末尾)。
+
+>   计算节点代价的规则如下:
+
+1.  针对叶子节点,每个节点有两个属性:代价(Cost)和权重(Weight)。Cost即为CostCalculator计算出的代价,Weight是根据节点执行先后顺序的不同赋予的,当前默认左边为1,右边为0.5,后续看如何调整(赋予权重的原因是,左边的条件在一些情况下已经可以直接决定整个组合逻辑的匹配与否,所以右边的条件并非所有情况下都要执行,实际开销就需要减少一定的比例)
+
+2.  针对非叶子节点,Cost=所有子节点的Cost×Weight的总和;Weight的赋予逻辑与叶子节点一致。
+
+>   以树A和树B为例子,分别计算出这两个树的代价,如下图所示,节点中的数字为Cost\|Weight,假设ABCDE这5个简单条件的Cost为10、100、50、10和100。由此可以得出,树B的代价小于树A,为更优方案。
+
+
+<center class="half">
+    <img src="./../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-06.png" width="300"> <img src="./../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-search-07.png" width="300">
+</center>
+
+1.  用CostCalculator衡量简单条件的Cost的思路:
+
+    1.  作用在索引上的条件:根据索引值的分布来确定代价。比如当条件A从Cache中get出来的Array长度是100,条件B为200,那么条件A的代价小于B。
+
+    2.  需要遍历的条件:
+
+        1.  根据条件本身匹配模式给出一个初始Cost:如Regex为100,Contains为10等(具体数值等实现时根据情况调整)
+
+        2.  根据历史查询的效率(如单位时间吞吐量),在初始Cost的基础上进行不断调整后,得到实时的Cost。
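+
+下面用一小段 Scala 代码示意上文“叶子代价×权重,向根节点归集”的计算方式,权重取值与文中一致(左1、右0.5),类名为演示假设:
+
+```scala
+// 示意:Condition 树的代价归集。叶子节点代价由 CostCalculator 给出;
+// 非叶子节点代价 = 所有子节点 (Cost × Weight) 之和,左子权重1,右子权重0.5
+sealed trait DemoCondition
+case class Leaf(name: String, cost: Double) extends DemoCondition
+case class Op(left: DemoCondition, right: DemoCondition) extends DemoCondition  // and/or 统一示意为二元节点
+
+object DemoCostEvaluator {
+  def cost(node: DemoCondition): Double = node match {
+    case Leaf(_, c)      => c
+    case Op(left, right) => cost(left) * 1.0 + cost(right) * 0.5
+  }
+}
+
+// 以文中 A(Cost=10)、B(Cost=100) 两个简单条件为例:
+//   cost(Op(Leaf("A", 10), Leaf("B", 100)))  // = 10*1.0 + 100*0.5 = 60
+//   cost(Op(Leaf("B", 100), Leaf("A", 10)))  // = 100*1.0 + 10*0.5 = 105,代价更大
+// 因此把代价更小的条件放在左边(先执行)是更优的执行顺序
+```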
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Service.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Service.md
new file mode 100644
index 0000000..7e66f9c
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/ContextService_Service.md
@@ -0,0 +1,55 @@
+## **ContextService架构**
+
+### **水平划分**
+
+从水平上划分为三个模块:Restful,Scheduler,Service
+
+#### Restful职责:
+
+    将请求封装为httpjob提交到Scheduler
+
+#### Scheduler职责:
+
+    通过httpjob的protocol的ServiceName找到相应的服务执行这个job
+
+#### Service职责:
+
+    真正执行请求逻辑的模块,封装ResponseProtocol,并唤醒Restful中wait的线程
+
+### **垂直划分**
+从垂直上划分为4个模块:Listener,History,ContextId,Context:
+
+#### Listener职责:
+
+1.  负责Client端的注册和绑定(写入数据库和在CallbackEngine中进行注册)
+
+2.  心跳接口,通过CallbackEngine返回Array[ListenerCallback]
+
+#### History职责:
+创建和移除history,操作Persistence进行DB持久化
+
+#### ContextId职责:
+主要是对接Persistence进行ContextId的创建,更新移除等操作
+
+#### Context职责:
+
+1.  对于移除,reset等方法,先操作Persistence进行DB持久化,并更新ContextCache
+
+2.  封装查询condition去ContextSearch模块获取相应的ContextKeyValue数据
+
+请求访问步骤如下图:
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-01.png)
+
+## **UML类图** 
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-02.png)
+
+## **Scheduler线程模型**
+
+需要保证Restful的线程池不被填满
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-03.png)
+
+时序图如下:
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-service-04.png)
+
+
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/README.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/README.md
new file mode 100644
index 0000000..fc64eb4
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/ContextService/README.md
@@ -0,0 +1,124 @@
+## **背景**
+
+### **什么是上下文Context?**
+
+保持某种操作继续进行的所有必需信息。如:同时看三本书,每本书已翻看的页码就是继续看这本书的上下文。
+
+### **为什么需要CS(Context Service)?**
+
+CS,用于解决一个数据应用开发流程,跨多个系统间的数据和信息共享问题。
+
+例如,B系统需要使用A系统产生的一份数据,通常的做法如下:
+
+1.  B系统调用A系统开发的数据访问接口;
+
+2.  B系统读取A系统写入某个共享存储的数据。
+
+有了CS之后,A和B系统只需要与CS交互,将需要共享的数据和信息写入到CS,需要读取的数据和信息从CS中读出即可,无需外部系统两两开发适配,极大降低了系统间信息共享的调用复杂度和耦合度,使各系统的边界更加清晰。
+
+## **产品范围**
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-01.png)
+
+
+### 元数据上下文
+
+元数据上下文定义元数据规范。
+
+元数据上下文依托于数据中间件,主要功能如下:
+
+1.  打通与数据中间件的关系,能拿到所有的用户元数据信息(包括Hive表元数据、线上库表元数据、其他NOSQL如HBase、Kafka等元数据)
+
+2.  所有节点需要访问元数据时,包括已有元数据和应用模板内元数据,都必须经过元数据上下文。元数据上下文记录了应用模板所使用的所有元数据信息。
+
+3.  各节点所产生的新元数据,都必须往元数据上下文注册。
+
+4.  抽出应用模板时,元数据上下文为应用模板抽象(主要是将用到的多个库表做成\${db}.表形式,避免数据权限问题)和打包所有依赖的元数据信息。
+
+元数据上下文是交互式工作流的基础,也是应用模板的基础。设想:Widget定义时,如何知道DataWrangler定义的各指标维度?Qualitis如何校验Widget产生的图报表?
+
+### 数据上下文
+
+数据上下文定义数据规范。
+
+数据上下文依赖于数据中间件和Linkis计算中间件。主要功能如下:
+
+1.  打通数据中间件,拿到所有用户数据信息。
+
+2.  打通计算中间件,拿到所有节点的数据存储信息。
+
+3.  所有节点需要写临时结果时,必须通过数据上下文,由数据上下文统一分配。
+
+4.  所有节点需要访问数据时,必须通过数据上下文。
+
+5.  数据上下文会区分依赖数据和生成数据,在抽出应用模板时,为应用模板抽象和打包所有依赖的数据。
+
+### 资源上下文
+
+资源上下文定义资源规范。
+
+资源上下文主要与Linkis计算中间件交互。主要功能如下:
+
+1.  用户资源文件(如Jar、Zip文件、properties文件等)
+
+2.  用户UDF
+
+3.  用户算法包
+
+4.  用户脚本
+
+### 环境上下文
+
+环境上下文定义环境规范。
+
+主要功能如下:
+
+1.  操作系统
+
+2.  软件,如Hadoop、Spark等
+
+3.  软件包依赖,如Mysql-JDBC。
+
+### 对象上下文
+
+运行时上下文为应用模板(工作流)在定义和执行时,所保留的所有上下文信息。
+
+它用于协助定义工作流/应用模板,在工作流/应用模板执行时提示和完善所有必要信息。
+
+运行时工作流主要是Linkis使用。
+
+
+## **CS架构图**
+
+![](../../../Images/Architecture/Public_Enhancement_Service/ContextService/linkis-contextservice-02.png)
+
+## **架构说明:**
+
+### 1.  Client
+外部访问CS的入口,Client模块提供HA功能;
+[进入Client架构设计](ContextService_Client.md)
+
+### 2.  Service模块
+提供Restful接口,封装和处理客户端提交的CS请求;
+[进入Service架构设计](ContextService_Service.md)
+
+### 3.  ContextSearch
+上下文查询模块,提供丰富和强大的查询能力,供客户端查找上下文的Key-Value键值对;
+[进入ContextSearch架构设计](ContextService_Search.md)
+
+### 4.  Listener
+CS的监听器模块,提供同步和异步的事件消费能力,具备类似Zookeeper的Key-Value一旦更新,实时通知Client的能力;
+[进入Listener架构设计](ContextService_Listener.md)
+
+### 5.  ContextCache
+上下文的内存缓存模块,提供快速检索上下文的能力和对JVM内存使用的监听和清理能力;
+[进入ContextCache架构设计](ContextService_Cache.md)
+
+### 6.  HighAvailable
+提供CS高可用能力;
+[进入HighAvailable架构设计](ContextService_HighAvailable.md)
+
+### 7.  Persistence
+CS的持久化功能;
+[进入Persistence架构设计](ContextService_Persistence.md)
+
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/DataSource.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/DataSource.md
new file mode 100644
index 0000000..53b4740
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/DataSource.md
@@ -0,0 +1 @@
+待上传
\ No newline at end of file
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/PublicService.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/PublicService.md
new file mode 100644
index 0000000..71dc115
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/PublicService.md
@@ -0,0 +1,31 @@
+
+## **背景**
+
+PublicService公共服务是由configuration、jobhistory、udf、variable等多个子模块组成的综合性服务。Linkis 1.0在0.9版本的基础上还新增了标签管理。在用户执行不同作业的过程中,并不是每次执行都需要重新设置一遍参数:很多可以复用的变量、函数、配置在用户完成一次设置后即可被复用,还可以共享给别的用户使用。
+
+## **架构图**
+
+![](../../Images/Architecture/linkis-publicService-01.png)
+
+## **架构说明**
+
+1. linkis-configuration:对外提供了全局设置和通用设置的查询和保存操作,特别是引擎配置参数
+
+2. linkis-jobhistory:专门用于历史执行任务的存储和查询,用户可以通过jobhistory提供的接口获取历史任务的执行情况,包括日志、状态、执行内容等。同时历史任务还支持分页查询操作,管理员可以查看所有的历史任务,普通用户只能查看自己的历史任务。
+3. Linkis-udf:提供linkis的用户函数管理功能,具体可分为共享函数、个人函数、系统函数,以及函数使用的引擎,用户勾选后会在引擎启动的时候被自动加载。供用户在代码中直接引用和不同的脚本间进行函数复用。
+
+4. Linkis-variable:提供linkis的全局变量管理能力,存储用户定义的全局变量,查询用户定义的全局变量。
+
+5. linkis-instance-label:提供了label server和label client两个模块,为Engine和EM打标签,提供基于节点的标签增删改查能力。主要功能如下:
+
+-   为一些特定的标签,提供资源管理能力,协助RM在资源管理层面更加精细化
+
+-   为用户提供标签能力。为一些用户打上标签,这样在引擎申请时,会自动加上这些标签判断
+
+-   提供标签解析模块,能将用户的请求,解析成一堆标签。
+
+-   具备节点标签管理的能力,主要用于提供节点的标签CRUD能力,还有标签资源管理用于管理某些标签的资源,标记一个Label的最大资源、最小资源和已使用资源。
+
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/README.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/README.md
new file mode 100644
index 0000000..a980e5b
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/Public_Enhancement_Services/README.md
@@ -0,0 +1,91 @@
+PublicEnhancementService(PS)架构设计
+======================================
+
+PublicEnhancementService(PS):公共增强服务,为其他微服务模块提供统一配置管理、上下文服务、物理库、数据源管理、微服务管理和历史任务查询等功能的模块。
+
+![](../../Images/Architecture/PublicEnhencement架构图.png)
+
+二级模块介绍:
+==============
+
+BML物料库
+---------
+
+是linkis的物料管理系统,主要用来存储用户的各种文件数据,包括用户脚本、资源文件、第三方Jar包等,也可以存储引擎运行时需要使用到的类库。
+
+| 核心类          | 核心功能                           |
+|-----------------|------------------------------------|
+| UploadService   | 提供资源上传服务                   |
+| DownloadService | 提供资源下载服务                   |
+| ResourceManager | 提供了上传、下载资源的统一管理入口 |
+| VersionManager  | 提供了资源版本标记和版本管理功能   |
+| ProjectManager  | 提供了项目级的资源管控能力         |
+
+Configuration统一配置管理
+-------------------------
+
+Configuration提供了“用户—引擎—应用”三级配置管理方案,实现了为用户提供配置各种接入应用下自定义引擎参数的功能。
+
+| 核心类               | 核心功能                       |
+|----------------------|--------------------------------|
+| CategoryService      | 提供了应用和引擎目录的管理服务 |
+| ConfigurationService | 提供了用户配置统一管理服务     |
+
+ContextService上下文服务
+------------------------
+
+ContextService用于解决一个数据应用开发流程,跨多个系统间的数据和信息共享问题。
+
+| 核心类              | 核心功能                                 |
+|---------------------|------------------------------------------|
+| ContextCacheService | 提供对上下文信息缓存服务                 |
+| ContextClient       | 提供其他微服务和CSServer组进行交互的能力 |
+| ContextHAManager    | 为ContextService提供高可用能力           |
+| ListenerManager     | 提供消息总线的能力                       |
+| ContextSearch       | 提供了查询入口                           |
+| ContextService      | 实现了上下文服务总体的执行逻辑           |
+
+Datasource数据源管理
+--------------------
+
+Datasource为其他微服务提供不同数据源连接的能力。
+
+| 核心类            | 核心功能                 |
+|-------------------|--------------------------|
+| datasource-server | 提供不同数据源连接的能力 |
+
+InstanceLabel微服务管理
+-----------------------
+
+InstanceLabel为其他接入linkis的微服务提供注册和标签功能。
+
+| 核心类          | 核心功能                       |
+|-----------------|--------------------------------|
+| InsLabelService | 提供微服务注册和标签管理的功能 |
+
+Jobhistory历史任务管理
+----------------------
+
+Jobhistory为用户提供了linkis历史任务查询、进度、日志展示的相关功能,为管理员提供统一历史任务视图。
+
+| 核心类                 | 核心功能             |
+|------------------------|----------------------|
+| JobHistoryQueryService | 提供历史任务查询服务 |
+
+Variable用户自定义变量管理
+--------------------------
+
+Variable为用户提供自定义变量存储和使用的相关功能。
+
+| 核心类          | 核心功能                           |
+|-----------------|------------------------------------|
+| VariableService | 提供自定义变量存储和使用的相关功能 |
+
+UDF用户自定义函数管理
+---------------------
+
+UDF为用户提供自定义函数的功能,用户可以在编写代码时自行引入。
+
+| 核心类     | 核心功能               |
+|------------|------------------------|
+| UDFService | 提供用户自定义函数服务 |
diff --git a/Linkis-Doc-master/zh_CN/Architecture_Documents/README.md b/Linkis-Doc-master/zh_CN/Architecture_Documents/README.md
new file mode 100644
index 0000000..b28cec0
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Architecture_Documents/README.md
@@ -0,0 +1,24 @@
+## 1. 文档结构
+Linkis 1.0 将所有微服务总体划分为三大类:公共增强服务、计算治理服务、微服务治理服务。如下图所示为Linkis 1.0 的架构图。
+
+![Linkis1.0架构图](./../Images/Architecture/Linkis1.0-architecture.png)
+
+
+各大类的具体职责如下:
+
+1. 公共增强服务为 Linkis 0.X 已经提供的物料库服务、上下文服务、数据源服务和公共服务等;
+    
+2. 微服务治理服务为 Linkis 0.X 已经提供的 Spring Cloud Gateway、Eureka 和 Open Feign,同时 Linkis1.0 还会提供对 Nacos 的支持;
+    
+3. 计算治理服务是 Linkis 1.0 的核心重点,从 提交 —> 准备 —> 执行三个阶段,来全面升级 Linkis 对 用户任务的执行管控能力。
+
+以下是 Linkis1.0 架构文档的目录列表:
+
+1. Linkis1.0在架构上的特点,请阅读[Linkis1.0与Linkis0.x的区别](Linkis1.0与Linkis0.X的区别简述.md)。
+
+2. Linkis1.0公共增强服务相关文档,请阅读[公共增强服务](Public_Enhancement_Services/README.md)。
+
+3. Linkis1.0微服务治理相关文档,请阅读[微服务治理](Microservice_Governance_Services/README.md)。
+
+4. Linkis1.0提出的计算治理服务相关文档,请阅读 [计算治理服务](Computation_Governance_Services/README.md)。
+
diff --git a/Linkis-Doc-master/zh_CN/Deployment_Documents/Cluster_Deployment.md b/Linkis-Doc-master/zh_CN/Deployment_Documents/Cluster_Deployment.md
new file mode 100644
index 0000000..c863777
--- /dev/null
+++ b/Linkis-Doc-master/zh_CN/Deployment_Documents/Cluster_Deployment.md
@@ -0,0 +1,100 @@
+分布式部署方案介绍
+==================
+
+Linkis的单机部署方式简单,但是不能用于生产环境,因为过多的进程在同一个服务器上会让服务器压力过大。 部署方案的选择,和公司的用户规模、用户使用习惯、集群同时使用人数都有关,一般来说,我们会以使用Linkis的同时使用人数和用户对执行引擎的偏好来做依据进行部署方式的选择。
+
+1、多节点部署方式参考
+---------------------
+
+Linkis1.0仍然保持着基于SpringCloud的微服务架构,其中每个微服务都支持多活的部署方案,当然不同的微服务在系统中承担的角色不一样,有的微服务调用频率很高,更可能会处于高负荷的情况,**在安装EngineConnManager的机器上,由于会启动用户的引擎进程,机器的内存负载会比较高,其他类型的微服务对机器的负载则相对不会很高,**对于这类微服务我们建议启动多个进行分布式部署,Linkis动态使用的总资源可以按照如下方式计算。
+
+**EngineConnManager**使用总资源 = 总内存 + 总核数 =
+
+**同时在线人数 \* (所有类型的引擎占用内存) \* 单用户最高并发数 + 同时在线人数 \* (所有类型的引擎占用核数) \* 单用户最高并发数**
+
+例如只使用spark、hive、python引擎且单用户最高并发数为1的情况下,同时使用人数50人,Spark的Driver内存1G,Hive Client内存1G,python client 1G,每个引擎都使用1个核,那么就是 50 \*(1+1+1)G \* 1 + 50 \*(1+1+1)核 \* 1 = 150G 内存 + 150 CPU核数。
+
+分布式部署时微服务本身占用的内存可以按照每个2G计算,对于使用人数较多的情况下建议调大ps-publicservice的内存至6G,同时建议预留10G内存作为buffer。
+
+以下配置假设**每个用户同时启动两个引擎为例**,**对于64G内存的机器**,参考配置如下:
+
+-   同时在线人数10-50
+
+>   **服务器配置推荐**4台服务器,分别命名为S1,S2,S3,S4
+
+| Service              | Host name | Remark           |
+|----------------------|-----------|------------------|
+| cg-engineconnmanager | S1、S2    | 每台机器单独部署 |
+| Other services       | S3、S4    | Eureka高可用部署 |
+
+-   同时在线人数50-100
+
... 8746 lines suppressed ...

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@linkis.apache.org
For additional commands, e-mail: commits-help@linkis.apache.org